Introduction
In a MaxDiff exercise, respondents evaluate small groups of items (typically 3–6 at a time), selecting the “best” and “worst” option in each group. These choices reveal relative preferences among the items shown, all drawn from a list defined by the researcher.
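For illustration, the sketch below shows one way a single MaxDiff task and its best/worst response could be represented. The data structure and flavor names are hypothetical and are not the software's internal format.

```python
from dataclasses import dataclass

@dataclass
class MaxDiffTask:
    shown: list   # the 3-6 items displayed together in this task
    best: str     # item the respondent picked as "best"
    worst: str    # item the respondent picked as "worst"

task = MaxDiffTask(
    shown=["Chocolate", "Vanilla", "Strawberry", "Mint", "Coffee"],
    best="Chocolate",
    worst="Mint",
)

# The response only tells us Chocolate ranks above the other shown items and
# Mint ranks below them; it says nothing about whether the respondent would
# actually buy any of these flavors.
assert task.best in task.shown and task.worst in task.shown
```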
However, relative judgments alone can be misleading. For example, a respondent who doesn’t eat sugar may still select a “best” and “worst” ice cream flavor, even though they would never purchase any of them. Their choices may resemble those of an enthusiastic ice-cream eater, despite very different real-world behavior.
MaxDiff anchoring addresses this limitation by introducing an absolute reference point. Anchoring makes it possible to tell whether items are genuinely important or unimportant (or whether respondents would actually buy or consider them), rather than only how the items compare with one another.
Anchoring works by appending an additional grid question to the end of the MaxDiff exercise. This grid includes a subset of items (typically around seven), spanning from each respondent’s most to least favored options based on their earlier MaxDiff choices. Respondents then indicate which of these items they would actually buy or consider important.
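As an illustration of how such a subset might be chosen, the sketch below assumes each respondent already has a preliminary per-item score (for example, best-minus-worst counts from their MaxDiff choices) and picks items evenly spaced from most to least favored. The function name and the scoring rule are assumptions for illustration, not the exact procedure used by the software.

```python
def select_anchoring_items(scores: dict, n: int = 7) -> list:
    """Pick n items spanning the respondent's most to least favored options."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # best to worst
    if len(ranked) <= n:
        return ranked
    # Evenly spaced positions from the top-ranked to the bottom-ranked item.
    step = (len(ranked) - 1) / (n - 1)
    positions = {round(i * step) for i in range(n)}
    return [ranked[p] for p in sorted(positions)]

scores = {"Chocolate": 4, "Vanilla": 2, "Coffee": 1, "Strawberry": 0,
          "Mint": -1, "Pistachio": -2, "Liquorice": -4, "Bubblegum": -5}
print(select_anchoring_items(scores, n=7))
```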
This implementation of Anchored MaxDiff, known as the Direct Binary Approach, was proposed by Kevin Lattery and has been presented at several Sawtooth Software conferences.
Results are displayed with a utility boundary line separating important and unimportant items. Items above the line have positive utility and are considered important (e.g., buy), while items below the line have negative utility and are considered unimportant (e.g., not buy).
Responses to the anchoring question, together with the MaxDiff exercise results, are used during analysis to estimate the anchor line (utility boundary).
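As a simplified illustration of how the boundary is read, the sketch below assumes the anchored analysis has already produced utilities scaled so that the anchor sits at zero, and classifies each item by the sign of its utility. The utility values are made up for the example.

```python
# Hypothetical anchored utilities, scaled so the anchor (utility boundary) is 0.
utilities = {"Chocolate": 1.8, "Vanilla": 0.6, "Strawberry": 0.1,
             "Mint": -0.4, "Liquorice": -2.1}

# Items above the line (positive utility) are important, e.g. "buy";
# items below the line (negative utility) are unimportant, e.g. "not buy".
for item, u in sorted(utilities.items(), key=lambda kv: kv[1], reverse=True):
    label = "important (e.g. buy)" if u > 0 else "unimportant (e.g. not buy)"
    print(f"{item:12s} {u:+.2f}  {label}")
```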
When MaxDiff anchoring is enabled, the exercise is analyzed using anchored MaxDiff methods rather than standard MaxDiff analysis.