MaxDiff was invented by Jordan Louviere in 1987 while on the faculty at the University of Alberta (Flynn and Marley, 2012, "Best Worst Scaling: Theory and Methods"). With MaxDiff, we show respondents a subset (of at least three) of the possible items in the study and ask them to indicate the best and worst items within that subset (or the most and least important, etc.).
Below is an example involving a set of four items.
Example MaxDiff Task:
MaxDiff may be thought of as a more sophisticated extension of the Method of Paired Comparisons (MPC). Consider a set in which a respondent evaluates four items: A, B, C, and D. If the respondent says that A is best and D is worst, these two responses ("clicks") inform us about five of the six possible implied paired comparisons:
A>B, A>C, A>D, B>D, C>D
where ">" means "is more important/preferred than."
The only paired comparison that we cannot infer is B vs. C. This is acceptable, since other MaxDiff questions will inform us about the relationship between B and C. In a choice among five items, MaxDiff questioning informs us about seven of the ten implied paired comparisons.
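The counting logic above is easy to verify mechanically. The short sketch below (function name and item labels are illustrative, not part of any particular software) enumerates the paired comparisons implied by a single best/worst choice:

```python
from itertools import combinations

def implied_pairs(items, best, worst):
    """Return the paired comparisons (winner, loser) implied by one
    best/worst choice: the best item beats every other shown item,
    and every other shown item beats the worst item."""
    pairs = set()
    for item in items:
        if item != best:
            pairs.add((best, item))   # best > item
        if item not in (best, worst):
            pairs.add((item, worst))  # item > worst
    return pairs

items = ["A", "B", "C", "D"]
pairs = implied_pairs(items, best="A", worst="D")
total = len(list(combinations(items, 2)))
print(sorted(pairs))  # [('A','B'), ('A','C'), ('A','D'), ('B','D'), ('C','D')]
print(len(pairs), "of", total)  # 5 of 6; B vs. C remains unknown
```

Running the same function on a five-item set confirms the seven-of-ten figure: the best item yields four comparisons and the worst item yields three more.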
MaxDiff questionnaires are relatively easy for most respondents to understand. Furthermore, humans are better at judging items at the extremes than at discriminating among items of middling importance or preference. And because respondents make choices rather than expressing strength of preference on a rating scale, there is no opportunity for scale use bias. This is an extremely valuable property for cross-cultural research studies.
Technical Note: Although we use the terms MaxDiff and Best-Worst interchangeably, academics (including the inventor, Jordan Louviere) have recently begun drawing a distinction between them. The distinction depends on the route used to code the data for estimating the scores. The method we use to code the data is the Best-Worst approach; thus, this software would more formally be described as providing Best-Worst rather than MaxDiff analysis. Modeling the responses as sequential best-worst (two separate choices) allows the researcher to more easily investigate differences in preference scores and response error between best and worst judgments.
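To make the coding distinction concrete, here is a minimal sketch of the sequential best-worst idea. It codes one task as two stacked choice sets for multinomial-logit estimation: the best choice among all shown items, then the worst choice among the remaining items with the design rows negated, so that the same utilities explain both picks. The dummy coding, function name, and data layout are illustrative assumptions, not this software's actual internal format.

```python
import numpy as np

def code_best_worst_task(item_ids, n_items, best, worst):
    """Code one MaxDiff task as two sequential choice observations.

    Observation 1 ("best"): all shown items, dummy-coded as-is.
    Observation 2 ("worst"): the best item is removed and the remaining
    rows are negated, so a low-utility item becomes the likely choice.
    Returns (design_matrix, chosen_row_index) for each observation.
    """
    def one_hot(i):
        row = np.zeros(n_items)
        row[i] = 1.0
        return row

    # Best choice: every shown item is an alternative
    best_X = np.array([one_hot(i) for i in item_ids])
    best_y = item_ids.index(best)

    # Worst choice (sequential): best item dropped, signs flipped
    remaining = [i for i in item_ids if i != best]
    worst_X = np.array([-one_hot(i) for i in remaining])
    worst_y = remaining.index(worst)

    return (best_X, best_y), (worst_X, worst_y)

# A task showing items 0, 1, 2, 3 out of an 8-item study;
# respondent picks item 0 as best and item 3 as worst.
(best_X, best_y), (worst_X, worst_y) = code_best_worst_task(
    [0, 1, 2, 3], n_items=8, best=0, worst=3)
print(best_X.shape, worst_X.shape)  # (4, 8) (3, 8)
```

Because the two observations share one set of item utilities (up to sign), fitting them jointly — or separately — is what lets the researcher compare scores and response error between best and worst judgments.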