The author (Orme) presents results from two studies testing a new procedure called Adaptive MaxDiff Scaling (A-MaxDiff). Rather than focusing equal attention on estimating respondents' preferences (or importances) for both best AND worst items, A-MaxDiff concentrates on estimating the best/most important items with greater precision. The interview adapts to each respondent, learning from prior responses: items marked "worst" are discarded from further consideration. The questionnaire proceeds in stages. In the first stage, K items are shown per set; in each subsequent stage, one fewer item is shown per set (K-1, then K-2, and so on), until the respondent is doing paired comparisons among the surviving (most preferred) items. Later tasks therefore reflect increased utility balance.
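The staged elimination logic can be sketched as a small simulation. This is illustrative only, not Orme's implementation: the function name, the item names, and the idealized error-free respondent (who always marks the lowest-utility item "worst") are assumptions for demonstration, and a real interview would also record "best" choices for utility estimation.

```python
import random

def simulate_adaptive_maxdiff(utilities, k=5, seed=0):
    """Illustrative simulation of the staged A-MaxDiff elimination.

    utilities: dict mapping item name -> hypothetical true preference score.
    Each stage partitions the surviving items into sets of the current size;
    the respondent marks one item per set "worst" (modeled here, without
    response error, as the lowest-utility item), and that item is discarded.
    Set sizes shrink by one per stage until only paired comparisons remain.
    """
    rng = random.Random(seed)
    survivors = list(utilities)
    set_size = k
    while len(survivors) > 1:
        rng.shuffle(survivors)               # randomize set composition
        next_stage = []
        for i in range(0, len(survivors), set_size):
            subset = survivors[i:i + set_size]
            if len(subset) < 2:              # leftover item passes through
                next_stage.extend(subset)
                continue
            worst = min(subset, key=utilities.get)
            next_stage.extend(x for x in subset if x != worst)
        survivors = next_stage
        set_size = max(2, set_size - 1)      # K, K-1, ..., down to pairs
    return survivors[0]                      # the estimated "best" item

# The top-utility item is never the worst in any set, so it always survives.
utils = {f"item{i:02d}": i for i in range(1, 13)}
print(simulate_adaptive_maxdiff(utils, k=5))  # -> item12
```

Because discarding only the "worst" item per set concentrates later, more utility-balanced tasks on the preferred items, precision accrues to the top of the list, which is consistent with the improved "best" hit rates reported below.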
The results show better hit rates for "best" items in holdouts relative to standard MaxDiff. Average population parameters are essentially identical between standard and adaptive forms of MaxDiff. Respondents take slightly less time to complete the adaptive survey, and they perceive it to be more enjoyable and less monotonous than standard MaxDiff. Orme argues that A-MaxDiff should be especially preferred when simulation methods such as TURF are used with MaxDiff data. The main drawback is decreased precision of estimates for "worst" items.