When designing a MaxDiff survey you need to determine how many items each respondent will evaluate, how many items are displayed in each question, and how many questions each respondent will see. Lighthouse Studio will suggest these values, but you can use this calculator to experiment with alternatives. Adjust any of the input boxes and see the results in the "Number of Sets per Respondent" section below.
Number of Items (K):
Number of Items per Set (k):
This field determines how many items appear in each question. We recommend about five or fewer. You should not show more than half of the total number of items in each set.
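These two rules of thumb are easy to encode. The following Python sketch is a hypothetical illustration, not Lighthouse Studio's actual validation logic; the function name and warning messages are our own:

```python
def items_per_set_warnings(total_items: int, items_per_set: int) -> list[str]:
    """Check a proposed items-per-set value (k) against the two rules
    of thumb: k should be about 5 or fewer, and no more than K/2."""
    warnings = []
    if items_per_set > 5:
        warnings.append("Showing more than about five items per set adds "
                        "little precision and risks respondent fatigue.")
    if items_per_set > total_items / 2:
        warnings.append("Showing more than half of all items in one set can "
                        "reduce precision for items of middle importance.")
    return warnings
```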
Number of Seconds per Set:
This field sets the average number of seconds a respondent spends on each set; the default is 20 seconds. It is used to estimate the time needed to complete the exercise.
Number of Sets per Respondent
We recommend asking enough questions that each item appears three to five times for each respondent. The suggested number of questions is therefore between 3K/k and 5K/k, where K is the total number of items and k is the number of items shown per set. Below are the numbers of sets for the three options, along with the approximate time to complete the entire MaxDiff exercise.
- 3 times per respondent (3K/k): 12 sets (approximately 4 minutes)
- 4 times per respondent (4K/k): 16 sets (approximately 5 minutes)
- 5 times per respondent (5K/k): 20 sets (approximately 7 minutes)
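The figures above are consistent with inputs of K = 20 items, k = 5 items per set, and 20 seconds per set. Assuming those values, a minimal Python sketch of the arithmetic (not the calculator's actual code) could look like this:

```python
import math

def sets_per_respondent(K: int, k: int, exposures: int) -> int:
    """Sets needed so each of the K items appears `exposures` times
    when k items are shown per set (rounded up)."""
    return math.ceil(exposures * K / k)

def approx_minutes(num_sets: int, seconds_per_set: float) -> int:
    """Approximate completion time, rounded to whole minutes."""
    return round(num_sets * seconds_per_set / 60)

# Reproduce the three options above, assuming K=20, k=5, 20 s per set:
for exposures in (3, 4, 5):
    n = sets_per_respondent(20, 5, exposures)
    print(f"{exposures}K/k: {n} sets (~{approx_minutes(n, 20)} min)")
# Output: 3K/k: 12 sets (~4 min), 4K/k: 16 sets (~5 min), 5K/k: 20 sets (~7 min)
```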
Explanation
In 2005, Bryan Orme, the president of Sawtooth Software, conducted research on MaxDiff surveys using synthetic data. The results suggest that asking respondents to evaluate more than about five items per set may not be very useful: the gains in precision of the estimates are minimal, at least for studies involving up to about 30 total items. Orme speculated that the small gains from showing even more items may be offset by respondent fatigue or confusion.
Another finding from Orme's research is that it is counterproductive to show more than half as many items within each set as are in the study. Doing so can actually decrease precision of the estimates. Orme provided this explanation: "To explain this result, consider a MaxDiff study of 10 items where we display all 10 items in each task. For each respondent, we'd certainly learn which item was best and which was worst, but we'd learn little else about the items of middle importance for each individual. Thus, increasing the number of items per set eventually results in lower precision for items of middle importance or preference. This leads to the suggestion that one include no more than about half as many items per task as being studied."
Orme's simulation study also included an internal validation measure using holdouts, leading to a suggestion about how many sets to show each respondent in MaxDiff studies. He stated, "The data also suggest that displaying each item three or more times per respondent works well for obtaining reasonably precise individual-level estimates with HB. Asking more tasks, such that the number of exposures per item is increased well beyond three, seems to offer significant benefit, provided respondents don't become fatigued and provide data of reduced quality."
Source: MaxDiff Technical Paper (2013)