Imagine you are a decision maker for a nationwide manufacturer of decorative light fixtures, and you have to decide on a subset from hundreds of options to manufacture and distribute to your clients across the country. What color and style do consumers prefer? At which price? Are there differences in preference across geographical regions? Remember: there are hundreds of options. This is exactly the problem that our client had when they approached us.
After a year of using online DIY tools without much success, the client decided they needed a more sophisticated and reliable tool and team to address their business objectives, so they approached me at MDRG. We decided to use a MaxDiff design to prioritize the 119 light fixture designs.
As you probably know, MaxDiff is a technique used to prioritize a list of items that would be too long for the average person to rank accurately in one pass. Each respondent evaluated sets of five light fixtures shown with prices. In each set, they were asked to choose the fixture they would be most interested and least interested in purchasing at the given prices. Respondents could enlarge the images to view each fixture clearly.
MaxDiff poses several advantages over other ranking and rating methods:
- The task eliminates the lack of discrimination common in rating scales (e.g., most respondents select a 4 or 5)
- MaxDiff is easier on respondents, asking them only to select the best and worst items from small subsets rather than rank the entire list
- MaxDiff doesn't just produce a prioritized list; it also produces comparable scores that indicate the relative strength of one item versus another (e.g., an item scoring 160 was twice as preferred as one scoring 80)
In a traditional MaxDiff, each respondent should see each item 2-3 times so the model can get an accurate read on that respondent's preferences. With 119 items and 5 items per set, however, respondents would have to evaluate at least 48 sets of images just to see every item twice. That would be an almost impossible task for any person!
We decided to use a Sparse MaxDiff design, in which every respondent views each item exactly once. Each respondent therefore evaluated 24 sets of 5 images, selecting the fixtures that made them most and least interested in purchasing at the given prices. This reduces respondents' cognitive load and makes for a much easier and more accurate exercise.
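The arithmetic behind these design sizes is straightforward. A minimal sketch (the item and set counts come from the study described above):

```python
import math

n_items = 119      # light fixture designs tested
items_per_set = 5  # fixtures shown per MaxDiff screen

# Traditional MaxDiff: each respondent sees every item 2-3 times.
sets_for_2_views = math.ceil(2 * n_items / items_per_set)  # 48 sets
sets_for_3_views = math.ceil(3 * n_items / items_per_set)  # 72 sets

# Sparse MaxDiff: each respondent sees every item exactly once.
sparse_sets = math.ceil(n_items / items_per_set)           # 24 sets
```

Cutting from 48+ screens to 24 is what made the exercise feasible for a single respondent.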
We then used an aggregate Logit model to determine the utility score for each light fixture. Logit models are ideal for studies with so many items that respondents cannot evaluate enough of them to support individual-level estimation via Hierarchical Bayes (HB). A utility score is a measure of worth or preference relative to the other items tested. The utility scores are then converted to likelihoods of being selected as "best" by exponentiating the scores and normalizing the results to sum to 1.0. MDRG typically takes it a step further and indexes the scores so that the average score is 100. (Light fixtures with scores above 100 have above-average interest: a score of 120 is 20% greater than average. Light fixtures with scores below 100 have below-average interest: a score of 80 is 20% less than average.)
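The exponentiate-normalize-index conversion can be sketched in a few lines of Python. The function name and the utility values are illustrative, not the client's actual results:

```python
import math

def indexed_scores(utilities):
    """Convert logit utilities to choice shares, then index so the mean score is 100.

    Exponentiating and normalizing gives each item's share of "best" picks
    (shares sum to 1.0); multiplying by n * 100 rescales the shares so the
    average item scores exactly 100.
    """
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    shares = [e / total for e in exp_u]
    n = len(shares)
    return [share * n * 100 for share in shares]

# Illustrative utilities for three hypothetical fixtures
scores = indexed_scores([0.5, 0.0, -0.5])
```

With this indexing, a fixture scoring 120 drew 20% more interest than the average fixture tested.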
Don’t forget: the client is a nationwide manufacturer of light fixtures, and consumers in the South prefer different designs than consumers in the West, for example. The client needed separate prioritized lists for each geographic region (North, South, East, and West) to ensure they are stocking the most appealing designs in each region's stores. We ran a separate Logit model for each region, which produced a unique prioritized list for each one.
Now, armed with this information, the client can feel confident that they are manufacturing and distributing the most appealing fixtures in each region. The research also gives them more leverage in meetings with retailers, since they have evidence directly from consumers as to the light fixtures most likely to be purchased.
We are also able to run TURF (Total Unduplicated Reach and Frequency) analysis to determine the optimal set of fixtures that will appeal to the largest group of consumers, maximizing sales and revenue.
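For readers curious how TURF works mechanically, here is a minimal, hypothetical sketch (not MDRG's production approach): each respondent is represented by the set of fixtures they would consider buying, and we exhaustively search for the portfolio whose combined reach is largest.

```python
from itertools import combinations

def reach(respondent_picks, portfolio):
    """Fraction of respondents who would buy at least one item in the portfolio."""
    portfolio = set(portfolio)
    hit = sum(1 for picks in respondent_picks if picks & portfolio)
    return hit / len(respondent_picks)

def best_portfolio(respondent_picks, items, size):
    """Exhaustive search over all portfolios of the given size for maximum reach."""
    return max(combinations(items, size),
               key=lambda combo: reach(respondent_picks, combo))

# Toy data: each set holds the fixture IDs a respondent would consider buying
picks = [{1, 2}, {2, 3}, {4}]
winner = best_portfolio(picks, items=[1, 2, 3, 4], size=2)
```

Exhaustive search is fine for small portfolios; with 119 items and larger portfolio sizes, TURF software typically switches to greedy or heuristic search.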
We have performed many MaxDiff studies this year and have been able to optimize the process by quickly and easily using prior studies as templates, altering the design settings to fit the number of images. The client has seen a tremendous lift in sales since they began utilizing the data from these MaxDiff studies. Sawtooth Software's top-of-the-line software and flexibility allow us to provide this value.
Author Bio: Scott Mayer is a Quantitative Analyst at MDRG, a full-service market research firm that blends the non-conscious and conscious mind for truer understanding. Scott worked as a data analyst in finance and government before joining MDRG as a quantitative researcher, where he specializes in Telecom, Healthcare, and CPG.