Sometimes, you’ve got more attributes than you know what to do with. How can you design your conjoint successfully when you find yourself in this situation? This article assumes that you have basic familiarity with conjoint studies and understand how to define attributes and levels. It also assumes that you have done the upfront work of any conjoint study: clearly understanding your objectives and conducting preliminary research – whether formal qualitative work, an internal survey, or discussions with stakeholder experts – to make sure the inputs of your study (the attributes and levels) will give you the back-end data you need to answer the study’s objectives.
To begin with, given that we intend to display a full product profile and would like to show as many attributes as possible, we are likely going to use Adaptive Choice-Based Conjoint (ACBC) and build a Hierarchical Bayes (HB) model. ACBC’s personalized exercise flow handles larger numbers of attributes better than Choice-Based Conjoint (CBC), and it allows each respondent to see a reduced set of attributes while preserving the total set for the final model. There are many articles written on ACBC and when it is best used (see https://sawtoothsoftware.com/resources/technical-papers/categories/adaptive-choice-based-conjoint for more information).
Now, back to the attributes. I’ve often seen clients come to the table with 15-20 attributes and the need to display a full product profile. To make the exercise manageable for respondents, we use a technique that reduces the number of attributes viewed by each respondent: a “pre-screening” question placed before the conjoint exercise. Set up as a simple multiple-select question, this pre-screener asks respondents either to choose attributes to exclude or to choose attributes to include. In the end, we “construct” a final list of attributes to be shown in the exercise from those that remain after the pre-screener. In general, I am a fan of the “exclusion” style question, for reasons discussed below. But before we get to the pre-screening question text, we need to think about the total list of attributes that will be shown in this question.
To start, we need to ask, “Which of the attributes are essential to display so that a respondent can understand the offering?” Attributes such as brand, price, and core product features must be displayed for the offering to make sense. So, we count these attributes and make note of that number. Let’s imagine we start with 15 attributes in total, 4 of which are essential to display.
Next, we determine the maximum number of attributes that we can display in the ACBC exercise without overwhelming respondents. What is this number? The answer is it depends. It depends primarily on the complexity of the topic and the attributes under study, which also relates to the respondents who will be taking the survey. Let’s imagine that we are studying preferences for the attributes of a new soft drink. Likely, these attributes (e.g., flavor, caffeine level, organic or not, etc.) are relatively easy for the average consumer to understand, so we can push the number of attributes displayed to perhaps 10 or 11 or even more. However, if we are studying a complex piece of medical equipment that has new features not yet on the market, even expert respondents may feel a heavy cognitive load as they examine each attribute of the offering in the conjoint exercise. In such a case, we may try to minimize the final number of attributes displayed as much as possible.
Bottom line, we start with a larger number of attributes than we want to display in the conjoint exercise. For the sake of illustration, let’s stick with our example: 15 attributes in total, a maximum of 11 to display, and 4 of the 15 that must be shown. So, our pre-screener will have 11 response options (the 15 total attributes minus the 4 that must be shown in the conjoint). There’s no need to show those 4 in your pre-screener.
Out of the 11 response options, we can pass forward a maximum of 7, since the 4 essential attributes are already being passed in and we can show 11 at most. Note that you do not need to pass in the same number of attributes for each respondent; each respondent’s constructed list can vary in length. In our example, we cap the list at a maximum of 11, but showing fewer than 11 is fine. Back to the pre-screener question text, one way to phrase it is as follows:
“When considering purchase of [your product or service goes here], which of the following are LEAST important to you? (Please select at least 4 options)”
Note that we say at least 4 options because we want to get the constructed list down to a maximum of 11 items: 11 options minus 4 exclusions leaves 7, plus the 4 essential attributes. You may also want to cap the number of options that can be selected, so that a respondent doesn’t strip out all of the possible attributes.
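Conceptually, this constructed-list logic is simple set arithmetic. As a sanity check on the counts above, here is a minimal Python sketch using the running example’s numbers (15 total, 4 essential, a cap of 11 displayed); the attribute names, the function, and the cap of 8 exclusions are purely illustrative, not part of any survey platform’s API.

```python
# Exclusion-style pre-screener from the running example:
# 15 attributes total, 4 essential (always shown), a cap of 11 displayed,
# so respondents must exclude at least 4 of the 11 optional attributes.

TOTAL_ATTRIBUTES = 15
ESSENTIAL = ["Brand", "Price", "Core feature A", "Core feature B"]  # must always be shown
# The remaining 11 attributes become the pre-screener's response options.
OPTIONAL = [f"Attribute {i}" for i in range(1, TOTAL_ATTRIBUTES - len(ESSENTIAL) + 1)]
MAX_DISPLAYED = 11
MIN_EXCLUDED = TOTAL_ATTRIBUTES - MAX_DISPLAYED  # 15 - 11 = 4

def constructed_list(excluded, max_excluded=8):
    """Build one respondent's attribute list from their pre-screener selections.

    max_excluded is the optional cap discussed above: excluding at most 8 of the
    11 options guarantees at least 3 optional attributes survive (7 shown total).
    """
    if not MIN_EXCLUDED <= len(excluded) <= max_excluded:
        raise ValueError(f"Select between {MIN_EXCLUDED} and {max_excluded} options")
    kept = [a for a in OPTIONAL if a not in excluded]
    return ESSENTIAL + kept  # essential attributes are always passed in

# A respondent who excludes exactly 4 options sees the maximum of 11 attributes;
# excluding more simply yields a shorter (still valid) constructed list.
print(len(constructed_list({"Attribute 1", "Attribute 2", "Attribute 3", "Attribute 4"})))  # 11
print(len(constructed_list({f"Attribute {i}" for i in range(1, 7)})))  # 9
```

The same arithmetic generalizes: response options = total attributes minus essentials, and the minimum number of exclusions = total attributes minus the display cap.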
Ok, why least important? First, when an attribute gets excluded from an individual respondent’s design, the value of that attribute in the final model gets counted as zero (meaning, it has no value). Therefore, when we ask whether an attribute is least important, this instruction matches well with what happens in the model. Second, we know that respondents can lose attention when looking through large lists, so we want to have them do fewer checks if possible. Here, our list has 11 items, and we ask them to check at least 4 to exclude, as opposed to checking 7 to include. Now, there are also sometimes reasons to have respondents select items to include (those most important), but that’s determined on a case-by-case basis. Consider your specific project and its attributes and whether it makes more sense to include or exclude items.
Now we’ve got a pre-screener set up, the result of which will be a constructed list of attributes customized for each respondent. But wait, there’s one more thing to consider as you present attributes in the pre-screener: how can we provide enough information for the respondent to decide which items to exclude (or include)? One way to help respondents is to present each attribute followed by a basic explanation of its levels. For example, if one of your attributes were caffeine level, the item in the pre-screener list could be set up as: “Caffeine Level (from none to 300mg)”. Or, if you have a more abstract attribute such as “connectivity”, you might say: “Connectivity (requires manual configuration vs automatic network detection)”. Providing some level information gives the respondent enough context to determine whether each attribute will make it through to their preliminary consideration set, represented by their customized constructed list.
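If you keep attribute definitions in a structured form, generating these annotated pre-screener labels is a one-liner. A minimal sketch, with hypothetical attribute names and level summaries (the dictionary contents are illustrative, drawn from the examples above):

```python
# Hypothetical attribute definitions with short level summaries (illustrative only).
ATTRIBUTE_LEVELS = {
    "Caffeine Level": "from none to 300mg",
    "Connectivity": "requires manual configuration vs automatic network detection",
}

def prescreener_labels(attribute_levels):
    """Append each attribute's level summary in parentheses for the pre-screener list."""
    return [f"{name} ({summary})" for name, summary in attribute_levels.items()]

for label in prescreener_labels(ATTRIBUTE_LEVELS):
    print(label)
# Caffeine Level (from none to 300mg)
# Connectivity (requires manual configuration vs automatic network detection)
```

Keeping the labels generated from one source of truth also ensures the pre-screener wording stays in sync with the levels actually used in the conjoint exercise.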
That’s it! We started with an issue of having too many attributes to display in a full product profile to respondents. We use ACBC due to the personalized nature of the exercise and its ability to work with larger numbers of attributes than CBC. We determine which attributes must be shown in the exercise in order for the product to make sense. We implement a pre-screener question where we ask respondents to exclude or include items to create the final list of attributes they will see. Each respondent sees a set of attributes important to them. We use HB to weave all the data together into a unified model. And voilà! The issue of having too many attributes is solved.
Nico Peruzzi, PhD, is a partner at the research consultancy elucidate. Elucidate has been managing complex ACBC studies for a variety of industries over the past dozen years. Nico can be reached at nperuzzi@elucidatenow.com.