Conjoint Analysis and Relative Advantage Trees

Last updated: 23 Jun 2021

Background

At the 2019 Sawtooth Software Conference, Chrzan and Retzer showed that tree-based models worked about as well as polytomous logit in terms of predicting the choices of holdout respondents in 10 situational choice experiments.  Specifically, they found average holdout hit rates of 51.4% for polytomous MNL and 51.3% for classification trees.  Like polytomous MNL, trees enable us to build simulators.  They have the additional benefit of providing natural visualizations of choice as a hierarchical decision process.  In the discussion following that presentation, Tom Eagle asked if trees could be extended to choice-based conjoint (CBC) experiments.  At the time, we didn’t see a way to do it.

But we do now:  below we describe two variants of decision trees applied to CBC data.  We assess the predictive validity of our two variants relative to that of CBC analyzed with HB MNL.

We use the first variant, Relative Advantage Trees (RATs), when we have quantitative attributes with known preference orders. Using RRM coding (Chorus 2010), we recode alternative A in terms of its attribute advantages over the other alternatives in the choice set and record whether alternative A is chosen. We can then use these coded advantages to predict choice.
As an example, imagine a choice set with alternatives A, B, and C and four attributes whose levels have known preference orders, as below:

• Purchase price (prefer low to high)
• Warranty (prefer long to short)
• Speed (prefer high to low)
• Cost to operate (prefer low to high)

And imagine one of the choice sets looks like this:

  Price   Warranty   Speed   Cost   Choice
  2000    60         25      7.50   0
  1750    24         50      10     1
  1000    3          5       5      0

To recode the concepts with RRM coding, we subtract the best level in each column from each cell in that column. For example, the $1,000 price level is the best in this set, so we subtract 1000 from each cell in that column. Longer warranty is preferred to shorter, so we subtract the 60-month level from the warranty column. After we’ve done the RRM coding for all our attributes (excluding the Choice column), the choice set would look like this:

  Price   Warranty   Speed   Cost   Choice
  1000    0          -25     2.50   0
  750     -36        0       5      1
  0       -57        -45     0      0
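The recoding above can be sketched in a few lines of Python (a minimal illustration; the helper name and data layout are ours, not taken from any Sawtooth tool):

```python
# Minimal sketch of the RRM-style advantage coding described above.
# Direction +1 means higher levels are better; -1 means lower levels are better.
choice_set = [
    # Price, Warranty, Speed, Cost
    [2000, 60, 25, 7.50],
    [1750, 24, 50, 10.0],
    [1000, 3, 5, 5.0],
]
directions = [-1, +1, +1, -1]  # low price, long warranty, high speed, low cost

def rrm_code(rows, dirs):
    """Subtract the best level in each column from every cell in that column."""
    coded_cols = []
    for j, d in enumerate(dirs):
        col = [row[j] for row in rows]
        best = max(col) if d > 0 else min(col)
        coded_cols.append([v - best for v in col])
    # Transpose back to one row per alternative.
    return [list(row) for row in zip(*coded_cols)]
```

Running `rrm_code(choice_set, directions)` reproduces the recoded table above: the first alternative becomes [1000, 0, -25, 2.5], and so on.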

With our recoded data in hand, we can fit a decision tree to predict choice and then use the tree for simulations. We test two different types of tree models: Classification and Regression Trees, or CART (Breiman et al. 1984), and Conditional Inference (CI) trees (Hothorn, Hornik, and Zeileis 2006). The Bonferroni statistical testing that generates the splits can make CI trees easier to explain to clients, but CART tends to produce more parsimonious trees (with fewer branches), making for easier visualization. Not knowing which tree predicts choice better, we test both.

We use two datasets to test RATs against aggregate MNL and HB MNL models, both in terms of model fit (McFadden's ρ²) and in terms of predicting holdout respondents' choices (holdout ρ² and MAE).
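For reference, McFadden's ρ² compares a model's log-likelihood to that of a null model that picks among the J alternatives at random. A one-function sketch (our own helper, not any package's API):

```python
import math

def mcfadden_rho2(ll_model, n_choices, n_alternatives):
    """McFadden's rho-squared: 1 - LL(model) / LL(null), where the null
    model chooses each of the J alternatives with equal probability 1/J."""
    ll_null = n_choices * math.log(1.0 / n_alternatives)
    return 1.0 - ll_model / ll_null
```

A model no better than chance scores 0; a model that predicts every choice with certainty scores 1.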

Our first CBC experiment features five attributes, 212 estimation respondents and 190 validation respondents. All attributes have known preference ordering.

Our analyses produced a CART tree with four terminal nodes (first image below) and a conditional inference tree with 12 terminal nodes (second image below).

In terms of fitting the estimation data, the CI tree works better than CART. The CI tree performs slightly worse than the aggregate MNL model and, as we would expect, the HB model performs the best, because it accounts for heterogeneity among respondents:

  Fit Statistic   CART RAT   CI RAT   MNL     HB MNL
  ρ² (%)          9.97       14.70    15.67   59.71

After building the models, we test how well each predicts our 190 holdout respondents’ choices. To do this, we follow the if-then classification rules from the trees to identify each alternative’s terminal node. We then give each alternative the choice probability from its terminal node and normalize the probabilities within each choice set so they sum to 100%.
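That scoring step can be sketched as follows (the raw probabilities here are illustrative, not taken from our trees):

```python
# Each alternative in a choice set gets the raw choice probability from its
# terminal node; we then normalize within the set so shares sum to 100%.
def normalize_within_set(node_probs):
    total = sum(node_probs)
    return [p / total for p in node_probs]

raw = [0.42, 0.42, 0.21]           # illustrative terminal-node probabilities
shares = normalize_within_set(raw)  # -> shares summing to 1.0 (i.e., 100%)
```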

In terms of predicting our holdouts, the models perform more similarly. For our RAT trees, the CI tree still does better than the CART tree. In terms of mean absolute error (MAE), the HB model does best, followed by the MNL model and the CI tree.

  Fit Statistic   CART RAT   CI RAT   MNL     HB MNL
  ρ² (%)          16.98      20.58    20.53   20.98
  MAE             8.08       5.18     4.58    4.04
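MAE here is just the average absolute difference between predicted and observed choice shares. A quick sketch with made-up shares (in percentage points):

```python
def mae(predicted, actual):
    """Mean absolute error between predicted and observed choice shares."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Illustrative only: predicted shares off by 5 points on two alternatives.
example = mae([40.0, 40.0, 20.0], [35.0, 45.0, 20.0])  # (5 + 5 + 0) / 3
```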

We then repeat the process with another CBC dataset, this one with six attributes, two of which are unordered categorical attributes. Trees handle categorical predictors well, so we simply use nominal coding for those two attributes and RRM coding only for the quantitative ones.
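The nominal coding for an unordered categorical attribute is plain indicator (dummy) coding. A tiny sketch, with a hypothetical brand attribute that is not from the study:

```python
def one_hot(levels, value):
    """Indicator (nominal) coding for one unordered categorical attribute."""
    return [1 if value == lvl else 0 for lvl in levels]

# Hypothetical attribute and levels, for illustration only.
brand_levels = ["A", "B", "C"]
coded = one_hot(brand_levels, "B")  # -> [0, 1, 0]
```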

This time when we fit our RAT trees, the CART tree refuses to split into any significant branches at all and our CI tree has 64 terminal nodes!

In terms of model fit, this dataset tells the same story as the first dataset. Namely, the HB model fits the data best, followed by the MNL model and the CI tree.

  Fit Statistic   CART RAT   CI RAT   MNL     HB MNL
  ρ² (%)          Bust       8.67     10.50   54.18

In terms of predicting our holdout respondents’ choices, the HB model still does best. However, with this dataset, the CI RAT tree slightly outperforms the MNL model.

  Fit Statistic   CART RAT   CI RAT   MNL     HB MNL
  ρ² (%)          N/A        15.05    15.13   17.05
  MAE             N/A        6.37     6.55    4.67

Generalized Nominal Attribute Trees (GNATs)

Of course, we could extend the nominal coding used for the unordered categorical attributes above to any CBC experiment, simply using the "single file" data export from Lighthouse Studio as our data set; this results in Generalized Nominal Attribute Trees. In both studies, however, GNATs have lower predictive power than RATs.

Summary

Tree-based models seemed like an interesting idea to pursue, since they had worked well for situational choice experiments. Unfortunately, because they do not accommodate respondent heterogeneity, they have significantly less predictive power than does CBC with HB-MNL modeling.