Significance of attribute importances

Hello everyone,

Do you think it is appropriate to test whether an attribute has a significant importance for prosumer decision making using a Wilcoxon signed-rank test or a sign test?
I would, for example, test the distribution of the individual importance scores for the attribute "color" against a single median of 0% (no importance). With this approach, though, it seems unlikely that any attribute in the design would come out insignificant.
Do you have any recommendations on how to better test whether attributes have a significant effect on customer choices?

Thank you very much!

Best,
Chris
asked May 17 by Chris Berlin Bronze (570 points)

1 Answer

0 votes
Importance scores are a strange and problematic statistic. The problem is that standard importance scores, by definition, take the full range of utilities within the attribute. As you know, noise leads to differences between part-worth utilities, so even a completely unimportant attribute is guaranteed to achieve an importance greater than zero.
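
To make that concrete, here is a minimal Python sketch of the standard importance calculation for one respondent; the data structure, attribute names, and numbers are illustrative assumptions, not Sawtooth output.

```python
import numpy as np

def standard_importances(partworths):
    """Standard importances for one respondent: each attribute's utility
    range as a share of the sum of ranges, expressed in percent.

    partworths: dict mapping attribute name -> list of part-worth utilities
    for that attribute's levels (illustrative structure).
    """
    ranges = {attr: float(np.ptp(np.asarray(utils))) for attr, utils in partworths.items()}
    total = sum(ranges.values())
    return {attr: 100.0 * r / total for attr, r in ranges.items()}

# Even a nearly flat (pure noise) attribute keeps a positive range,
# so its importance can never be exactly zero.
example = {
    "price": [1.2, 0.1, -1.3],
    "color": [0.05, -0.02, -0.03],  # essentially noise
}
print(standard_importances(example))
```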

Now, if you know that an attribute has an a priori preference order, then you can compute something we've called "strict importance", which only counts the attribute's utility range as a non-zero importance if the difference runs in the expected direction; otherwise the importance is set to zero for that respondent.
answered May 17 by Bryan Orme Platinum Sawtooth Software, Inc. (164,390 points)
Thank you, Bryan! Is it possible to calculate this "strict importance" after the estimation of the part-worth utilities (PWU)?
The software will not do it automatically for you, but you can do it easily enough in Excel, R, SPSS, etc. Just follow the standard approach for calculating importances for each respondent, but set the utility range to zero if the attribute shows a reversed utility relationship. Then normalize the ranges to sum to 100% for each respondent.
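
A rough Python sketch of that recipe (the respondent data structure, the `expected_best` mapping, and the reversal check are my own illustrative assumptions, not Sawtooth's implementation):

```python
import numpy as np

def strict_importances(partworths, expected_best):
    """'Strict' importance sketch: zero out an attribute's utility range when
    the observed order contradicts the a priori expectation, then renormalize.

    partworths: dict attribute -> list of part-worths (levels in design order).
    expected_best: dict attribute -> index of the level expected to be best;
        attributes not listed are treated as unordered and keep their range.
    """
    ranges = {}
    for attr, utils in partworths.items():
        utils = np.asarray(utils, dtype=float)
        rng = float(np.ptp(utils))
        # Simplified reversal check: only the best level is compared with the
        # a priori expectation; a stricter version could require full monotonicity.
        if attr in expected_best and int(np.argmax(utils)) != expected_best[attr]:
            rng = 0.0
        ranges[attr] = rng
    total = sum(ranges.values())
    if total == 0:
        return {attr: 0.0 for attr in ranges}
    return {attr: 100.0 * r / total for attr, r in ranges.items()}
```

Run this per respondent over the exported utility file, and the strict importances again sum to 100% for each respondent.
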
I mentioned the idea of "strict importance" just as a comment on how the importance calculation can inflate the importance of an attribute in conjoint analysis when there is only noise on an unimportant attribute. Typically, if researchers and academics want to measure whether an attribute has a significant effect on choice, they will not use either the importance or the "strict importance". They would either focus on the part-worth utilities themselves, or on the effect of changing the levels of each attribute on shares of choice from market simulations.
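
For the simulation route, here is a very simple first-choice sketch (just one of several possible simulation methods; the respondent and product structures are illustrative assumptions): compute shares for a baseline scenario, change the level of the attribute of interest on one product, and compare the share movement.

```python
import numpy as np

def first_choice_shares(respondents, products):
    """First-choice rule: each respondent 'chooses' the product with the
    highest total utility; shares are the percent choosing each product.

    respondents: list of dicts, attribute -> list of level part-worths.
    products: list of dicts, attribute -> index of the level in that product.
    """
    counts = np.zeros(len(products))
    for pw in respondents:
        totals = [sum(pw[attr][lvl] for attr, lvl in prod.items()) for prod in products]
        counts[int(np.argmax(totals))] += 1
    return 100.0 * counts / counts.sum()

# Sensitivity idea: rerun with only one attribute's level changed on a product
# and compare the resulting shares against the baseline scenario.
```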

If the attribute has a known preference order, then the question of whether it has a significant effect on choice becomes easier to test, because (for statistical testing purposes) we could fit a linear term to the attribute and then do a formal t-test or Bayesian test of significance on the linear parameter (the beta).
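
As an illustration of the t-test route (one possible setup, assuming you have one estimated linear coefficient per respondent, e.g. from an HB run with the attribute coded as a single linear term; the numbers below are hypothetical):

```python
import numpy as np
from scipy import stats

def test_linear_beta(betas):
    """One-sample t-test of whether the mean linear coefficient differs from zero."""
    betas = np.asarray(betas, dtype=float)
    t_stat, p_value = stats.ttest_1samp(betas, popmean=0.0)
    return t_stat, p_value

# Hypothetical individual-level betas for a price attribute across 10 respondents.
example_betas = [-0.9, -1.1, -0.4, -0.7, -1.3, -0.2, -0.8, -1.0, -0.6, -0.5]
print(test_linear_beta(example_betas))
```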

But, if an attribute is unordered, such as brand, it becomes trickier, and one must worry about heterogeneity in the sample. For example, imagine a brand attribute with two brands: Coke and Pepsi. Imagine that exactly half the sample loves Coke and the other half loves Pepsi (to the same degree). In aggregate, the utilities for Coke and Pepsi will look like zero (since they cancel each other out). If we conducted statistical tests at the sample level, we might declare that brand had no effect on choice. But that would be the wrong conclusion for any one person in the sample. So this is an instance where calculating the attribute importances at the individual level would actually be more revealing than statistical testing across the entire sample on the part-worth utilities or on the simulated shares of choice.
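
A tiny numeric illustration of that cancellation (the effects-coded utilities below are made up for the example):

```python
import numpy as np

# Half the sample: +1 for Coke / -1 for Pepsi; the other half the reverse.
coke = np.array([1.0] * 50 + [-1.0] * 50)
pepsi = -coke

# Sample-level means suggest brand does nothing...
print(coke.mean(), pepsi.mean())    # 0.0 0.0

# ...yet every individual has a brand utility range of 2.0, so brand
# importance at the individual level is large for everyone.
print(np.abs(coke - pepsi).mean())  # 2.0
```
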
...