Which Sawtooth methods and analyses deal most effectively with potential scaling problems?

There has been considerable discussion about the potential problems associated with scaling effects in studies using conjoint methods.   I have two questions.
 
1. What is Sawtooth’s current perspective on the significance of this issue?  
2. Which Sawtooth methods and analyses deal most effectively with potential scaling problems.
asked Feb 21, 2013 by cunnic Bronze (1,440 points)
Please clarify what you mean by scaling effects.  Do you mean scale use bias (such as from ratings-based conjoint questions)?  Or, do you mean the scale factor from discrete choice and logit-based estimation?  Or, do you mean the scale (range) for quantitative attributes such as price (the range of prices to include as well as the size of the increments between adjacent prices)?
I was referring to the types of scale factors (e.g., the certainty of an informant’s choices) that Jay Magidson and Jeroen Vermunt discussed at the 2007 Sawtooth Software Conference.

1 Answer

+2 votes
Tricky question, indeed.  The scale factor issue can cause trouble unless you watch for it.  The greater the response error, the lower the scale (smaller differences between the utilities); the lower the response error, the larger the magnitude of the parameters.
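
To make that relationship concrete, here is a small illustrative sketch in Python (numpy/scipy), not Sawtooth code.  The attribute codings and "true" part-worths are made up for demonstration: it simulates choices from a multinomial logit at two different scale factors (a lower scale factor plays the role of more response error) and re-estimates the parameters.  The noisier data come back with smaller utility magnitudes.

# Illustrative sketch only (hypothetical data, not Sawtooth estimation code).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
true_beta = np.array([1.0, -0.5, 0.8])   # hypothetical "true" part-worths
n_tasks, n_alts = 500, 3

def simulate_and_fit(scale):
    """Simulate logit choices with utilities scale * X @ beta, then fit an MNL."""
    X = rng.normal(size=(n_tasks, n_alts, len(true_beta)))      # attribute codes
    v = scale * (X @ true_beta)                                  # deterministic utility
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)         # choice probabilities
    choices = np.array([rng.choice(n_alts, p=row) for row in p])

    def neg_ll(beta):
        u = X @ beta
        return -(u[np.arange(n_tasks), choices] - np.log(np.exp(u).sum(axis=1))).sum()

    return minimize(neg_ll, np.zeros(len(true_beta))).x

# Lower scale (more response error) yields noticeably smaller estimated utilities.
print("high consistency (scale 2.0):", simulate_and_fit(2.0).round(2))
print("low  consistency (scale 0.5):", simulate_and_fit(0.5).round(2))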

One of the biggest pitfalls is in comparing respondents or groups of respondents on raw parameters resulting from logit-based estimation routines (aggregate logit, latent class, or HB).  Unless you normalize the scale so that each respondent (or group of respondents, in the case of LC) has the same range of utility scores, you can draw incorrect conclusions due to large differences in scale factor.  And, if you use raw utilities in k-means clustering procedures, the scale differences could become the main driver of cluster membership (bad!).

Luckily, Rich Johnson noted a related issue with ratings-based ACA back in the 1980s, and he applied a normalization transformation to the utilities so that our software would report average utilities for groups or the market as a whole on the normalized scale.  (And he recommended the rescaled scores when submitting them to k-means clustering.)  Today, we use a very similar normalization procedure (when presenting average utility results in our simulation routines) called "Zero-Centered Diffs" that ensures that each respondent gets equal weight.  It is a post hoc "band-aid" to try to remove the differences in scale factor.
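
As a rough sketch of what a diffs-style rescaling looks like (the actual routine in our software may differ in its details, and the data layout below is just an assumption for illustration): each respondent's part-worths are zero-centered within attribute and then multiplied so the average utility range per attribute is 100, putting everyone on a comparable scale before averaging or clustering.

# Rough sketch of a zero-centered "diffs"-style rescaling (illustrative only).
# Assumes one respondent's raw part-worths are stored as a dict of
# {attribute name: numpy array of level utilities}.
import numpy as np

def zero_centered_diffs(raw_utils):
    """Rescale one respondent so the average utility range per attribute is 100."""
    centered = {a: u - u.mean() for a, u in raw_utils.items()}   # zero-center each attribute
    total_range = sum(u.max() - u.min() for u in centered.values())
    mult = 100.0 * len(centered) / total_range                   # same target scale for everyone
    return {a: u * mult for a, u in centered.items()}

# With every respondent on the same scale, averaging across respondents or
# submitting the scores to k-means no longer lets scale differences dominate.
resp = {"brand": np.array([0.9, -0.2, -0.7]), "price": np.array([1.5, 0.1, -1.6])}
print(zero_centered_diffs(resp))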

Market simulations, on the other hand, use the raw utilities with their potential differences in scale.  But this has been argued to be proper, since respondents who have more response error should also have probability predictions for market choices that are more even (closer to 1/k, where k is the number of alternatives in the market simulation).  And scale factor differences across people have less impact in market simulations, since each respondent's shares must sum to 1.0.
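
A toy example of why scale washes out somewhat in a logit (share of preference) simulation: a lower-scale respondent simply gets flatter shares, closer to 1/k, and every respondent's shares still sum to 1.0.  The utility values below are made up for illustration.

# Sketch of logit (share of preference) shares for one respondent (illustrative).
import numpy as np

def shares(product_utils):
    """Logit shares across k simulated products; always sum to 1."""
    e = np.exp(product_utils - product_utils.max())   # subtract max for numerical stability
    return e / e.sum()

precise_resp = np.array([2.0, 0.5, -2.5])   # high scale: confident, consistent choices
noisy_resp   = 0.3 * precise_resp           # same preference order, more response error

print(shares(precise_resp))   # strongly differentiated shares, sum to 1
print(shares(noisy_resp))     # flatter shares, closer to 1/3 each, still sum to 1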

I don't know whether one discrete-choice based conjoint method should be more immune to scale effects than another.
answered Mar 22, 2013 by Bryan Orme Platinum Sawtooth Software, Inc. (163,515 points)
Thanks, Bryan.  Very thoughtful response.  Much appreciated!
...