Difference in importance scores between Lighthouse Studio output and self-computed scores

Dear all,

I have computed importance scores in Python, starting from the export of my individual part-worth utilities.

For each respondent, I calculated the range (maximum minus minimum) of the part-worth utilities within each attribute, divided each attribute's range by the sum of the ranges across all attributes, and then averaged the resulting individual percentages across respondents.
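
Roughly, my calculation looks like this (a simplified sketch; the file name, attribute names, and level columns are placeholders for my actual export):

```python
import pandas as pd

# One row per respondent, one column per attribute level,
# e.g. "Price_Level1", "Price_Level2", "Brand_Level1", ...
utils = pd.read_csv("individual_utilities.csv", index_col="Respondent")

# Placeholder mapping from attributes to their level columns
attributes = {
    "Price": ["Price_Level1", "Price_Level2", "Price_Level3"],
    "Brand": ["Brand_Level1", "Brand_Level2", "Brand_Level3"],
}

# Range (max - min) of the part-worths within each attribute, per respondent
ranges = pd.DataFrame({
    attr: utils[levels].max(axis=1) - utils[levels].min(axis=1)
    for attr, levels in attributes.items()
})

# Normalize each respondent's ranges to sum to 100%, then average across respondents
importances = ranges.div(ranges.sum(axis=1), axis=0) * 100
print(importances.mean())
```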

What surprises me is that my results differ slightly from those reported by Lighthouse Studio. Although the differences are very small, they change the ranking of the attributes by importance.

Would anyone have an idea on the origin of the difference?
asked Sep 3 by Jean Mansuy

1 Answer

0 votes
Jean,

I can't say what's causing the difference in your case, but I can tell you that what we often see is a computational error somewhere, in Excel if that's what someone is using, or in SPSS formulas or whatever. If you've built a model with interactions, that can also complicate the calculation. Even more often it's an order-of-operations issue, where someone computes importances from averaged utilities rather than averaging importances computed from respondent-level utilities. On occasion I've seen folks compare importances calculated from one model (e.g. HB) with those our software computes for another model (e.g. MNL). I've seen a lot of creative variations, but I can't be sure which applies to your situation. It sounds like you're doing it right, but if so I'd expect the importances to match exactly.
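
To illustrate the order-of-operations point with made-up numbers (this is only a sketch, not what the software does internally):

```python
import numpy as np

# Two hypothetical respondents: one attribute's part-worth range
# versus the sum of all attribute ranges for that respondent.
range_a = np.array([4.0, 1.0])   # range of attribute A per respondent
total   = np.array([5.0, 10.0])  # sum of all attribute ranges per respondent

# Compute each respondent's importance first, then average (in %)
avg_of_ratios = np.mean(range_a / total) * 100       # (80% + 10%) / 2 = 45%

# Average the ranges first, then compute the importance
ratio_of_avgs = range_a.mean() / total.mean() * 100  # 2.5 / 7.5 = 33.3%

print(avg_of_ratios, ratio_of_avgs)
```

The two orders of operations give different numbers, which is why averaging before normalizing won't reproduce respondent-level importances.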
answered Sep 4 by Keith Chrzan Platinum Sawtooth Software, Inc. (75,450 points)
Dear Keith,

Thank you for your answer. I'll continue my investigation based on your feedback.

Best regards,
...