How are standard errors in SMRT simulations related to those in the SSI design test?

Hi there,

I am trying to give an example of how varied the results are likely to be for a possible CBC exercise and struggling to explain this.

I generally create a design and test in SSI-Web, aiming for standard errors of no more than 5% for main effects and 10% for interactions. However, I need to show how higher standard errors would affect simulations, but the standard errors in SMRT simulations are very different (i.e. a lot lower, generally below 1%?).

Are the standard errors in a simulation in SMRT not equivalent to the ones in an SSI-Web design test? At the moment, it looks like we could get away with a much lower sample size than we're recommending!

Many thanks for any advice.

asked Sep 17, 2012 by anonymous
retagged Sep 17, 2012 by Walter Williams

1 Answer

0 votes
Standard errors for part-worth utilities are certainly related to standard errors from simulations, but there isn't a mathematical way to relate one directly to the other.  It has to be done empirically once the data have been collected.

Simulations are done (in the share of preference case) by summing the part-worths for each product, taking the antilog (exponentiating), and then percenting the results across product alternatives so they sum to 100%.
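To make that concrete, here is a minimal sketch of the share of preference calculation described above. The utility values are hypothetical, made up purely for illustration:

```python
import math

def share_of_preference(product_utilities):
    """Logit share of preference: exponentiate each product's total
    part-worth utility, then percent across alternatives to sum to 100%."""
    exp_utils = [math.exp(u) for u in product_utilities]
    total = sum(exp_utils)
    return [100.0 * e / total for e in exp_utils]

# Hypothetical summed part-worth utilities for three product alternatives
utilities = [1.2, 0.4, -0.3]
shares = share_of_preference(utilities)
print([round(s, 1) for s in shares])  # shares sum to 100%
```

In a real simulator this is computed per respondent and then averaged, which is why the resulting standard errors can only be obtained empirically from the collected data.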

The easiest way to explain to less-technical clients is to give the margins of error based on the rule of proportions.  In other words, 1000 respondents gives you roughly (worst-case scenario) +/- 3% margin of error in market simulations.

This is the conservative way to explain things to the client, though you'll get better than a +/- 3% margin of error with 1000 respondents due to:

1.  +/- 3% is based on the worst-case scenario of 50% share of preference...and your shares of preference are unlikely to be around 50%.

2.  You actually get more statistical information from each respondent than a simple "first choice vote" since the probabilities of choice are split in a continuous fashion among alternatives.
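The rule-of-proportions margin of error mentioned above can be sketched in a few lines (the shares and sample size here are illustrative, not from any real study):

```python
import math

def margin_of_error(share, n, z=1.96):
    """Approximate 95% margin of error for a proportion
    (the rule of proportions): z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(share * (1 - share) / n)

# Worst case: 50% share of preference with 1000 respondents
print(round(100 * margin_of_error(0.50, 1000), 1))  # about 3.1 points

# Point 1 above: a share further from 50% gives a tighter margin
print(round(100 * margin_of_error(0.10, 1000), 1))  # about 1.9 points
```

This also shows why the worst-case framing is conservative: the further a simulated share is from 50%, the smaller the margin of error for the same sample size.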

The simulator reports the actual margin of error based on the respondent data and the share of preference rule (or RFC, which also gives you continuous probabilities of choice).  You can use data from a previous similar project to estimate approximate margin of error for your next project.
answered Sep 17, 2012 by Bryan Orme Platinum Sawtooth Software, Inc. (164,715 points)