Four clients have asked about this topic in the past month, so perhaps it's worth a post. It turns out that there is a quick way to use Lighthouse Studio to do power analysis for choice-based conjoint experiments.
Say, for example, that a client wants a choice-based conjoint experiment with four 3-level attributes and one 2-level attribute. The client plans to show each respondent 10 tasks, each with two product alternatives and a none option, and wants to know what size of coefficient (utility) the design will be able to detect as significant (at a given level of confidence and power). To answer the client’s question, follow these four steps:
Step One: Use Lighthouse Studio to generate the design (using balanced overlap, complete enumeration, or whatever design strategy the client plans to use; for now, we’ll use balanced overlap). Then click the “Test Design” button and specify 1,000 for Respondents and 33 for Percent None (we’ll get to why we want 1,000 respondents below; 33% is the expected proportion choosing none when it’s one of three options shown and we have no advance knowledge of the utilities). Now click Test. The results show standard errors of 0.02 for the levels of the 3-level attributes and 0.014 for the levels of the 2-level attribute.
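If you’re curious what’s behind those standard errors and don’t mind a bit of code, the sketch below roughly approximates the Test Design report in Python. Rather than simulating random respondents and fitting an aggregate logit the way the report does, it computes the design’s expected information directly under the null of all-zero utilities, and it assigns levels purely at random rather than by balanced overlap, so its numbers will only approximate the report’s:

```python
import numpy as np

rng = np.random.default_rng(0)

n_resp, n_tasks = 1000, 10
levels = [3, 3, 3, 3, 2]                 # four 3-level attributes, one 2-level
n_par = sum(k - 1 for k in levels) + 1   # effects-coded columns plus a none constant

def effects_code(level, n_levels):
    # effects coding: the last level is coded -1 on all of the attribute's columns
    row = np.zeros(n_levels - 1)
    if level < n_levels - 1:
        row[level] = 1.0
    else:
        row[:] = -1.0
    return row

info = np.zeros((n_par, n_par))
p = np.full(3, 1 / 3)                    # null utilities: all three options equally likely
for _ in range(n_resp * n_tasks):
    X = np.zeros((3, n_par))             # two product alternatives plus a none row
    for alt in range(2):
        X[alt, :-1] = np.concatenate(
            [effects_code(rng.integers(k), k) for k in levels])
    X[2, -1] = 1.0                       # none alternative-specific constant
    # Fisher information contribution of one choice task for a multinomial logit
    info += X.T @ (np.diag(p) - np.outer(p, p)) @ X

se = np.sqrt(np.diag(np.linalg.inv(info)))
# first eight entries: 3-level columns (expect roughly 0.02); ninth: the
# 2-level column (roughly 0.014-0.015); last: the none constant
print(np.round(se, 3))
```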
Step Two: For a detailed discussion of where the following numbers come from, see Chapter 9 of Statistical Analysis by Sam Kachigan, by far the clearest discussion of power analysis I’ve come across. If we want 95% confidence and 50% power (i.e. even odds of finding or missing a truly significant difference), we use 1.96, the Z statistic associated with 95% confidence (the Z for 50% power is zero, so only the confidence Z remains). If we want 95% confidence and 80% power, we use a combined Z statistic of 2.80 (the 1.96 from 95% confidence plus 0.84, the Z value corresponding to 80% power). By similar math, if we want 95% confidence and 70% power, we use a combined Z of 2.485 (1.96 from confidence plus 0.525 from power). Quiz time: what’s the combined Z if we want 95% confidence and 99% power? If you said 4.29 (1.96 plus 2.33), well good for you. Academic researchers usually want power of 70% or 80%.
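If you’d rather compute combined Z values than look them up in a table, the arithmetic is just the sum of two normal quantiles. A quick Python check using scipy:

```python
from scipy.stats import norm

def combined_z(confidence, power):
    # two-sided confidence uses the (1 + confidence)/2 quantile;
    # power adds the quantile at the power level (zero at 50% power)
    return norm.ppf((1 + confidence) / 2) + norm.ppf(power)

print(round(combined_z(0.95, 0.50), 3))  # 1.96
print(round(combined_z(0.95, 0.80), 3))  # 2.802
print(round(combined_z(0.95, 0.70), 3))  # 2.484
print(round(combined_z(0.95, 0.99), 3))  # 4.286
```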
Step Three: Now we multiply the results from Steps One and Two together. For our 3-level attributes, with 1,000 respondents and 95% confidence we have an 80% chance of detecting as significant any coefficient with an absolute value of 0.056 (that is, 0.02 times 2.80) or greater. Similarly, we have a 70% chance of detecting coefficients for the 2-level attribute with an absolute value larger than 2.485 * 0.014 = 0.035. And so on.
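In code, Step Three is a single multiplication:

```python
# minimum detectable coefficient = standard error * combined Z
se_3level, se_2level = 0.020, 0.014      # from the Test Design report in Step One
print(round(se_3level * 2.80, 3))        # 0.056 at 95% confidence, 80% power
print(round(se_2level * 2.485, 3))       # 0.035 at 95% confidence, 70% power
```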
Step Four: We now know that we can detect, with 95% confidence and 80% power, coefficients larger than 0.056 for our 3-level attributes. To adjust that estimate for any other number of respondents, we multiply it by the square root of 1,000/n, where n is the new sample size. So for a sample size of 400 we get SQRT(1000/400) = 1.58, and we multiply this by our detectable difference from Step Three: 1.58 * 0.056 = 0.0885 (that is, we have an 80% chance of detecting, at 95% confidence, utilities with an absolute value of 0.0885 or larger). Similarly, if we increase our sample size to 4,000, we have 0.056 * SQRT(1000/4000) = 0.028: the larger sample buys a more precise experiment, one able to detect smaller significant differences.
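The sample-size adjustment is a one-liner as well; in this small sketch, the 1,000 is just the sample size we specified in the Test Design run:

```python
import math

def detectable(base_mde, n, base_n=1000):
    # standard errors (and hence detectable coefficient sizes) scale with 1/sqrt(n)
    return base_mde * math.sqrt(base_n / n)

print(round(detectable(0.056, 400), 4))   # 0.0885
print(round(detectable(0.056, 4000), 3))  # 0.028
```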
* * * * * * *
This kind of analysis will work for other kinds of choice experiments (e.g. MaxDiff, situational choice experiments, menu-based choice) as well, but only with the extra step of making a data file of random responses to run through your experimental design. Perhaps illustrating this will be a topic for another day.
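In the meantime, for readers who want to experiment, a file of random responses is easy to fake. The layout below (respondent, task, chosen concept) is a hypothetical illustration for a 10-task, three-option study, not Lighthouse Studio’s actual import format:

```python
import csv
import random

# write one row per respondent-task, choosing randomly among the three options
with open("random_responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["respondent", "task", "choice"])
    for resp in range(1, 1001):
        for task in range(1, 11):
            writer.writerow([resp, task, random.randint(1, 3)])
```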