How large of a sample will I need for this project? That is a question that we often receive at Sawtooth Software Technical Support, and one that we are happy to consult on.

When it comes to sample size rules of thumb, there is rarely a one-size-fits-all answer, and conjoint research is no exception. A larger sample leads to greater precision, and thus increased confidence in our estimates; that much is simple enough. But things get complicated once we take into consideration population size, segmentation, statistical power, effect sizes, and deciding which estimates to prioritize.

Calculating sample sizes often requires foreknowledge of unknowable factors. And though a larger sample leads to greater precision, that precision must always be balanced against the cost of recruiting respondents, which incentivizes us to get away with the smallest panel of participants possible.

It can be useful to develop some heuristics to help you quickly determine an acceptable sample size range that you can work with for your project. You probably have your own set of homemade rules and best practices that you have developed over time. Let me share three rules of thumb for sample sizes that Sawtooth Software has utilized over the years, specifically for estimating proper sample sizes for choice-based conjoint (CBC) experiments.

### Rule of Thumb for Sample Size #1: Start with 300

It’s down to the wire. The request for proposal is in your hand, you have two seconds to form your bid, and you need a best guess at how much that sample is going to cost you. So, with no time to think, what should your sample size be? 300 respondents…probably.

**For many statistical applications, conjoint included, 300 respondents is a good rule of thumb for sample size.** Planning to report subgroups separately? In that case it’s best to plan additionally for at least 200 members of each subgroup. So if the intent is to compare the choice behavior of urban, suburban, and rural customers, your first thought should be a sample size of at least 600: 200 per segment. **The “300 respondents per study, 200 per subgroup” rule of thumb is almost silly in its simplicity, but it works well in practice and is certainly the quickest rule to apply in a pinch.**

### Rule of Thumb for Sample Size #2: Ensure at Least 500 Appearances per Level

You have planned ahead this time around, padded your prep time, and allowed an entire 30 seconds to calculate your required sample size before you need to get that bid out the door. **Need the fastest formula possible? In that case, follow our second rule of thumb for sample size: ensure at least 500 appearances per level.** The idea here is that across your entire sample of respondents, every attribute level should appear a minimum of 500 times. This calculation requires that you know or decide how many tasks (sets) you plan to include in the exercise, how many concepts (cards) will be shown in each task, and which attribute has the greatest number of levels. Once you have those figures, plug them into this formula for a quick answer:

*n* ≥ (500 × *c*) / (*t* × *a*)

Where *c* is the maximum number of levels in an attribute,

*t* is the number of tasks in the exercise, and

*a* is the number of concepts in a task (not including the “None”).

As an example, imagine a CBC experiment involving 3 attributes. The first attribute has 3 levels, the second attribute has 6 levels, and the third attribute has 4. In this experiment we plan to show 3 CBC concepts in each choice task, and we will ask each respondent to complete 8 choice tasks in total. Our back-of-the-envelope formula suggests that a minimum sample size would be about:

*n* ≥ (500 × 6) / (8 × 3) = 125 respondents
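This back-of-the-envelope calculation is easy to script. Here is a minimal sketch (the function name and signature are my own, not from any Sawtooth tool) that applies the *n* ≥ 500 × *c* / (*t* × *a*) formula, with the exposure target adjustable for the stricter 1,000-exposure variant:

```python
import math

def min_sample_size(max_levels, tasks, concepts, exposures=500):
    """Smallest sample size n such that every attribute level appears
    at least `exposures` times across the whole sample, assuming a
    level-balanced design where each level is shown
    n * tasks * concepts / max_levels times."""
    return math.ceil(exposures * max_levels / (tasks * concepts))

# The example above: worst attribute has 6 levels, 8 tasks, 3 concepts per task.
print(min_sample_size(max_levels=6, tasks=8, concepts=3))  # → 125

# The safer 1,000-exposure target simply doubles the requirement.
print(min_sample_size(max_levels=6, tasks=8, concepts=3, exposures=1000))  # → 250
```

Because the formula scales linearly in the exposure target, doubling from 500 to 1,000 exposures doubles the minimum sample.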

Two notes on this approach. First, 500 exposures should be considered the bare minimum in most cases. In practice, it is safer to plan for 1,000 exposures per level, so feel free to adjust the formula accordingly. Second, this approach was developed at a time when aggregate estimation was the only option for CBC data. It is not considered optimal for individual modeling techniques like Hierarchical Bayes that are common today. Still, as a heuristic this approach does a good job of getting you into the ballpark of a sample size estimate, and many researchers still find it useful as a rule of thumb.

### Sample Size Rule of Thumb #3: Ask the Random Robots

This time you are not in a crunch. You have all the time in the world and you want to take a nice, well-thought-out approach to determining your sample size. You have even gone so far as to complete your CBC design: attributes and levels all in place, tasks and concepts all laid out. Now you have a chance to test any sample size you like with a set of robotic random respondents. First, choose the number of respondents that you would like to use as a starting point. Next, generate that number of random responses to your CBC exercise (this can be accomplished quickly using a random number generator). Then run those responses through a logit model. Your resulting utilities will be pretty meaningless (it was random data, after all), but the standard errors on those utility estimates can be useful. **The sample size rule of thumb here: if those main-effects standard errors are below 0.05, you probably have a large enough sample.** For two-way interaction effects and any alternative-specific attributes, keep the standard errors below 0.1.
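For readers who want a feel for the mechanics, the procedure above can be sketched in a few dozen lines of Python. This is a simplified illustration, not Sawtooth's implementation: it dummy-codes a fully random design (Sawtooth's tools use effects coding and carefully balanced designs), answers it with random "robot" choices, fits an aggregate logit by Newton-Raphson, and reports the largest main-effect standard error:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical CBC setup mirroring the earlier example:
# 3 attributes with 3, 6, and 4 levels; 8 tasks; 3 concepts per task.
LEVELS = [3, 6, 4]
N_RESP, N_TASKS, N_CONCEPTS = 200, 8, 3

# Dummy-code each concept (first level of each attribute is the reference).
n_params = sum(k - 1 for k in LEVELS)

def random_concept():
    """One concept = one random level per attribute, dummy coded."""
    x = np.zeros(n_params)
    offset = 0
    for k in LEVELS:
        lvl = rng.integers(k)
        if lvl > 0:
            x[offset + lvl - 1] = 1.0
        offset += k - 1
    return x

# Full design: (total tasks, concepts, params)
design = np.array([[random_concept() for _ in range(N_CONCEPTS)]
                   for _ in range(N_RESP * N_TASKS)])
choices = rng.integers(N_CONCEPTS, size=len(design))  # "random robot" answers

# Fit an aggregate multinomial logit by Newton-Raphson.
beta = np.zeros(n_params)
for _ in range(20):
    util = design @ beta                                   # (tasks, concepts)
    p = np.exp(util - util.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                      # choice probabilities
    xbar = np.einsum('tc,tcp->tp', p, design)              # expected attributes
    chosen_x = design[np.arange(len(design)), choices]
    grad = (chosen_x - xbar).sum(axis=0)                   # score vector
    # Hessian of the log-likelihood (negative definite)
    H = -(np.einsum('tc,tcp,tcq->pq', p, design, design)
          - np.einsum('tp,tq->pq', xbar, xbar))
    beta -= np.linalg.solve(H, grad)                       # Newton step

# Standard errors from the inverse information matrix.
se = np.sqrt(np.diag(np.linalg.inv(-H)))
print("max main-effect standard error:", se.max())  # compare against the 0.05 rule
```

Rerunning with a larger `N_RESP` (or more tasks or concepts) drives the reported standard error down, which is exactly the lever-pulling this approach lets you experiment with before buying sample.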

If you are using Lighthouse Studio for your programming, then you can automate this entire process using the CBC Test Design feature. In any case, the “random robots” approach is quite robust as it allows you to tweak other settings besides respondent count in order to drive down that error. Increasing the number of tasks, adding more concepts to each task, or removing design prohibitions will all bring those estimated standard errors down and might save you the trouble of purchasing more respondents. Keep in mind that just like the second rule of thumb for sample size (back-of-the-envelope method), this approach was also designed for aggregate estimation and relies on a pooled logit model to calculate these standard errors. Consequently, the test analysis will not perfectly reflect the results that you eventually get from your individual respondent model.

### Final Thoughts About Finding a Statistically Significant Sample Size

As mentioned, there is much more to consider when determining sample size than what I have described here. These three rules of thumb are uncomplicated heuristics, extreme simplifications of problem solving that provide a useful general rule to apply to most situations. If you are new to conjoint research, please consider this a jumping-off point in the discussion, not a final prescription. I will include some additional reading below for those who wish to explore the topic further. Remember that as powerful as choice-based conjoint (CBC) might be, you still cannot disregard general sampling considerations. You can’t cheat representativeness, you can’t get good estimates out of bad data, and you can’t quadruple your precision by quadrupling the length of your survey without expecting your respondents to quit or fall asleep. But maybe you can use a few of these rules of thumb to help you out on your next project.

### Additional Reading

*Getting Started with Conjoint Analysis: Strategies for Product Design and Pricing Research* – Chapter 7: Sample Size Issues for Conjoint Analysis

*Becoming an Expert in Conjoint Analysis: Choice Modeling for Pros* – Chapter 8: Sample Size Decisions

*Quick and Easy Power Analysis for Choice Experiments*