## On Interaction Effects and CBC Designs

It is easy and automatic to get efficient designs using the CBC System. As long as you do not specify any prohibitions, CBC's randomized designs are near-orthogonal, providing near-optimal efficiency for measuring main effects. However, CBC version 1's design strategies (Complete Enumeration and Shortcut Method) are often not as efficient as they might be for measuring interactions.

CBC's design strategies include the criterion of Minimal Overlap (each level is shown as few times as possible within a choice task). Minimal level overlap within choice tasks is optimal for measuring main effects, but not optimal for measuring interactions. To illustrate this point, consider two attributes, each with three levels, in a minimal overlap design with three concepts per task. Each level is therefore shown exactly once per task. The following matrix represents the possible combinations of two variables that define a hypothetical league of nine volleyball teams.

|          | Men | Women | Mixed |
|----------|-----|-------|-------|
| Seattle  | 1   | 2     | 3     |
| Chicago  | 4   | 5     | 6     |
| New York | 7   | 8     | 9     |

For our minimal overlap design, Team #1 (Seattle, Men) can only be shown in a CBC task versus teams defined by cells not in the same row or column (teams 5, 6, 8, and 9). (Don't ask us to explain the rules for three-way volleyball matches!) Why is this problematic for measuring two-way interactions? If we wanted to judge how good each team was in our hypothetical league, we'd like to arrange for each to play all other teams to maximize our ability to declare a winner.
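This pairing rule is easy to verify by enumeration. The sketch below (our own illustration; the team numbering follows the matrix above) lists, for each team, the opponents it can meet in a minimal overlap task, i.e., teams sharing neither its city nor its gender:

```python
# Under minimal overlap, a concept can only appear in a task alongside
# concepts that share none of its attribute levels. Teams are cells of
# the 3x3 City-by-Gender grid shown in the matrix above.
from itertools import product

cities = ["Seattle", "Chicago", "New York"]
genders = ["Men", "Women", "Mixed"]

# Team IDs 1..9 read across the rows of the matrix.
teams = {i * 3 + j + 1: (cities[i], genders[j])
         for i, j in product(range(3), range(3))}

def allowed_opponents(team_id):
    """Teams sharing neither city nor gender with the given team."""
    city, gender = teams[team_id]
    return sorted(t for t, (c, g) in teams.items()
                  if c != city and g != gender)

print(allowed_opponents(1))  # Team 1 (Seattle, Men) -> [5, 6, 8, 9]
```

Every team ends up with exactly four eligible opponents, which is the restriction discussed below.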

Only being able to match each team against four of the eight other teams limits our ability to learn how good each team is. For measuring interactions between Gender and City, it seems particularly useful to have teams in the same rows and columns play one another. For example, we'd like to have the Seattle women play the Chicago women. Not being able to directly compare products in the same row or column hinders our ability to most efficiently measure the interactions. But adding overlap comes at a price: it weakens the precision of main effects estimates.

Based on this observation, we have created two new randomized design strategies that will be available in the upcoming release of CBC for Windows: Random and Balanced Overlap. The Random method is, as its name implies, simple random selection (with replacement). Balanced Overlap is a controlled middle ground between Random (a large amount of possible level overlap) and Complete Enumeration (the minimum possible level overlap). Both of these new methods permit levels to be displayed more than once within the same choice task.

We created a synthetic data set to investigate the tradeoff between the precision of main effects and of interaction terms under these different design strategies. (The Shortcut method is so similar to Complete Enumeration with respect to this issue that we've omitted it from the discussion.)

We used a design with three attributes, each with three levels. There were 500 respondents, 20 tasks each, 3 concepts per task, and no None alternative. We developed known utilities for both main effects and interactions. The main effects utilities were (1, 0, -1) representing the three levels of each of the three attributes. Additionally, we specified a two-way interaction effect between attributes 1 and 3. The interaction effects were one-fourth the size of the main effects, with values of (0.25, 0.00, -0.25). These were applied so that the sum of the interaction effects across rows and columns was 0. We added a relatively large random normal component (Z score times 3) to the utility sums before simulating respondent answers. The data below are the average of ten replicates of the synthetic data set using different random numbers and random designs.
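A minimal sketch of this simulation setup follows. One detail is assumed: the exact placement of the (0.25, 0.00, -0.25) interaction values, for which we show one arrangement consistent with the stated zero-sum constraint on rows and columns; the function and variable names are ours.

```python
# Sketch of the synthetic-data setup: 3 attributes x 3 levels, main-effect
# utilities (1, 0, -1), an attribute 1 x attribute 3 interaction built from
# (0.25, 0.00, -0.25), and normal error with standard deviation 3.
import numpy as np

rng = np.random.default_rng(0)

MAIN = np.array([1.0, 0.0, -1.0])           # utilities for levels 1-3 of each attribute
INTER = np.array([[ 0.25, 0.00, -0.25],     # one arrangement of the interaction values;
                  [ 0.00, 0.00,  0.00],     # rows and columns each sum to zero
                  [-0.25, 0.00,  0.25]])
NOISE_SD = 3.0                              # "Z score times 3" random component

def simulate_task(concepts):
    """Pick the highest-utility concept; each concept is (a1, a2, a3) level indices."""
    utils = [MAIN[a1] + MAIN[a2] + MAIN[a3] + INTER[a1, a3]
             + NOISE_SD * rng.standard_normal()
             for a1, a2, a3 in concepts]
    return int(np.argmax(utils))

# One minimal-overlap task: each level of each attribute shown exactly once
task = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
choice = simulate_task(task)                # index of the chosen concept, 0..2
```

Repeating `simulate_task` over 20 tasks for each of 500 respondents, then estimating main effects and the interaction from the simulated choices, reproduces the kind of comparison reported below.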

The following table shows the standard errors and average t-values (among known non-zero effects) for main effects:

Main Effects
| Design Method        | Average T-value | Average Std. Error |
|----------------------|-----------------|--------------------|
| Complete Enumeration | 26.8            | 0.0161             |
| Random               | 23.0            | 0.0183             |
| Balanced Overlap     | 25.3            | 0.0167             |

As expected, the minimal overlap design (complete enumeration) has the highest precision for main effects estimates. The t-value is the effect (utility) divided by the standard error, and can be taken as a signal-to-noise ratio. For main effects, the signal-to-noise ratio is 14% lower (1-23.0/26.8) for random, and 6% lower for the balanced overlap approach relative to complete enumeration.

Below is the same information for interaction terms:

Interaction Effects
| Design Method        | Average T-value | Average Std. Error |
|----------------------|-----------------|--------------------|
| Complete Enumeration | 3.06            | 0.0316             |
| Random               | 4.03            | 0.0257             |
| Balanced Overlap     | 3.72            | 0.0277             |

The random approach achieves the greatest precision for interaction effects, followed closely by balanced overlap. The signal-to-noise ratio is 32% and 22% higher for the random and balanced overlap methods, respectively, relative to complete enumeration.
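The percentage comparisons for both tables come directly from the ratios of average t-values, which a few lines of Python confirm (the table values are transcribed from above; the helper name is ours):

```python
# Relative signal-to-noise (average t-value) versus Complete Enumeration,
# using the figures from the two tables above.
t_main  = {"Complete Enumeration": 26.8, "Random": 23.0, "Balanced Overlap": 25.3}
t_inter = {"Complete Enumeration": 3.06, "Random": 4.03, "Balanced Overlap": 3.72}

def relative_to_ce(t_values):
    """Percent difference in t-value relative to Complete Enumeration."""
    ce = t_values["Complete Enumeration"]
    return {k: round(100 * (v / ce - 1)) for k, v in t_values.items()}

print(relative_to_ce(t_main))   # Random about -14%, Balanced Overlap about -6%
print(relative_to_ce(t_inter))  # Random about +32%, Balanced Overlap about +22%
```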

It is worth noting that the performance of the different design methods for estimating main effects and interactions will vary with the number of attributes, the number of levels, the number of concepts per task, and the amount of variation in the data. The figures we've presented represent one such case; even so, we expect the findings to generalize to other cases.

These findings suggest that one should include at least some degree of overlap in the CBC design when interaction terms are of particular interest. Overlap for an attribute can be added to a design simply by using more concepts per task than the attribute has levels. Our example above represents a worst-case scenario for estimating interaction effects under minimal overlap design strategies. We expect that minimal overlap strategies may be about as effective as the random approach for estimating interactions between attributes that have fewer levels than there are concepts per task, though we didn't investigate this specifically.

In summary, we suggest using Complete Enumeration (or its sister Shortcut method) for main-effects-only designs. If detecting and measuring interactions is the primary goal, the Random approach is favored. If the goal is to estimate both main effects and interactions efficiently, then overlap should be built into the design, at least for the attributes involved in the interaction. Using more concepts per task than attribute levels with Complete Enumeration, or using the compromise Balanced Overlap approach, would seem to be good alternatives.

More details on these two new design methods will be available in the forthcoming CBC for Windows documentation.

## Lighthouse Studio

Lighthouse Studio is our flagship software for producing and analyzing online and offline surveys. It contains modules for general interviewing, choice-based conjoint (CBC), adaptive choice-based conjoint (ACBC), adaptive conjoint analysis (ACA), conjoint value analysis (CVA), and MaxDiff exercises.