CBC lets you specify one or more "fixed" tasks. "Fixed" means that every respondent is shown the same choice task, with the product concepts defined in exactly the same way. You must define your own fixed tasks; CBC does not design them for you. (By default, all fixed tasks are initialized to level "1" for each attribute.)
Most CBC users will opt for a (controlled) randomized design, since these designs are quite efficient, automatic, and permit great flexibility in analysis. Some CBC users with design expertise may choose to implement a fixed design (consisting of one or more blocks), which is most easily done by importing the design from a .csv file. A fixed design can be slightly more efficient than a (controlled) randomized design in measuring the particular effects for which it was designed.
Some researchers use a mix of both (controlled) randomized tasks and fixed tasks within the same CBC study. For example, some researchers recommend putting the client's base case competitive scenario into the questionnaire as a fixed task and including that fixed task during utility estimation. It is argued that this can make the market simulator's estimates for the base case, as well as for variations on the base case, even more precise.
Some researchers use fixed tasks as holdout observations, not to be included in utility estimation (described below).
Analyzing In-Sample Holdout Concepts
Some researchers add a few fixed tasks as holdouts to their study, for the purpose of model validation. If you have specified fixed holdout choice tasks within the CBC questionnaire, you can analyze the results by exporting the responses under the Field | Data Manager area. The traditional approach for holdout tasks has been an in-sample approach, where respondents complete both the experimentally designed tasks (Random tasks) and some additional holdout tasks that are not included in the utility estimation.
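As a rough illustration (not a feature of Lighthouse Studio), the sketch below shows how you might compute a holdout hit rate once you have exported the holdout responses and scored each task with your model's predicted choice. The file name and column names (resp_id, task, chosen_concept, predicted_concept) are assumptions for illustration only, not a fixed export format.

# Minimal sketch, assuming holdout responses exported to a CSV with hypothetical
# columns: resp_id, task, chosen_concept (respondent's answer), and
# predicted_concept (the concept your model predicts for that task).
import pandas as pd

df = pd.read_csv("holdout_responses.csv")  # hypothetical export file

# Overall hit rate: share of holdout answers the model predicts correctly
hit_rate = (df["chosen_concept"] == df["predicted_concept"]).mean()
print(f"Holdout hit rate: {hit_rate:.1%}")

# Hit rate by holdout task, to spot any task the model predicts poorly
by_task = (df.assign(hit=df["chosen_concept"] == df["predicted_concept"])
             .groupby("task")["hit"].mean())
print(by_task)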
It isn't necessary to have many holdout sets to check the general face validity of your utilities, but if you want to make relatively fine comparisons between competing models, you should use at least five holdout tasks, and preferably more. Also, if you want to use holdout choices to identify and eliminate inconsistent respondents, you need several fixed choice tasks.
If you do have several choice sets, it's useful to repeat at least one of them so you can obtain a measure of the reliability of the holdout choices. Suppose your conjoint utilities are able to predict only 50% of the respondents' holdout choices. Lacking data about reliability, you might conclude that the conjoint exercise had been a failure. But if you were to learn that repeat holdout tasks had reliability of only 50%, you might conclude that the conjoint utilities were doing about as well as they possibly could and that the problem lies in the reliability of the holdout judgments themselves.
Some researchers repeat choice tasks to achieve a measure of test-retest reliability. This type of analysis often is done at the individual level. If you plan to analyze holdout choice tasks at the individual level, you should export the data for analysis using another software program.
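The sketch below shows one way such an individual-level analysis might look: computing test-retest reliability from a repeated holdout task and flagging respondents who answered inconsistently. As above, the file name and column names (resp_id, first_choice, repeat_choice) are illustrative assumptions, not a fixed format.

# Minimal sketch: test-retest reliability from a repeated holdout task.
# Assumes a hypothetical CSV with one row per respondent and columns
# resp_id, first_choice, repeat_choice (answers to the original and repeated task).
import pandas as pd

df = pd.read_csv("repeated_holdout.csv")  # hypothetical export file

# Share of respondents who chose the same concept both times they saw the task
reliability = (df["first_choice"] == df["repeat_choice"]).mean()
print(f"Test-retest reliability: {reliability:.1%}")

# Flag respondents who answered the repeated task differently, e.g., as
# candidates for exclusion when screening for inconsistent respondents
inconsistent = df.loc[df["first_choice"] != df["repeat_choice"], "resp_id"]
print(f"{len(inconsistent)} respondents changed their answer on the repeated task")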
Our Updated View on Holdout Concepts
Historically, we have recommended including a few fixed holdout tasks in CBC interviews (in addition to the controlled Random tasks), yet we don't use fixed holdout tasks very often in our own consulting practice. However, when we conduct academic research and publish results at technical conferences or in journals, we often include holdout tasks. Recently, we've placed more value on out-of-sample holdout validation rather than the in-sample validation that is often done using fixed holdout tasks. Out-of-sample validation involves validating the model using both choice tasks and respondents not used in the model estimation. It is a stronger validation procedure than within-respondent (in-sample) holdout validation because it tests whether the model can generalize (predict) well to new data (new choice tasks completed by new respondents), and especially to actual purchases made by buyers in the real world. In-sample holdout validation runs the risk of overfitting to the idiosyncratic characteristics of the sample data and thus overstating predictability.
Out-of-sample validation can be done without using "fixed holdout tasks" at all, by leveraging portions of the controlled experimental design for respondents held out of model estimation. A good strategy for out-of-sample holdout validation that doesn't use up extra respondent time on holdouts is to field a study with a limited number of versions (such as 6 or 8) and to do the extra work to implement k-fold holdout validation during analysis. Consider a situation in which you field a 12-task CBC study designed using the default "Balanced Overlap" randomized approach. If you use just six versions of the design instead of the default 300 versions, you typically will still achieve a statistically efficient experimental design (make sure to use the Test Design functionality to assess this). During analysis, you can estimate HB utilities using respondents who saw versions 1-5 while holding out the choices of respondents who received version 6 for validation. You then repeat this process five more times, rotating which five versions are used for utility estimation and which remaining version is held out for validation. Finally, average the holdout predictability across the six folds. Across the six-fold validation, you will have used six versions times 12 choice tasks, or 72 total holdout tasks, each answered by 1/6 of the sample. (Note: Lighthouse Studio does not automate k-fold validation, so this needs to be done manually; see the sketch below.)
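Because Lighthouse Studio does not automate k-fold validation, one way to handle the bookkeeping is to split the exported choice data by design version and write a separate estimation file and holdout file for each fold, as in the minimal sketch below. The file names and the "version" column are assumptions about how the exported data are organized, not a built-in format. You would then run HB on each estimation file with your usual tool, predict the held-out respondents' choices, and average the hit rates across the six folds.

# Minimal sketch of the fold bookkeeping for the six-fold rotation described above.
# Assumes a hypothetical export with one row per respondent-task and a "version"
# column identifying which design version each respondent received.
import pandas as pd

data = pd.read_csv("cbc_responses.csv")       # hypothetical export file
versions = sorted(data["version"].unique())   # e.g., [1, 2, 3, 4, 5, 6]

for fold, holdout_version in enumerate(versions, start=1):
    train = data[data["version"] != holdout_version]  # respondents used for HB estimation
    test = data[data["version"] == holdout_version]   # respondents held out for validation
    train.to_csv(f"fold{fold}_estimation.csv", index=False)
    test.to_csv(f"fold{fold}_holdout.csv", index=False)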
After you've completed the k-fold validation, you can run a standard utility analysis using all versions of the design and all respondents to deliver standard reports and simulators to clients.