I have, once again, two questions for you, and I would be very grateful for your answers.

But first a few words about my design:

I have four attributes, three with three levels and one with four, so the full factorial contains 3 x 4 x 3 x 3 = 108 different cards. I draw the 36 cards with the highest information content using the Fedorov exchange algorithm. Then I generate the choice sets using various methods and divide them into blocks. The design consists of three blocks with 12 choice sets each, and each respondent is assigned to one of the three blocks. Thus the choice sets of respondent 1 are the same as those of respondent 4, the choice sets of respondent 2 are the same as those of respondent 5, and so on. The designs are created using different R packages, e.g. 'support.CEs' (Mix&Match2), 'choiceDes', a custom function for sampling without replacement, ...
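To make the setup concrete, here is a minimal Python sketch of the candidate set and a point-exchange search for a D-optimal 36-card subset. (My real designs were built in R with the packages above; this is only an illustration of the idea, and the exchange loop is a simplified variant, not the exact Fedorov algorithm.)

```python
import itertools
import numpy as np

levels = [3, 4, 3, 3]                                  # four attributes -> 108 cards
cards = list(itertools.product(*[range(k) for k in levels]))

def effects_row(card):
    """Effects-coded row with intercept; last level of each attribute = reference."""
    row = [1.0]
    for k, lev in zip(levels, card):
        d = [-1.0] * (k - 1) if lev == k - 1 else [0.0] * (k - 1)
        if lev < k - 1:
            d[lev] = 1.0
        row += d
    return row

X_all = np.array([effects_row(c) for c in cards])

def log_det(idx):
    """log |X'X| for the cards indexed by idx (the D-optimality criterion)."""
    sign, ld = np.linalg.slogdet(X_all[idx].T @ X_all[idx])
    return ld if sign > 0 else -np.inf

rng = np.random.default_rng(1)
design = [int(i) for i in rng.choice(len(cards), size=36, replace=False)]

# Point-exchange: keep swapping a design card for a candidate card whenever
# that increases log |X'X|; stop when no swap improves the criterion.
improved = True
while improved:
    improved = False
    for pos in range(36):
        current = log_det(design)
        for cand in range(len(cards)):
            if cand in design:
                continue
            trial = design.copy()
            trial[pos] = cand
            if log_det(trial) > current + 1e-9:
                design, current, improved = trial, log_det(trial), True
```

The result is a locally D-optimal 36-card subset of the 108 candidates; the real Fedorov algorithm additionally uses rank-one update formulas instead of recomputing the determinant for every trial swap.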

Now I would like to test these designs in Lighthouse Studio using simulated respondents and, in particular, compare them with the Sawtooth methods (Balanced Overlap, Shortcut, ...).

Now the questions:

----------------------------------

1) When I insert these designs into Lighthouse Studio and test them (Legacy (OLS) Efficiency Test), I get the following warning for each of them: "We strongly encourage you to further investigate the efficiency of your design prior to fielding this study."

And this even though the efficiency is very close to 1. I suspect this is because I only have 3 blocks of 36 cards each?

Example:

Legacy (OLS) Efficiency Test
-------------------------------------------------------------
Att  Lev  Freq.  Actual  Ideal   Effic.  Label
1    1    36     (this level has been deleted)  DVU
1    2    36     0.2357  0.2357  1.0000  KVU
1    3    36     0.2357  0.2357  1.0000  BEG
2    1    27     (this level has been deleted)  Mix_Default
2    2    27     0.2887  0.2887  1.0000  Mix_EE
2    3    27     0.2887  0.2887  1.0000  Mix_Wind
2    4    27     0.2887  0.2887  1.0000  Mix_PV
3    1    36     (this level has been deleted)  Reg_0
3    2    36     0.2357  0.2357  1.0000  Reg_50
3    3    36     0.2357  0.2357  1.0000  Reg_100
4    1    36     (this level has been deleted)  Preis_0
4    2    36     0.2357  0.2357  1.0000  Preis_5
4    3    36     0.2357  0.2357  1.0000  Preis_10

The frequencies for Balanced Overlap are 3600 and 2700 respectively.

With the "Logit Efficiency Test Using Simulated Data", the standard errors of all my methods are fine and comparable to those of the Balanced Overlap method. So my question is: how should I deal with this warning? Shouldn't 36 cards (out of 108 possible) be sufficient for estimating main effects and two-way interaction effects? It is not quite clear to me why 300 blocks, i.e. an individual block for each respondent, would be superior to the 3 fixed blocks.
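For what it's worth, here is my own back-of-the-envelope parameter count (assuming effects coding; this is my arithmetic, not Sawtooth's internal calculation):

```python
# Count parameters for main effects and all two-way interactions,
# assuming effects coding: (k-1) columns per attribute, products for pairs.
from itertools import combinations

levels = [3, 4, 3, 3]
main = sum(k - 1 for k in levels)                                   # 9
twoway = sum((a - 1) * (b - 1) for a, b in combinations(levels, 2)) # 30
print(main, twoway, main + twoway)                                  # 9 30 39
```

If my counting is right, 39 parameters against only 36 distinct cards suggests the full two-way interaction model would actually be over-parameterized on my candidate subset, which may be part of what the warning is hinting at.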

The help function of Lighthouse Studio says:

“We generally recommend that you include enough versions of the questionnaire so that the number of random choice tasks times the number of questionnaire versions is greater than or equal to 80 (assuming no prohibitions, and typical attribute level specifications).”

In my case, 12 x 3 = 36 < 80, so is the number of blocks too small?
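Spelling out the rule of thumb (my arithmetic; the threshold of 80 comes from the help text quoted above):

```python
# Lighthouse Studio rule of thumb: tasks_per_version * versions >= 80.
import math

tasks, versions = 12, 3
print(tasks * versions)        # 36 -> below the recommended 80
print(math.ceil(80 / tasks))   # 7 versions would be needed at 12 tasks each
```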

Maybe I should ask a more general question: Why are more blocks better and how many are actually necessary?

2) Is it possible to implement the so-called "Sequential Dual Response (SDR)" with Sawtooth? With this method, so-called "forced choice sets" are presented first; in my case, the 12 choice sets with 3 cards each, but without a none-option. The second step uses these results by displaying only the selected card of a choice set plus the none-option (so-called "free choice sets"). In contrast to traditional dual response, these two phases are separate: first all forced choice sets, then all free choice sets. For reasons of time, I imagine this in my case as follows:

i) A respondent answers the 12 forced choice sets.

ii) Only 6 of the 12 choice sets are displayed again as free choice sets.

Is it possible to implement something like this with some programming effort? If so, how?

Thanks in advance!

Thank you for the quick answers! You are great.

The small number of blocks was (and is) due to the fact that I wanted to implement the design manually on a survey platform. Right now I am torn and may end up using Sawtooth after all, which is why question 2 is also very important for me.

On the survey platform, the 12 choice sets per block must be implemented manually (without holdouts), because each block becomes a separate branch of a tree to which a respondent is assigned. With my methods too, e.g. optimal blocking using the algorithm of Cook and Nachtsheim (1989), the OLS standard errors decrease as the number of blocks increases. Interestingly, the number of blocks behaves much like the number of respondents: beyond a certain point there is hardly any change, and the marginal effect of each additional block becomes smaller and smaller.
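A toy simulation of these diminishing returns (my own sketch, with random versions rather than the Cook & Nachtsheim blocking): with respondents spread evenly over B versions of 36 cards, the pooled per-card information matrix approaches that of the full candidate set as B grows, so an A-criterion proxy for the standard errors flattens out.

```python
# Pool the information matrices of B random 36-card versions and track
# sqrt(trace(M^-1)), a rough proxy for average OLS standard errors.
import itertools
import numpy as np

levels = [3, 4, 3, 3]
cards = np.array(list(itertools.product(*[range(k) for k in levels])))

def effects(X):
    """Effects-coded design matrix with intercept (last level = reference)."""
    cols = [np.ones(len(X))]
    for j, k in enumerate(levels):
        for lev in range(k - 1):
            cols.append((X[:, j] == lev).astype(float) - (X[:, j] == k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(7)

def se_proxy(n_versions):
    # Average the per-version moment matrices, then invert once.
    M = np.zeros((10, 10))
    for _ in range(n_versions):
        idx = rng.choice(len(cards), size=36, replace=False)
        Z = effects(cards[idx])
        M += Z.T @ Z / 36
    M /= n_versions
    return float(np.sqrt(np.trace(np.linalg.inv(M))))

proxies = [se_proxy(b) for b in (1, 3, 10, 50)]
```

With this seed the proxy shrinks from 1 version to 50, but most of the gain comes early, which matches what I see with my blocking algorithm.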

3 blocks, 12 choice sets per respondent:

As you can see, all main effects as well as first-order interaction effects can be estimated. The Balanced Overlap is still better, but the difference is small.

@ Keith

Unfortunately, I was not able to follow your descriptions. :-/

Here again, what I would like to implement:

In the first phase, a respondent gets 12 choice sets, each with three electricity tariffs to choose from. For each choice set, they must choose one tariff; there is no none-option. That is where the name "forced choice sets" comes from.

An example:

In the seventh choice set, the respondent chooses tariff 2; in the eighth choice set, tariff 1. After the 12 forced choice sets, phase two starts: the first so-called "free choice set" shows only the selected alternative from forced choice set seven, i.e. tariff 2 vs. the none-option. The second free choice set shows only the selected alternative from forced choice set eight, i.e. tariff 1 vs. the none-option.

The free choice sets do not have to come from choice sets 7-12 specifically; a random draw of forced choice sets would also be possible. The important thing is that there are no more than 6 draws. With traditional dual response, all 12 answers would be queried again, and the two phases (forced/free) would not be separated.
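Expressed as code, the selection logic I have in mind would be something like this (hypothetical data structures, not Lighthouse Studio syntax):

```python
import random

def build_free_sets(forced_choices, n_free=6, seed=0):
    """forced_choices: dict {task_number: chosen_card}. Returns a list of
    (task_number, chosen_card) pairs, each to be shown vs. the none-option."""
    rng = random.Random(seed)
    tasks = rng.sample(sorted(forced_choices), k=n_free)  # random draw of tasks
    return [(t, forced_choices[t]) for t in tasks]

# Hypothetical forced-choice answers for the 12 tasks:
forced = {t: f"tariff_{(t % 3) + 1}" for t in range(1, 13)}
free_sets = build_free_sets(forced)   # 6 free choice sets, card vs. none
```

The question is whether the equivalent of `build_free_sets` can be wired into the survey flow, e.g. via constructed lists or custom scripting.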

I hope you understand what I mean. The question is whether this method can be implemented with Lighthouse Studio?

The concept comes from:

Christian Schlereth, Bernd Skiera (2017): Two New Features in Discrete Choice Experiments to Improve Willingness-to-Pay Estimation That Result in SDR and SADR: Separated (Adaptive) Dual Response. Management Science 63(3):829-842. https://doi.org/10.1287/mnsc.2015.2367

The authors write on page 841:

“The separation feature ensures that context effects with respect to the no-purchase option are not a concern, because the forced choice questions do not contain a no-purchase option, and all free choice questions contain only one product alternative, disconnected from the forced choice questions. In addition, because of the strict separation of forced and free choice questions, both SDR and SADR avoid choice deferral, a major problem for dual response experiments.”