I'm currently running my CBC study and wanted to close the survey soon. Before fielding the survey, I used the Test Design Efficiency function and obtained standard errors below 0.05 for all main effects with my expected sample size of 100 (the sample size per CBC is only modest because I have two treatment groups, for which I run two separate CBCs).
I have since been able to collect more data (n = 150 per treatment group), but the standard errors from the logit estimation exceed 0.05 for some attribute levels (the highest standard error is 0.06477). This only affects attributes with more levels (three); the remaining attributes have only two levels, and their standard errors are below 0.05.
As far as I can see, the higher standard errors are caused by the high proportion of tasks in which respondents selected "None": about 32% in treatment group 1 and 30% in treatment group 2. When testing the design, I assumed the typical None rate of 15% would apply.
Is there any way to further reduce the standard errors? Two options came to mind:
(a) Collect more data. However, since I have two treatment groups, I might need many additional respondents before I achieve standard errors below 0.05.
(b) Exclude respondents who chose the None option very often. Is this even a reasonable approach, given that it means deleting data that could otherwise be used for estimation?
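For option (a), here is a rough back-of-the-envelope sketch of how many respondents might be needed, assuming (an assumption on my part, not an exact rule) that aggregate-logit standard errors shrink roughly with 1/sqrt(n), using the 0.06477 figure from above:

```python
import math

def required_n(current_n, current_se, target_se):
    # If SE scales roughly with 1/sqrt(n), then to move from current_se
    # to target_se the sample must grow by (current_se / target_se)^2.
    return math.ceil(current_n * (current_se / target_se) ** 2)

# Current worst-case SE is 0.06477 at n = 150 per treatment group.
print(required_n(current_n=150, current_se=0.06477, target_se=0.05))  # → 252
```

So under that scaling assumption I would need roughly 250 respondents per treatment group (about 100 more each), which is why I am unsure the extra fieldwork is worthwhile.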
Are there any other possibilities?
I want to do the final analysis using HB. What would standard errors above 0.05 in the logit estimation imply for the HB analysis? Am I even "allowed" to use and analyze the data?
Thanks so much for your help!