Fit statistic for ACA surveys

Last Updated: 09 May 2014
What is the fit statistic for ACA surveys?

ACA's default OLS utility estimation reports a "correlation coefficient," which is actually an R-squared measuring the agreement between the utilities estimated in the first sections of ACA (self-explicated priors plus Conjoint Pairs) and the purchase-intent ratings given in the final section (Calibration Concepts).
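
To make concrete what this R-squared measures, here is a minimal Python sketch (not Sawtooth Software code) of computing such a fit for one respondent. The utilities, concept definitions, and ratings below are all invented for illustration.

```python
import numpy as np

# Hypothetical data for one respondent: ACA-estimated utilities for each
# attribute level (flattened), four calibration concepts described by the
# level indices they contain, and the respondent's purchase-intent ratings.
utilities = np.array([0.8, -0.3, 0.5, 0.1, -0.6, 0.4])
concepts = [[0, 2, 5], [1, 3, 4], [0, 3, 5], [1, 2, 4]]   # level indices per concept
ratings = np.array([80.0, 25.0, 70.0, 40.0])              # 0-100 purchase likelihood

# Total utility of each calibration concept = sum of its level utilities.
concept_utils = np.array([utilities[c].sum() for c in concepts])

# R-squared of the ratings against the concept utilities: with a single
# predictor this is just the squared Pearson correlation.
r = np.corrcoef(concept_utils, ratings)[0, 1]
print(f"calibration fit (R-squared): {r**2:.3f}")
```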

Since Calibration Concepts supply only a few observations (typically 4 to 7 purchase-intent ratings of the last product concepts shown to each respondent), a respondent who became fatigued, bored, or otherwise disengaged during those final questions can receive a very poor R-squared. Respondents are often simply inconsistent in answering the calibration questions, so the reported R-squared is not always a good indicator of whether a respondent was good or bad overall.

If you are using the superior ACA/HB utility estimation routine (available as an add-on license), the reported fit is an R-squared from the HB regression across the Conjoint Pairs section (the core part of the ACA survey).  We suspect that the R-squared reported by ACA/HB is somewhat more reliable than the one reported by the OLS routine for judging respondent quality.
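
For intuition, a per-respondent fit of this kind can be sketched as the R-squared of the observed graded pair responses against the predicted utility differences between the two concepts in each pair. The numbers below are invented, and this is only an illustration of the idea, not the HB routine's internal computation.

```python
import numpy as np

# Hypothetical Conjoint Pairs data for one respondent: the predicted utility
# difference between the left and right concept in each pair (from the
# estimated utilities), and the observed 9-point graded response recentered
# so that 0 means indifference.
utility_diffs = np.array([1.2, -0.4, 0.9, -1.5, 0.3, -0.8])
pair_responses = np.array([3.0, -1.0, 2.0, -4.0, 1.0, -2.0])

# Fit across the pairs: squared correlation between predicted and observed.
r = np.corrcoef(utility_diffs, pair_responses)[0, 1]
print(f"pairs fit (R-squared): {r**2:.3f}")
```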

We don't have a specific rule of thumb for an appropriate R-squared cutoff for removing respondents, but the statistic can be used in tandem with a few other signals to help identify "bad" respondents (see the sketch after this list), including:

1) Total time to complete the interview, or page time spent on conjoint or other questions

2) Straightlining or other signs of careless responding in the survey
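
As a rough illustration of combining the fit statistic with these other signals, here is a minimal Python sketch. All field names and thresholds are hypothetical, not official Sawtooth Software recommendations; the point is simply that a respondent looks more suspect when multiple flags agree.

```python
import numpy as np

# Hypothetical per-respondent screening data: fit (R-squared), total
# interview time in seconds, and answers to a battery of rating-scale
# questions used to detect straightlining.
respondents = {
    "r001": {"r_squared": 0.85, "seconds": 920, "grid": [5, 3, 4, 2, 5]},
    "r002": {"r_squared": 0.10, "seconds": 180, "grid": [3, 3, 3, 3, 3]},
}

def flags(resp, min_r2=0.3, min_seconds=300):
    """Return the list of warning flags raised for one respondent."""
    raised = []
    if resp["r_squared"] < min_r2:
        raised.append("low fit")
    if resp["seconds"] < min_seconds:
        raised.append("speeding")
    if len(set(resp["grid"])) == 1:          # identical answer to every item
        raised.append("straightlining")
    return raised

for rid, resp in respondents.items():
    raised = flags(resp)
    # Treat a respondent as suspect only when multiple signals agree.
    print(rid, "suspect" if len(raised) >= 2 else "ok", raised)
```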