low Pct.Cert but still a good model?

Hi

I have read a lot of posts in this forum about Percent Certainty and other quality indicators for HB analysis, but some confusion remains for me, so let me give you a short overview of my thoughts. Hopefully you'll see what I'm after.

On the one hand, there is the survey with the responses as they are. You cannot change those, so in that sense there is no good or bad data.

On the other hand, the HB analysis provides model-fit parameters such as Percent Certainty, RLH, and average variance. As far as I understand, they indicate how well a model fits compared to a null model (one where the responses are given at random). So one can more or less say that a model with a better fit (i.e., higher values on those indicators) reflects some pattern in the respondents' choices (otherwise the choices would be random). But sometimes the data do not show such clear patterns, because many different opinions are legitimate (e.g., in studies about preferences for future developments).
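For anyone following along, here is a minimal sketch of how these two fit statistics relate to the null model. It is not Sawtooth's implementation; the function name and the example probabilities are made up for illustration, and it assumes every task offers the same number of alternatives.

```python
import numpy as np

def fit_statistics(chosen_probs, n_alternatives):
    """RLH and Percent Certainty (McFadden's rho-squared) from the
    probabilities a fitted model assigns to the chosen alternatives."""
    chosen_probs = np.asarray(chosen_probs, dtype=float)
    n_tasks = chosen_probs.size

    log_lik_model = np.sum(np.log(chosen_probs))           # LL of the fitted model
    log_lik_null = n_tasks * np.log(1.0 / n_alternatives)  # LL of purely random choice

    rlh = np.exp(log_lik_model / n_tasks)                  # geometric mean choice probability
    pct_certainty = 1.0 - log_lik_model / log_lik_null     # 1 - LL(model)/LL(null)

    return rlh, pct_certainty

# Example: 10 tasks with 4 alternatives each, so chance-level RLH would be 0.25.
probs = [0.55, 0.40, 0.62, 0.30, 0.48, 0.70, 0.35, 0.52, 0.44, 0.60]
rlh, pct_cert = fit_statistics(probs, n_alternatives=4)
print(f"RLH = {rlh:.3f}, Percent Certainty = {pct_cert:.3f}")
```

A random responder would land near RLH = 0.25 and Percent Certainty = 0, which is why both statistics are read as "distance above chance."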

Also, models with weak quality parameters can be tuned with priors and prior degrees of freedom (e.g., with the model explorer), cleaning of respondents (completion time, continuity of replies, individual RLH, etc.), holdout tasks, and so on, all with the goal of pushing the quality indicators to a better level, at the risk of overfitting the model (and I still don't fully understand when that happens).
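One of the cleaning steps mentioned above (screening on individual RLH) can be sketched roughly as below. This is only an illustration, not a recommendation: the function name and the 1.5x-chance cutoff are arbitrary assumptions, and in practice the threshold is a judgment call.

```python
import numpy as np

def flag_low_rlh(individual_rlh, n_alternatives, multiplier=1.5):
    """Flag respondents whose individual RLH falls below a multiple of the
    chance-level RLH (1 / number of alternatives per task)."""
    chance_rlh = 1.0 / n_alternatives
    cutoff = multiplier * chance_rlh
    rlh = np.asarray(individual_rlh, dtype=float)
    return rlh < cutoff  # True = candidate for removal

# Example: 4 alternatives per task -> chance RLH = 0.25, cutoff = 0.375.
rlh_values = [0.62, 0.31, 0.48, 0.22, 0.55]
print(flag_low_rlh(rlh_values, n_alternatives=4))
# -> [False  True False  True False]
```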

So, when I have data with low indicator values (e.g., a model Percent Certainty below 0.5), cleaning the data might be the way to go. However, sometimes all of those tuning steps have little effect and the quality indicators remain low.

I have also noticed that even when the quality indicators are low, the content can still make sense. For example, the alpha draws may still show significance for nearly all attribute levels.

What does that mean? If the attribute levels are significant even in models with lower fit, what is the difference from data with high quality-indicator values? Is there a threshold at which one can say that a model fit is good or bad?

And if so, isn't a good or a bad fit also related to the field of interest? The literature says you want to be above 0.6, better 0.65, Percent Certainty, but I don't understand why. Sometimes I think it is not necessary for the simulation results to be accurate to the third decimal place, as it may be more important to capture the tendencies (like population preference behavior).

I made some hands-on comparisons between models with a good fit (above 0.6 Percent Certainty) and bad ones (0.4-0.5). Actually, the differences in the simulation reports were not substantial (from my point of view, though I may be wrong here). There was a correlation of 0.88 between the simulation matrices of a model with 0.62 and one with 0.48 Percent Certainty.
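For reference, the kind of comparison described above can be reproduced roughly like this. The share-of-preference numbers below are invented purely for illustration (they are not the actual study results); the idea is simply to flatten two simulation matrices (scenarios x products) and correlate them.

```python
import numpy as np

# Hypothetical share-of-preference matrices from two HB runs (rows = scenarios,
# columns = products); values are illustrative only.
shares_higher_fit = np.array([[0.42, 0.33, 0.25],
                              [0.50, 0.20, 0.30],
                              [0.38, 0.41, 0.21]])

shares_lower_fit = np.array([[0.45, 0.30, 0.25],
                             [0.47, 0.24, 0.29],
                             [0.35, 0.44, 0.21]])

r = np.corrcoef(shares_higher_fit.ravel(), shares_lower_fit.ravel())[0, 1]
print(f"Correlation between simulation matrices: {r:.2f}")
```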

Hopefully someone can shed some light on this, as I sometimes struggle with those low indicator values and really do not know how to deal with them.

Thanks and all the best.
Have a good night,
Boris
asked Apr 4 by bs77 Bronze (710 points)

1 Answer

Boris,

I'm not aware of an absolute threshold one should shoot for with McFadden's rho-squared (what our software labels as Percent Certainty). I don't know how you concluded that your model with a rho-squared of 0.50 had a low fit, except that it was lower than the 0.60 or 0.65 you think you need. Rho-squared fit statistics do tend to be lower than the R-squared statistics you get in regression models, so don't hold yourself to that standard. In short, I wouldn't use rho-squared as an absolute indicator of quality.
answered Apr 8 by Keith Chrzan Platinum Sawtooth Software, Inc. (73,500 points)
Hey Keith
Thanks for your reply.
I am not entirely sure, but I think I took that benchmark from a presentation by Bryan Orme, where he said something like "You need to be at 0.6, but you want to be over 0.65." But maybe I am misinterpreting something. However, if that is not so relevant after all, I feel a little relieved.

So basically you are saying that McFadden's rho-squared is a fit indicator, but whatever it looks like, it is more important that the results of the HB analysis make sense in terms of interpretation, right?

greetings, Boris
...