Testing for Managerial Significance

You’ve often seen researchers report a finding as “statistically significant.” But even if the finding is very unlikely to have occurred by chance, is the difference in ratings, or the size of the regression effect, big enough to amount to a hill of beans for a decision-maker?
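
As a quick, hypothetical illustration (not from the post): with a large enough sample, even a trivially small difference in mean ratings will test as statistically significant, which is exactly why effect size deserves a look alongside the p-value. The numbers below are made up.

```python
# Illustrative only: a tiny difference can be "statistically significant"
# with a big sample, yet far too small to matter to a decision-maker.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20000                                 # respondents per group
group_a = rng.normal(7.00, 1.5, n)        # mean rating 7.00 on a 10-point scale
group_b = rng.normal(7.05, 1.5, n)        # mean rating 7.05 -- only 0.05 higher

t_stat, p_value = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value    = {p_value:.4f}")      # likely well below 0.05 at this n
print(f"difference = {group_b.mean() - group_a.mean():.3f} rating points")
print(f"Cohen's d  = {cohens_d:.3f}")     # a trivially small effect size
```

The p-value says the difference is probably real; the effect size and the raw 0.05-point gap say whether it is worth acting on.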

We've updated the MaxDiff Analyzer!

With a new look, new home, and new features, there's a lot to love about the new MaxDiff Analyzer.

How MaxDiff Is a Better Measurement Technique than Rating Scales

At the April 2019 Quirk’s Event in Chicago, David Hengehold (P&G) and Megan Peitz (Numerious) showed how MaxDiff (best-worst scaling) is often a better survey measurement technique than rating scales. David shared how P&G uses MaxDiff with great results, and the two walked through an interactive MaxDiff survey demo for ice cream preferences.

Can Default Main Effects Conjoint Models Do Well with Brand/Price Curves?

One of the celebrated capabilities of CBC (Choice-Based Conjoint) is the ability to model different price curves for different brands/SKUs. People often think this necessarily involves adding interaction effects to the utility model or creating nested alternative-specific price attributes within each brand/SKU. But that’s not always the case…
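
To make the distinction concrete, here is a small, made-up sketch of what adding brand-by-price interaction effects to a utility model looks like; the brands, price points, and part-worths are hypothetical, and this is not the post's alternative approach.

```python
# Hypothetical part-worths, for illustration only.
# Price part-worth columns correspond to three price points, low to high.
import numpy as np

# Main-effects model: every brand shares one set of price part-worths.
brand_utils = {"Brand A": 0.6, "Brand B": -0.2}
price_utils = np.array([0.5, 0.0, -0.5])

# Brand x price interaction terms: each brand gets its own price curve.
interaction = {"Brand A": np.array([0.1, 0.0, -0.1]),   # steeper curve
               "Brand B": np.array([-0.1, 0.0, 0.1])}   # flatter curve

for brand, base in brand_utils.items():
    main_only = base + price_utils                 # same curve shape for all brands
    with_inter = main_only + interaction[brand]    # brand-specific curve shape
    print(brand, "main effects:", np.round(main_only, 2),
          "| with interactions:", np.round(with_inter, 2))
```

With the interaction terms, Brand A's utility drops faster as price rises than Brand B's; under main effects alone, both brands share the same price-curve shape, merely shifted up or down by the brand constant.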

Quick and Easy Power Analysis for Choice Experiments

Four clients have asked about this topic in the past month, so perhaps it's worth a post. It turns out that there is a quick way to use Lighthouse Studio to do power analysis for choice-based conjoint experiments.

Say, for example, that a client wants a choice-based conjoint experiment with four 3-level attributes and one 2-level attribute. The client plans to show each respondent 10 tasks, each with two alternatives plus a None option, and wants to know what size of coefficient (utility) the design will be able to detect as significant at a given level of confidence and power. Answering the client's question takes four quick steps.
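
The four Lighthouse Studio steps themselves are in the full post. As a rough sketch of the underlying idea, here is a generic Monte Carlo power analysis; the true utility size, sample size, number of replications, and random design generator below are illustrative assumptions, not the post's procedure.

```python
# Simulation-based power estimate for a CBC design (illustrative sketch).
# Assumed design: four 3-level attributes, one 2-level attribute,
# 10 tasks per respondent, two product alternatives plus a None.
import numpy as np
from scipy.optimize import minimize

LEVELS = [3, 3, 3, 3, 2]      # levels per attribute
N_TASKS, N_ALTS = 10, 2       # tasks per respondent, product alternatives per task
TRUE_BETA = 0.2               # size of the effects-coded utility we hope to detect
N_RESP = 300                  # sample size being evaluated
N_SIMS = 200                  # Monte Carlo replications
N_PAR = sum(l - 1 for l in LEVELS) + 1   # effects-coded columns + None constant
rng = np.random.default_rng(1)

def effects_code(level, n_levels):
    """Effects-code one attribute level (last level coded -1 on every column)."""
    x = np.zeros(n_levels - 1)
    if level < n_levels - 1:
        x[level] = 1.0
    else:
        x[:] = -1.0
    return x

def make_design():
    """Random design for one respondent: shape (tasks, alts + None, parameters)."""
    X = np.zeros((N_TASKS, N_ALTS + 1, N_PAR))
    for t in range(N_TASKS):
        for a in range(N_ALTS):
            col = 0
            for n_levels in LEVELS:
                X[t, a, col:col + n_levels - 1] = effects_code(rng.integers(n_levels), n_levels)
                col += n_levels - 1
        X[t, N_ALTS, -1] = 1.0            # None alternative gets only the None constant
    return X

def neg_loglik(beta, X, y):
    """Negative log-likelihood of an aggregate multinomial logit."""
    u = X @ beta
    u = u - u.max(axis=-1, keepdims=True)
    p = np.exp(u) / np.exp(u).sum(axis=-1, keepdims=True)
    chosen = np.take_along_axis(p, y[..., None], axis=-1)
    return -np.log(chosen).sum()

true_beta = np.zeros(N_PAR)
true_beta[0] = TRUE_BETA                  # put the effect of interest on one utility

hits = 0
for _ in range(N_SIMS):
    X = np.stack([make_design() for _ in range(N_RESP)])
    utilities = X @ true_beta + rng.gumbel(size=(N_RESP, N_TASKS, N_ALTS + 1))
    y = utilities.argmax(axis=-1)         # simulated choices
    fit = minimize(neg_loglik, np.zeros(N_PAR), args=(X, y), method="BFGS")
    se = np.sqrt(np.diag(fit.hess_inv))   # approximate standard errors
    hits += int(abs(fit.x[0] / se[0]) > 1.96)   # two-sided test at 95% confidence

print(f"Estimated power to detect a utility of {TRUE_BETA} with "
      f"{N_RESP} respondents: {hits / N_SIMS:.2f}")
```

Each replication simulates choices under an assumed true utility, fits an aggregate logit, and checks whether that utility tests as significant; the hit rate across replications approximates the power for that sample size and effect size.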
