Lighthouse Studio

Understanding Willingness to Pay (WTP)

 

Introduction

Researchers and their clients often look for intuitive ways to report preference for attribute levels in monetary terms (e.g., willingness to pay or WTP).  Referring to preference for attribute levels in terms of money is certainly more intuitive than interpreting utility scores on the logit scale.  However, there is much to know about proper estimation and interpretation of WTP results, so we encourage you to take the time to study this documentation.

WTP for Differences in Levels

With conjoint analysis, preferences for levels are estimated on a relative basis.  We learn how much more preferred one level of an attribute is versus another within the same attribute, but we don’t learn the absolute desirability of any single level.  For example, if we’ve included three levels of speed (low, medium, and high), we choose a reference level (such as low speed) and estimate the monetary value of the other two levels with respect to that reference level.  The relative WTP estimates might be:

Low speed          N/A (reference level)
Medium speed       $50 (relative to Low speed)
High speed         $120 (relative to Low speed)

WTP Is Not Additive across Multiple Features

Most approaches, including ours, for estimating WTP for attribute levels focus on a single change in a product feature rather than a series of simultaneous feature improvements across multiple attributes.  A common error when interpreting WTP estimated one feature at a time is to assume that WTP is additive across attributes.  For example, if each of six features has a measured WTP of $50 when individually and independently enhancing a base case product, it would be extrapolating beyond the assumptions of our WTP approach to conclude that the WTP for all six features collectively added to a base case product is 6 x $50, or $300.  Simply summing WTP values fails to account for diminishing marginal returns for cumulative product improvements.  It also ignores the increasing resistance imposed by buyers’ budgetary constraints: raising the price by the cumulative amount may well push the price into a new region of the part-worth utility function that reflects greater sensitivity to further price increases.

WTP analysis for multiple features taken simultaneously could be done manually; it just requires simulating the firm’s base case product versus a version of the product enhanced by multiple features (against relevant competition and the None alternative) and finding the indifference price via trial and error (or asking the software to do the hard work by using the SolveForShare() function).
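To make the mechanics concrete, the sketch below (written in Python with an assumed simulate_share() helper, not Lighthouse Studio scripting) shows the trial-and-error search as a simple bisection over price; within the market simulator, the SolveForShare() function performs this kind of search for you.

# Minimal illustrative sketch (assumed helper names, not Lighthouse Studio code).
# simulate_share(price, enhanced) is a hypothetical stand-in for a share of
# preference simulation of the firm's product -- at the given price, with or
# without the bundle of added features -- against fixed competitors and None.

def indifference_price(base_price, simulate_share, price_hi, tol=0.01):
    """Find the price at which the enhanced product earns the same share
    of preference as the unenhanced product did at base_price.
    price_hi should be high enough that the enhanced product's share
    falls below the base case share at that price."""
    base_share = simulate_share(base_price, enhanced=False)   # base case share

    lo, hi = base_price, price_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if simulate_share(mid, enhanced=True) > base_share:
            lo = mid   # enhanced product still beats the base share; price can go higher
        else:
            hi = mid   # share has fallen below the base case; back off the price
    return (lo + hi) / 2.0

# WTP for the bundle of enhancements (given competition):
#   wtp = indifference_price(base_price, simulate_share, price_hi) - base_price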

A General Caution

Before we delve deeper into our routines for estimating WTP, we should emphasize that for at least three decades we have recommended that researchers resist reporting WTP.  Instead, we have recommended that share of preference sensitivity analysis be used for reporting to general audiences the value of attribute levels.  With sensitivity analysis, we report the gain in share of preference (relative to a base case) associated with each change of attribute level involved in our conjoint experiment. Even after the release of WTP routines within Sawtooth Software’s market simulator, we still recommend emphasizing share of preference sensitivity reporting.

Our approach to WTP resolves many of the concerns we’ve had over the decades regarding WTP.  The key difference that makes our approach superior to previous practice is that we estimate WTP while accounting for realistic competition.  It would be appropriate to refer to our approach as WTP, given competition.

Failure to Account for Competition

In our opinion, failure to account for competitive alternatives is the main weakness in most commonly implemented WTP approaches.  We’ve found that this omission can lead to overstated WTP estimates.  In a 2001 paper, later incorporated within the book Getting Started with Conjoint Analysis (Orme 2001; Orme 2004), we gave an example based upon the 1960s TV show Gilligan’s Island illustrating how failure to account for competition can inflate WTP estimates.  The cast is marooned on the island and a boat with capacity for two passengers appears on the scene, ready to sell passage back to freedom to the highest bidders.  The rich Mr. Howell appears willing to pay millions of dollars for passage for his wife “Lovey” and himself—until a second, equally seaworthy boat appears offering passage to freedom for $5000.  Mr. Howell of course chooses the $5000 option.  The point of this illustration is that even though Mr. Howell is willing to pay over a million dollars, the availability and price of substitute goods in the marketplace mean the firm (the boat) cannot capture this amount.  If WTP is meant to represent the amount buyers are willing to pay the firm for enhanced features given the current marketplace, its calculation should involve relevant competition and also the None alternative if it is available.

Two common approaches to WTP estimation do not consider competition or the ability to opt out and usually lead to inflated estimates of WTP:

1. The algebraic approach, which computes dollars per utile and uses this to convert differences in utility between features to monetary equivalents (a small worked illustration follows this list).

2. The two-product market simulation approach, which simulates respondents choosing between just two versions of the product: one with and one without the enhanced feature.  The price for the enhanced version is adjusted upward via trial-and-error until the shares are split 50/50.  The price difference that equalizes the shares of preference is taken as the WTP.
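For concreteness, here is a small worked illustration (with made-up numbers) of the algebraic conversion.  Suppose the part-worth utilities show that moving from a $100 price to a $200 price costs 2.0 utiles; one utile is then valued at roughly $50, and a feature upgrade worth 1.0 utile is assigned a WTP of about $50, with no reference to what competitors charge or whether buyers could simply opt out:

# Illustrative numbers only -- the algebraic ("dollars per utile") approach.
price_low, price_high = 100.0, 200.0        # prices included in the study
u_price_low, u_price_high = 1.0, -1.0       # part-worth utilities for those prices
u_feature_gain = 1.0                        # utility gain for the enhanced feature

dollars_per_utile = (price_high - price_low) / (u_price_low - u_price_high)  # $50 per utile
wtp_algebraic = u_feature_gain * dollars_per_utile                           # $50

print(dollars_per_utile, wtp_algebraic)     # 50.0 50.0

The two-product simulation approach shares the same blind spot: because the only alternatives in the simulation are two versions of the firm’s own product, no substitute good or None alternative ever disciplines the resulting estimate.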

Based on a meta-analysis of nine CBC studies we conducted, we find that, comparing medians, our approach leads to WTP estimates that are on average about 10% lower than those from the two-product simulation approach and about 20% lower than those from the algebraic approach.  For three of the datasets, our approach yields WTP estimates that are only about 50% of those from the algebraic approach.

Simulation-Based WTP with Proper Competitive Context

In 2001, we recommended the approach we take here with our software: a competitive-set market simulation approach for estimating WTP that properly accounts for competitive alternatives in the marketplace as well as the None alternative (Orme 2001).  The approach involves first simulating market choice for the unenhanced version of the firm’s product against a rich set of competitive offerings and the None alternative.  Next, the firm’s product is enhanced with the new feature and, via trial-and-error (or, preferably, an automated search algorithm), its price is raised until its share of preference falls back to the original base case level; that increase in price is taken as the WTP.  In real marketplaces, buyers are rarely limited to just one brand to obtain enhanced product features; they can select from many alternatives to achieve the same or compensating product benefits.  Or, they can opt out.  When competition is accounted for, WTP estimates are lower and more realistic.  Our approach focuses the analysis on respondents on the cusp of choice, such that buyers who are not very interested in the product enhancement do not factor much into the WTP estimation.  It would be appropriate to refer to our approach as WTP, given competition.
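As a complementary illustration (again using hypothetical utilities and helper names rather than Lighthouse Studio code), the sketch below computes shares of preference for a scenario containing the firm’s product, two competitors, and the None alternative using the logit (exponentiate-and-normalize) rule that underlies Share of Preference simulations; a function built this way could serve as the simulate_share() helper in the earlier price-search sketch.

import math

# Illustrative sketch with hypothetical utilities (not Lighthouse Studio code).
# In practice, each alternative's total utility is the sum of the part-worths for
# its attribute levels (with price utility typically interpolated between levels).

def shares_of_preference(total_utilities):
    # Logit rule: exponentiate each alternative's total utility and normalize.
    exp_u = [math.exp(u) for u in total_utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Hypothetical total utilities for: [firm's product, competitor A, competitor B, None]
scenario_utilities = [1.2, 0.8, 0.5, 0.0]
print(shares_of_preference(scenario_utilities))   # firm's share is the first element

# Averaging each alternative's share across respondents gives the simulated market
# shares; the firm's averaged share is the base case figure used in the WTP price search.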

Because our WTP approach can require a thousand or more simulations across sampled competitive scenarios, we recommend using the Share of Preference simulation method rather than Randomized First Choice (RFC).  In our opinion, the practical benefit of speed outweighs whatever benefit RFC's correction for product similarity might offer when computing WTP.

Upstream Steps for Improving WTP Analysis

We should note that hypothetical bias (e.g., respondents in questionnaires neither spending real money nor having to live with the consequences of their choices), interviewing the wrong people, and poor questionnaire design can inflate WTP estimates.  We’ve outlined some ideas for improvement in these respects in our book, Becoming an Expert in Conjoint Analysis (Chrzan and Orme 2017).  Noisy/bad data can also lead to exaggerated WTP, and steps should be taken to remove respondents who appear to be answering randomly or completely ignoring price (Allenby et al. 2014).  Sometimes data cleaning for noisy respondents and/or respondents with reversals can remove as much as 50% of the data, though in our experience it’s more typical to need to delete about 15% to 25% of the sample.
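As one simple example of such cleaning (a hedged sketch, not the procedure from Allenby et al. 2014), a researcher might flag respondents whose individual-level price part-worths show a reversal—utility rising as price rises—and then examine or remove them before computing WTP:

# Hedged illustration with made-up data: flag respondents whose price part-worths
# show a reversal (utility increasing as price increases), one common screen
# applied before WTP analysis.

def has_price_reversal(utilities_by_increasing_price, tolerance=0.0):
    """True if any step to a higher price raises utility by more than tolerance."""
    pairs = zip(utilities_by_increasing_price, utilities_by_increasing_price[1:])
    return any(later > earlier + tolerance for earlier, later in pairs)

# price_utils maps respondent id -> part-worth utilities for increasing price levels
price_utils = {
    "r001": [1.4, 0.5, -0.6, -1.3],   # monotonically decreasing: keep
    "r002": [0.9, 1.1, -0.2, -1.8],   # reversal at the second price level: flag
}
flagged = [rid for rid, u in price_utils.items() if has_price_reversal(u)]
print(flagged)   # ['r002']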

 

Created with Help & Manual 8 and styled with Premium Pack Version 4 © by EC Software