Bryan Orme is the president of Sawtooth Software. He is the recipient of the American Marketing Association’s Charles Coolidge Parlin Award. He has published over one-hundred articles and white papers on conjoint analysis and related methods. He also authored the book Getting Started with Conjoint Analysis and co-authored the books Becoming an Expert in Conjoint Analysis and Applied MaxDiff. Bryan has also served as an ad hoc reviewer for the Journal of Marketing Research.
James Pitcher leads GfK/NIQ’s Global Marketing Science Brand Practice, crafting innovative analytical solutions to better measure the health of brands. He has spent over 15 years providing statistical advice and consultancy within the Market Research industry, working with clients across many different sectors and regions. James is an expert in conjoint analysis, brand research, pricing, consumer segmentation, and a wide range of multivariate techniques.
*Note: The following material is a transcription of a voiced conversation, lightly edited for readability.
Justin Luster: Okay, with that, let me introduce one of our guests, James Pitcher. James is a marketing science lead at NIQ, which recently merged with GfK. He builds innovative analytical solutions to better measure the health of brands. He has over 15 years of experience providing statistical advice and consultancy within the market research industry.
He's an expert in conjoint analysis, brand research, pricing, consumer segmentation, and a wide range of multivariate techniques. James, did I say that all right?
James Pitcher: Yeah, sounds good.
Justin Luster: Welcome, James. Where are you stationed now? Where are you joining us from?
James Pitcher: I'm joining you from London.
Justin Luster: London.
Nice. Cool. We also have the President of Sawtooth Software, Bryan Orme, with us as well. He's the recipient of the American Marketing Association's Charles Coolidge Parlin Award. He's also published over 100 articles and white papers on conjoint analysis and related methods, authored Getting Started with Conjoint Analysis, and co-authored the books Becoming an Expert in Conjoint Analysis and Applied MaxDiff.
He's also served as an ad hoc reviewer for the Journal of Marketing Research. Welcome, Bryan.
Bryan Orme: Thank you very much. Glad to be here.
Justin Luster: So Bryan’s joining us from our offices here in Provo, Utah. With that, you can share your screen and take it over.
Bryan Orme: Great. When we think about price sensitivity measurement, we could think about using real sales data, but that's not always available to you, and it sometimes has challenges and is backward-looking, as James is going to discuss. So with marketing research, we often have to figure out how to do it in surveys.
And it would be really nice if we could just ask people: how price sensitive are you, what's your price elasticity, and what's your willingness to pay for a certain feature? But of course we can't do that, and attempts to do so have largely failed. So what we do is create a choice experiment.
We make it look like the real world. We mimic the purchase process by showing products on the screen that respondents could see in the real world and could potentially choose or not choose. And rather than just showing one of these scenarios, across multiple scenarios we very carefully vary the prices and the features in independent ways, so that we can see what's driving people to choose things. Then we can create a model, typically a logistic regression model, that allows us to build a what-if simulator and predict what people will do (a tiny sketch of that idea follows below).
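To make that idea concrete, here is a minimal sketch of how a multinomial logit model turns part-worth utilities into shares of preference. This is only an illustration, not Sawtooth's actual implementation, and the part-worth numbers are made up:

```python
# Minimal multinomial logit "what-if" idea: utilities -> shares of preference.
# Part-worth values below are purely illustrative, not estimated from data.
import numpy as np

def shares_of_preference(utilities):
    """Softmax: each concept's share is exp(utility) / sum of exp(utilities)."""
    expu = np.exp(np.asarray(utilities, dtype=float))
    return expu / expu.sum()

# Total utility = brand part-worth + price part-worth for the price shown.
concept_utilities = [
    1.2 + (-0.5),   # Brand A shown at a mid price
    0.8 + (0.3),    # Brand B shown at a low price
    0.0,            # the "none" alternative
]
print(shares_of_preference(concept_utilities))  # three shares summing to 1
```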
And this has just worked out really well and has been refined over many years at Sawtooth Software conferences. Next slide, please. Now, you don't have to make it look text-based. If you have some good web development skills, some knowledge of CSS, HTML, and JavaScript, then within Sawtooth's platform or another platform you can make these tasks look very realistic.
And this is an example courtesy of one of our customers, Knowledge XL, that does excellent work in this area of customizing shelf-display CBC. Next slide. So some of the things that we've learned over the years have become best practices and common knowledge. Typically with these choice-based conjoint surveys, we're showing two to four product concepts on the screen if we're talking about durables, tech products, services, etc.
Many respondents are using small devices like cell phones. So we have to be cognizant of that and not show them too much at any one time. For CPG type research, where we're showing lots of SKUs, lots of brands on the screen, we often just show them graphically. So we can find that even on a cell phone, we could probably show 12 products at a time, maybe a 4x3 grid of those.
And if you can't do that and you really want to show a bigger display, you might have to request that respondents use a larger-format device to complete the survey, or you might have to invite people into a central site where they sit down at computers with big screens to be able to do it. If you're worried about mobile respondents and about how much information you're showing at a time, and you're doing this CPG-type work, another approach that's a little bit more advanced is to show perhaps just 10 to 15 products at a time using an evoked-set experimental design and analysis.
Ahead of time, you ask people what brands they consider, or what package types they consider, or whatever you need to do to narrow it down to the 10 to 15 products or so that would be in their evoked set, and then you only show them items in their evoked set during the CBC tasks. We do recommend that you include a none alternative, and we tend to prefer the dual-response none.
That's an approach where you first ask people to pick a product, and then you ask them: given what you know about the marketplace and your budget, would you really buy this or not? Next slide. Prior to showing respondents the choice tasks, you're going to get better pricing information and better preference information if you prime respondents about their past purchase experiences: what they're looking for, how much they care about brand versus price versus innovation, what brand they last purchased, whether they search for deals, whether they look at product reviews, etc. This is a good priming task, and you can also use some of these variables as covariates in your HB estimation to slightly improve the models.
Then show about eight to 12 choice tasks to each respondent, typically, which only take about 10 to 20 seconds per scenario to complete. Respondents do these rapidly, and you can easily fit a CBC exercise within a 10-minute survey, because the CBC section typically takes only three to five minutes.
During the first one to four tasks, respondents tend to be warming up and settling into a more consistent strategy. You'll see that respondents tend to pay a little bit more attention to brand and a little bit less attention to price in the first few tasks, but after that, they settle into a more realistic strategy, one we think better reflects what they're doing in the real world. Next slide. Make sure to clean out your bad respondents when doing pricing research. We regularly see 20 to 50 percent of respondents who are bad actors or answering randomly, because they just are not committing enough mental effort, or because you're catching them in a moment when they just want to go fast.
Respondents have good days and bad days, and sometimes you catch them on a bad day. But sometimes they're really bad actors: bots, cheaters, etc. And if you don't clean out your bad respondents, you're going to get inflated willingness to pay, your price sensitivity will be too low, and it's going to be very problematic to do good pricing research.
And don't just use one method to detect whether a person's bad. I'd recommend multiple methods, and look for multiple strikes to indicate that a person should be kicked out of a survey. We've got a fit statistic from hierarchical Bayesian estimation for CBC or for MaxDiff; it's called the root likelihood, or RLH.
And we have white papers about the threshold for that fit statistic, what's good enough or not for your study. Write me if you don't already know about those things; I'll be glad to share that information with you. Think about trap questions, honeypot questions, and open-ended questions where you look at the quality of the response and whether it looks like a bot or a real human answering you.
And there are interesting things that can be done. For example, at the last Sawtooth Software Conference, the winning presentation came from a lady named Layla at Numerius, who's done some interesting work with her colleagues on creating terrific, tricky, graphical-looking questions that bots have a hard time answering, to be able to catch bots. Next slide. I want to talk to you a little bit about setting up your price attribute. Now, sometimes the prices vary a lot between premium and discount offerings; this can happen in CPG, and it can also happen in technology or durables. You might be tempted to go in and say: we want to make sure that certain products are only shown at higher prices, because that's what happens in the real world.
And certain products are only shown at low prices, because that's what happens in the real world. So I need to set up all these prohibitions. I need to create 20 levels of price to cover my full range, and one product is going to be prohibited from the lowest eight levels of price, and another from the highest levels.
That's going to get you into a lot of trouble. Don't use prohibitions to try to customize the price ranges shown for certain SKUs, or for certain technologies or brands if we're talking about durables, technology products, services, etc. Rather, there are a couple of approaches you can use.
For CPG work with lots of SKUs, it's typical to use what's called a conditional pricing display option. That's where you have, say, five levels of price from low to high, maybe from minus 30% up to plus 30%, and then for each SKU, or grouping of SKUs, you set what those five prices are going to be. If you want to use what are called alternative-specific designs, you might have five different groupings of SKUs, some low-priced SKUs and some high-priced SKUs, and you might create, ahead of time, five different price attributes in an alternative-specific design. Either of these approaches, whether we're talking about conditional pricing or alternative-specific designs, allows you to customize the price range to be more realistic to the real world, which is what we want to do.
We want our CBC experiments to look as much as we can like the real world, so people answer and give us data that are going to map to the real world. And either of these approaches allows you to customize the prices seen without ruining your experiment through multicollinearity in the design. Both approaches are orthogonal: the prices, although it doesn't sound like it, are really independent of the SKUs, technologies, or attribute levels they're predicated on, due to the way we're able to code up the experiment. It works really quite well; a sketch of the conditional-pricing idea follows below.
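As an illustration, here is a minimal sketch of the conditional-pricing idea. The SKU names and base prices are hypothetical; in practice, tools like Sawtooth's handle this mapping in the design setup rather than in code:

```python
# Conditional pricing sketch: the design uses five generic price levels,
# and each SKU maps those levels onto its own realistic price range.
# SKU names and base prices below are hypothetical.
BASE_PRICES = {"discount_sku": 2.49, "mainstream_sku": 3.99, "premium_sku": 7.99}
PRICE_FACTORS = [0.70, 0.85, 1.00, 1.15, 1.30]   # -30% .. +30%

def displayed_price(sku: str, level: int) -> float:
    """Map a generic design level (0..4) to this SKU's displayed price."""
    return round(BASE_PRICES[sku] * PRICE_FACTORS[level], 2)

for sku in BASE_PRICES:
    print(sku, [displayed_price(sku, lvl) for lvl in range(5)])
```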
For modest prohibitions, you can do corner prohibitions, where, for example, you prohibit the highest level of price from showing with the lowest performance, and the highest performance from showing with the lowest price. These modest corner prohibitions don't hurt you much at all, and I recommend them.
Next slide. Last, and I need to turn this over to James so he has time: on price attribute estimation, we don't recommend just estimating a linear term. Use about five levels of price so you can capture some non-linearity. And don't just go in and constrain price to be negative for everything, particularly for technology and durables products.
Do a three- to six-group latent class solution and see, after you've cleaned out the bad respondents, if there are segments that tend to avoid picking the lowest prices. There are certain categories, like door locks, LASIK eye surgery, and home purchases, where people react poorly to the lowest prices, because to them it signals poor quality.
So make sure you don't just go in and slap a negative constraint on price without first assessing, with latent class analysis, whether some groups really don't behave that way. For CPG, where you've got lots of SKUs, I recommend that rather than estimating a single price attribute for price sensitivity, you group similar SKUs into alternative-specific designs, so that we estimate separate price sensitivities for different groupings of SKUs.
Next slide.
And I'll leave you with that; it was like drinking from a fire hose. Please write me at bryan@sawtoothsoftware.com if you have any questions, want to see those white papers, or have a follow-up question. I'd love to talk with you.
James Pitcher: Okay. Hopefully you can hear me. Thanks, Bryan. So I'm now going to talk about price elasticity, and I'm going to start by talking about how we calculate it.
So the price elasticity of demand measures how sensitive the quantity demanded of a good or service is to a change in its price. And it's a useful measure because it helps you understand how consumers will react to price changes. It's calculated as the percentage change in quantity divided by the percentage change in price.
And you can see the general formula we have on screen here. For example, if a product's price increases by 10% and the quantity demanded decreases by 20%, the price elasticity of demand would be negative two. Note, price elasticities are usually negative, because demand usually decreases as the price increases (a quick sketch of the formula follows below).
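As a quick sketch of that formula (the numbers echo the example above):

```python
# Price elasticity of demand = % change in quantity / % change in price.
def price_elasticity(pct_change_quantity: float, pct_change_price: float) -> float:
    return pct_change_quantity / pct_change_price

# Price +10%, quantity demanded -20%  ->  elasticity of -2.0
print(price_elasticity(-20.0, 10.0))
```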
So how do we interpret price elasticities? So the higher the absolute value of the price elasticity, the more sensitive a product's demand is to changes in price. So if the price elasticity is between zero and minus one, we say that demand is inelastic, meaning consumers are less responsive to price changes.
If the price elasticity is less than minus one, i.e., it's more negative, we say that demand is elastic, meaning consumers are highly responsive to price changes. And price elasticity can be a really useful metric to understand which of your products are most sensitive to price changes and which are least sensitive.
And how the price sensitivity of your products compares with the sensitivity of competitor products. Now let's look at some different ways we can calculate price elasticity using conjoint data. So let's start with a very simple method. So first, we calculate the share of preference of the product when we simulate it at its highest and lowest price points.
So in this example, when we simulate the product at its lowest price tested in the conjoint design, which in this case was 80 euros, its share of preference is 30%. And when we simulate the product at the highest price point tested, which is 120 euros, its share of preference is 10%. So we then calculate the price elasticity using these two price points, using the standard formula.
You can see how the calculation works through in the bottom left here, and if we follow it through, we get a result of negative 1.33. This price elasticity value means that demand for the product is somewhat elastic (the same calculation is sketched in code below).
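Here is the same two-point calculation in code, with share of preference standing in for quantity demanded, which is the usual simplification when working with conjoint simulators:

```python
# Two-point ("arc") elasticity from the lowest and highest simulated prices,
# with share of preference standing in for quantity demanded.
def two_point_elasticity(p_low, p_high, share_low, share_high):
    pct_change_quantity = (share_high - share_low) / share_low * 100
    pct_change_price = (p_high - p_low) / p_low * 100
    return pct_change_quantity / pct_change_price

# 80 EUR -> 30% share; 120 EUR -> 10% share  =>  about -1.33
print(round(two_point_elasticity(80, 120, 30, 10), 2))
```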
Now, the good thing about this approach is that it's nice and simple. However, there are some problems with it. Mainly, it ignores the fact that changes in demand most often vary across different price ranges; hence, we often observe a nonlinear demand curve. So on the right here we have a typical demand curve, and we can see that it is a nonlinear line: the steepness of the slope changes between each price point.
Hence, when we calculate the price elasticity for each part of the curve, the price elasticities change. We can see that between 80 and 90 euros the slope is steepest, and therefore the price elasticity is greatest in absolute terms between these two price points, with an elasticity of minus 2.7. The absolute magnitude of the price elasticity then decreases with each section of the curve as the demand curve flattens out.
So it moves from minus 2.3 to minus 2 to finally minus 1.8. Therefore, calculating the price elasticity simply using the highest and lowest price points misses information about the steepness of the curve throughout the entire demand curve. We can solve this problem by using a more sophisticated way to calculate the price elasticity.
First, we take the natural log of the five prices and our five shares of preference. We have an example in the table in blue at the bottom. We then find the slope of the linear regression line that runs through these five points. So on the right here, we have plotted our five log prices versus our five log shares of preference and drawn the linear regression line that runs through these five points.
And if we calculate the equation of this line, we see that the gradient of the slope is minus 2.7. So this is our price elasticity: we would report the price elasticity for this product as minus 2.7 (the method is sketched in code below).
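A minimal sketch of this log-log method, assuming NumPy. The intermediate shares are illustrative values chosen to be consistent with the example's endpoints (30% at 80 euros, 10% at 120 euros) and its slope of about minus 2.7:

```python
# Log-log regression elasticity: fit ln(share) = a + b*ln(price); the slope b
# is the reported price elasticity. Shares here are illustrative values
# chosen to match the example above.
import numpy as np

prices = np.array([80, 90, 100, 110, 120])     # euros
shares = np.array([30, 20, 15, 12, 10])        # shares of preference, %

slope, intercept = np.polyfit(np.log(prices), np.log(shares), 1)
print(f"price elasticity: {slope:.1f}")        # -> about -2.7
```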
Okay, so I'm now going to talk about a couple of ways price elasticities are sometimes misused. It's important to understand that while the price elasticity is a useful metric for understanding the price sensitivity of a product, you generally can't use price elasticities to calculate changes in demand. So let's talk through why this is. On the right we have our demand curve for a product, and based on this demand curve, we have just calculated that the price elasticity of this product is minus 2.7. But this price elasticity is just a general elasticity value that summarizes the elasticity across the whole demand curve; the reality is that the price elasticity changes through different parts of the demand curve. Therefore, you cannot just use this one value to calculate what the share of preference would be at each price point.
If you change the price, you should always calculate the impact of price changes on demand by running simulations and reviewing the share of preference scores. Another important thing to note is that price elasticities should not be used to determine if you should change your prices. Even if your product has a price elasticity between zero and negative one, i.e., demand is inelastic, this does not indicate it would be best to increase your prices. Decisions on whether to change prices should be based on simulating price changes and reviewing the results, focusing on the share of preference and the revenue generated. Because even if your product is inelastic, you may lose overall share or revenue across your product portfolio.
So price elasticities should not be used to determine how to set your prices; they should only be used as a descriptive measure to compare price sensitivities across products and brands. Okay, so now let's move on to some tips for obtaining accurate price elasticities. We'll look at some best practices and some common mistakes to avoid.
So the first thing is that price elasticity is influenced by the competitive context. So as we've just discussed, the price elasticity measures the extent to which the demand of a product changes upon a change in its price. Usually when the demand of a product changes, it means that some of those consumers are switching to and from alternative products.
The extent to which consumers switch to and from alternative products will be influenced by what alternatives are available to them. Hence, what alternatives are available influences the price elasticity of a product. So generally, the more and the closer the available substitutes, the higher the price elasticity.
The more similar alternatives that are available to me, the easier it is for me to switch when a product increases its prices, and hence the greater the absolute price elasticity, i.e., the more price elastic the product is. Therefore, when setting up the conjoint exercise, you should ensure you include all the main competitors in the market, ideally representing at least 70 percent of market sales.
This ensures you have a realistic competitive context in the conjoint exercise. And then in the analysis phase, when calculating price elasticity of a product, do so in the context of a meaningful competitive scenario. Now, usually this is the current market scenario, which best represents the current market that exists in the real world.
This includes all products tested that are currently available on the market at their current price. So following on from this, Once you have your current market scenario, it's important we calculate the demand curve for the product in the correct way. We do this by varying the price of the product we want to calculate the price elasticity for.
Usually this means simulating at each price level tested in the conjoint exercise, although you can cut it up into more price points if you like. So in this example, the product was tested at five price points: 80, 90, 100, 110, and 120 euros. So we would change the price of the product to each of these prices in turn and record the share of preference of the product.
Now, what is crucial is that in each case we are only varying the price of the product we are calculating the price elasticity for. All other products remain at their current price; their prices are not changed. If we wanted to calculate the price elasticity for another product, say product two, we would set product one's price back to its current market price, keep all other products at their current market prices as well, and then vary only the price of product two, recording its share of preference at each price point product two was tested at. This gives us our demand curve for product two (the procedure is sketched in code below).
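Here is a minimal sketch of that procedure. The `simulate_shares` function is a hypothetical stand-in for whatever market simulator you use; it is assumed to take a {product: price} scenario and return a share of preference per product:

```python
# Demand-curve procedure: vary ONE product's price across the levels tested,
# holding every competitor at its current market price.
# `simulate_shares(scenario)` is a hypothetical wrapper around your market
# simulator; it takes {product: price} and returns {product: share}.
PRICES_TESTED = [80, 90, 100, 110, 120]   # price points from the design

def demand_curve(current_prices, product, price_points=PRICES_TESTED):
    curve = {}
    for price in price_points:
        scenario = dict(current_prices)     # competitors stay at current prices
        scenario[product] = price           # only this product's price varies
        shares = simulate_shares(scenario)  # hypothetical simulator call
        curve[price] = shares[product]
    return curve

# e.g. demand_curve({"product_1": 100, "product_2": 95}, "product_2")
```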
Now, how not to calculate the demand curve would be to duplicate the product multiple times in the same scenario, each at a different price point. That's what I've done in this rather silly example, where we've repeated product one five times. Each copy is at a different price point: one at 80 euros, one at 90 euros, one at 100 euros, and so on. In this approach, you would then look at the share of preference for each version of the product, and this would be your demand curve. But do not do this; it is not correct. Firstly, because it's a completely unrealistic scenario.
A product would never appear on a shelf five times at five different price points; it just makes no sense. Also, this was not how products were tested in the conjoint exercise. Hopefully, anyway, the same product would not have been shown multiple times in the same task to a respondent at different prices.
Therefore, this is a completely unrealistic scenario, which will give you unrealistic results. So do not do it. Use the approach I described before, where the product only appears in the scenario once and we vary its price, keeping the prices of all competitors constant. It's also important to be aware that price elasticity is influenced by anchoring and priming effects.
So price anchoring is a cognitive bias where individuals rely heavily on the first piece of information, the anchor, that they encounter when making decisions. So when a high initial price is presented, it serves as a reference point, making subsequent prices seem more reasonable or attractive by comparison, even if those prices are still relatively high.
For example, if a survey first shows participants a high-priced option, for example a product at 500 euros, it might anchor their expectations, making them more likely to perceive subsequent lower-priced options, say 300 euros, as a better deal, even if 300 euros was still above what they would normally be willing to pay.
Therefore, we recommend not showing prices or asking about price awareness prior to the conjoint exercise, to avoid any of these anchoring or priming effects. To help with this, it's best to have the conjoint exercise as early in the survey as possible, and if possible, it's best not to show any numbers of any kind at all prior to the conjoint.
Even if they don't refer to a price, such numbers could still potentially introduce some cognitive bias. And finally, price elasticities can be influenced by the price range tested. So be mindful about how much you vary the prices of products in the conjoint exercise, as very large price intervals can increase respondent sensitivity to price changes.
This is because large price changes can have an increased psychological impact in the conjoint exercise, where respondents overreact to price changes. So avoid large price variations; usually, varying prices by something like plus or minus 20 percent works very well.
Okay. So now I'm going to talk about a study I conducted a few years ago. This was an interesting study where we calculated price elasticities using sales data and compared them with the price elasticities we get from conjoint analysis. At NIQ, we are lucky that we have access to point-of-sale data.
This is sales information collected directly from retailers, and it's a rich source of information that contains data on what products are being sold in individual stores on a weekly basis. More specifically, we have information on the specification and features of the products being sold, the number of units and resulting revenue of each product sold, the price of products, what price discounts were applied to products, and a product's share of shelf and distribution.
Now, because we have this data available at store level and over time on a weekly basis, we can use it to build a mathematical model to estimate the price elasticity of different products. So how we build a mathematical model is as follows. So for each product, we create a separate multiplicative regression model to predict weekly sales units.
In our model, we only considered offline sales, and we typically use one to two years' worth of data to do our modeling. We control for category seasonality and trend using a LOESS smoothing algorithm. Then in the model, we have the following inputs: we have the base price of the product; we have the price discounts applied to the product.
We have the base price and price discounts of competitor products. And we have the presence of competitors in store. Now, prior to modeling, sales units and prices are log-transformed. And because we've applied this log-log transformation, the coefficient we get on the product's base price in the model represents the price elasticity of the product (a sketch of this model is shown below).
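Here is a minimal sketch of this kind of log-log (multiplicative) model on synthetic data. The variable set is simplified, with no LOESS seasonality or trend controls, and all numbers are illustrative:

```python
# Log-log sales model sketch: ln(units) ~ ln(own base price) + ln(competitor
# price) + own discount. With this specification, the coefficient on
# ln(own base price) is the own-price elasticity. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
weeks = 104                                              # ~2 years, weekly
own_price = 100 * np.exp(rng.normal(0, 0.05, weeks))
comp_price = 95 * np.exp(rng.normal(0, 0.05, weeks))
discount = rng.uniform(0.0, 0.2, weeks)                  # fraction off
units = (500 * own_price**-1.5 * comp_price**0.8
         * np.exp(2.0 * discount + rng.normal(0, 0.1, weeks)))

X = np.column_stack([np.ones(weeks), np.log(own_price),
                     np.log(comp_price), discount])
coefs, *_ = np.linalg.lstsq(X, np.log(units), rcond=None)
print(f"estimated own-price elasticity: {coefs[1]:.2f}")  # ~ -1.5 by design
```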
So note, we need to build a separate regression model for each product that we want to calculate the price elasticity for. So here we have the results, and let's first look at the left-hand side.
So in total, we compared the price elasticity of 38 products across seven technology categories. So you can see the categories listed here. They included laptops, steam irons, and TVs. And for each category, we've averaged the price elasticities for the products in that category. Now the numbers on the left in brackets show how many products we have in each category.
So the bars in orange show the average price elasticity scores for our sales-data model, and the bars in blue show the average price elasticity scores for our conjoint model. And we can clearly see that the price elasticities we get from the conjoint are much higher in magnitude than what we get from the sales model.
The price elasticities from the sales model are actually very low, whereas the conjoint elasticities are greater than one in absolute terms for each category, indicating demand is generally elastic in these categories. Now let's look at the right-hand side of the chart. Here we have correlated the price elasticities we get from the sales model with the price elasticities we get from the conjoint for each of the 38 products.
We can see visually there is no correlation between the two. When we calculate a correlation coefficient, we get a correlation of just 0.1. Therefore, the price elasticities obtained from each method not only differ in magnitude, but also the relative difference in price sensitivity across the products differs as well.
So basically, they give us quite different results. Okay, so let's look at some potential reasons for differences in the price elasticities across the two methods. The first reason is a lack of comparability across the methods; we aren't always exactly comparing apples with apples. Now, it's important to mention that when we set the conjoint exercise up, we weren't designing it with this comparison in mind.
So it wasn't like we set up the conjoint specifically to try to obtain price elasticities that would match as closely as possible the elasticities we get from the sales model. So in the conjoint exercise, we actually tested a much wider range of prices compared to the narrow price range observed in the sales model.
In the sales data, we observed that the prices of most products don't actually change very much at all over a one- to two-year period. Now, I know this study was conducted in 2019, when inflation wasn't so high. So it's important to note that the price elasticities we get from our sales model only represent the price sensitivity within a very narrow price range.
Now in contrast, the price variations tested in the conjoint were much larger. Although we compared the elasticities of the products in the conjoint by simulating price changes within the same range as we observed in the sales data, the fact that larger price changes were tested in the conjoint would have triggered larger switching responses and hence increased the price sensitivity.
Furthermore, the conjoint has less accuracy within such a narrow price range, as it wasn't designed with this in mind; it was designed to cover a wide range of prices. So this is one reason why the conjoint price elasticities might be higher. Another reason is that the conjoint consisted of multiple attributes, meaning the conjoint cannot perfectly represent the products present in the sales model.
For example, a particular TV might have had a screen size of 57 inch, but in the conjoint exercise we tested levels that were 55 inch and 60 inch. Most products were not able to be perfectly specified, because we had a limitation in the number of attributes and levels we could specify in the conjoint exercise.
So as I said, the conjoint exercise wasn't set up specifically to try to obtain price elasticities that would match as closely as possible the elasticities from the sales model. A better approach may have been to use a SKU-price conjoint, where we test the exact same products present in the sales model and test them in the conjoint using the same price variations we observe in our sales model.
However, although the methodology wasn't perfect, the results are consistent with what we normally see. We do normally see that the price elasticities obtained from conjoint analysis tend to be greater in magnitude than the elasticities obtained from sales models. So let's look at some more potential reasons for the differences between the two methods.
So it's likely that the sales model underestimates the price sensitivity; we see the price elasticities are very low in magnitude. Let's look at some reasons why this might be. I previously mentioned that in the sales data used in the model, we observed that the prices of most products don't change all that much over a one- to two-year period.
In fact, the observed price variation is so small for some products, it makes it difficult to estimate elasticities for these products. So for some products, we don't really have a particularly good model. And another potential reason for the low elasticities is that the sales model is not a controlled experiment like the conjoint exercises.
There are likely other factors that influence sales that are not accounted for in the sales model. For example, in-store banners, promotional shelf displays, and promotion by sales staff all heavily impact the sales of products, and these weren't accounted for in the model. So the impact of price changes on sales is potentially drowned out by these other factors.
We also only considered offline sales in our model. Consumers who shop online may be more price sensitive, so basing the model only on offline sales may have lowered the price elasticities as well. So it does feel like the price elasticities we get from the sales model underestimate the price sensitivity, as they are very low in magnitude. But it's also important to remember that these price elasticities only represent the price sensitivity within the very narrow price range that we observed in the market. So whereas we think that the sales model might underestimate price sensitivities, it's likely that the conjoint model overestimates price sensitivity. And let's look at some reasons why. So the first reason is that conjoint makes respondents fully aware of all prices. But in reality, maybe consumers don't shop around much, or maybe when price changes occur in store, people simply don't notice them. And even if they do notice price differences, in reality switching to buying a different product is more difficult than simply clicking on another product presented in a conjoint exercise.
Another reason is that conjoint analysis assumes all products are always available. However, in reality, only a subset of products is available in each individual store, so there are fewer alternatives for consumers to switch to when they're in store. And finally, conjoint misses external factors, such as in-store promotions, sales staff, and positioning on the shelf, which will affect the purchase decision,
making consumers less sensitive to price when in store. So, for example, a consumer may buy a more expensive product than they were intending to because it was easily noticed on the shelf, or because a member of staff recommended it to them. So in conclusion, here's how I would summarize the differences between the methods.
Conjoint measures the theoretical price sensitivity outside the in-store boundaries, so you can think of it as a kind of pure price elasticity in a perfect world. In contrast, sales models measure the in-store price elasticities, which are affected by the in-store environment. The pure price elasticity as measured in the conjoint becomes corrupted, and seemingly lowered, by what happens in store.
So bearing all this in mind, here are some recommendations on when to use each method. Sales data can be useful in modelling the impact of small, short-term, tactical price changes. We've seen that sales data only allows you to model price changes within a very narrow price range, but it can be useful for knowing how to tweak your prices in the short term, from week to week.
And this can be done on a constant, ongoing basis by refreshing the model continuously. However, conjoint analysis should be used in the following circumstances. When you want to test the impact of a long term price change, that can Greater than what has previously been observed in the market. Sales data does not allow you to see what happens when a significant price change is made.
These price changes tend to be more of a long-term change, and conjoint, which provides a purer measure of price sensitivity that has not been influenced by short-term in-store activities, is likely a better approach. Conjoint also has the added benefit of being able to do the following.
It can test prices of new products that have not yet launched in the market; we can calculate the willingness to pay for a product or product feature; and we can also understand the price sensitivity of different customer groups. So although modeling using sales data does have its uses, it's unlikely to replace much of the pricing work that you're already doing using conjoint analysis.
Okay, I'm now going to switch gears a little bit and talk about how we can use conjoint analysis in brand health tracking, and more specifically, how we can use conjoint analysis to measure price elasticity to give us a better measure of brand health. So let's start by thinking about how brands generate revenue.
How much revenue a brand generates is a function of two things: how much it sells (the volume) and how much each item it sells costs (the price). Therefore, to measure a brand's health, we want to measure a brand's ability to do these two things. We want to measure brand preference, how many people prefer the brand, and brand premium, how much people are willing to pay for the brand.
And then we can combine these two measures of brand preference and brand premium into one overall measure of brand strength. Now, unfortunately, there are problems with traditional stated measures in measuring these two dimensions of brand choice and brand premium. We usually see that stated measures of brand preference do not align closely with market sales.
And this is problematic as brand managers want metrics that link to their market reality. Also, stated measures of brand premium are often highly correlated with stated preference, so therefore they're of little value because you aren't really measuring brand premium, you're just measuring brand preference again.
This is where conjoint can help us, as it can provide superior measures of brand preference, using the share of preference, and of brand premium, based on the price elasticity of brands. So we ask respondents to complete a simple brand-price conjoint exercise. Respondents are asked to think about a standard product within the category and are shown brands at varying prices and asked which brand they would buy.
For example, for TVs we'd ask: imagine you were to buy a standard 49- to 55-inch UHD TV; if these were the only available options, which of the following products would you buy? And we've done a lot of validation of conjoint shares of preference versus stated preference, and we see that conjoint share of preference is a superior measure for the following reasons. Firstly, we see that conjoint shares of preference come closer to real-world market shares, or volumes; conjoint comes closer than stated measures. Also, conjoint shares of preference are more stable over time; they better capture the long-term trend in sales. And they better reflect the probabilistic nature of buying behavior.
It's rare that someone will have a hundred percent preference for a product, and in conjoint we obviously measure the extent to which people prefer each brand. And it provides richer data for brands, regardless of their base size. And it's not prone to scale-use bias, so we can use it for cross-cultural comparisons.
We also see that the brand premium we get from conjoint price elasticities is a superior measure to stated measures of brand premium. We see that while traditional stated measures of brand premium tend to be highly correlated with stated preference, conjoint brand premium scores are less correlated with preference.
In fact, brands can have a high brand strength, even if they aren't widely appealing, if they have a high brand premium, and this will be the strategy for some brands. However, we do generally see the strongest brands tend to have both a high preference and a high premium. Now, it may sound obvious, but to increase your brand premium, you need to lower your price elasticity.
So before you can raise your prices, you first need to make consumers less sensitive to price changes. And we can use key drivers analysis. to see how you can lower your price elasticity and increase your brand premium. We often see that the key drivers of brand premium are different from the drivers of preference.
So brand preference is determined by a brand's ability to meet the category's core functional needs. So for example, providing good quality products that you can trust at an affordable price. However, to be able to justify a price premium, brands usually need to go beyond these core needs and show that they offer something different or they are a step ahead of the competition in some way.
So to conclude then, again we see the power of conjoint analysis. We see conjoint analysis outperforms stated metrics. It really is a powerful tool, and that's why we all love it so much. Brand health tracking provides an exciting new use case for conjoint analysis. Conjoint analysis is already such a maturely developed methodology.
Yes, there are still some things we can seek to improve, but we already have a very robust methodology that works well in many different scenarios. So perhaps instead of focusing so much on how we can improve our methods, we should spend more time thinking about in what other ways we can harness the power of this amazing methodology.
As a final thought, then, I would encourage you all to think about in what other new ways we could use conjoint analysis. Okay, thanks very much everyone for listening; that concludes our talk on price elasticity. I think it's back to you, Justin, yeah?
Justin Luster: Yeah, thank you so much, James and Bryan. I really appreciated the webinar.
Bryan Orme: Yeah, thanks for joining us, James, and for all the work you're doing at NIQ and formerly at GfK. Thank you for all the presentations you've given at the Sawtooth Conference, where you've shown, across lots of product categories and lots of countries, particularly in Europe, how well these methods are doing in terms of predicting market share and guiding clients to better decisions.
James Pitcher: Thanks very much for having me. It's always a pleasure.
Justin Luster: Thanks for that guys. And we'll see you next time.
Vanessa: Thanks for joining us for this episode of Research to Revenue. If you found the material helpful or insightful in any way, we'd appreciate if you'd leave us a rating. It only takes about 10 seconds and really helps us grow the podcast. Also, don't forget to subscribe if you'd like to hear more episodes like this in the future.
Thanks again for listening. See you next time.