SS Summer 1997
Invest in Yourself at the 1997 Sawtooth Software Conference (SS Summer 97)
In August 1997, market researchers and academics from around the world will gather in Seattle to share insights, network, and socialize. That's just a few short months away! Don't miss this rare opportunity to develop your skills in quantitative market research and rub shoulders with some of the best minds in our industry.
Starting back in 1987, the Sawtooth Software Conference set the standard for market research conferences, and its format has since been imitated by many. But no other research conference in 1997 will be as relevant and useful for practitioners interested in conjoint/choice topics and computer interviewing.
John Fiedler of POPULUS, Inc. has described the conference in these terms: "At most conferences, we can't understand the academics and we don't want to listen to sales presentations from research providers. Here, academics are understandable and providers present academic-quality papers without the info-mercials."
Here is a sampling of the topics that will be covered in August:
Early Registration Deadline: June 30
Registration is $600 until June 30, which includes breakfasts and lunches. After June 30, registration will be $650. A registration form is included on page 7 of this newsletter. Attendance is limited, so we suggest you register soon to avoid disappointment!
Library of Conjoint Articles Available on the Internet (SS Summer 97)
Last summer, we created a library of select articles and technical papers on our home page for others to download at no cost. We are pleased to report that our effort has not been wasted. Our library is by far the most frequented area on our home page (www.sawtoothsoftware.com). We think this reflects favorably on our users' desire to stay abreast of methods and developments in our field.
If you haven't yet visited our technical papers library, we encourage you to visit soon. Currently there are 24 conjoint-related articles available, grouped by method. We will be adding new titles in the future, so we invite you to visit often. Within each area, introductory articles are listed first. Abstracts are also provided, so you may view a summary of each article's contents before downloading.
The conjoint articles currently available are:
Q&A with Chairman Richard Johnson (SS Summer 97)
Richard M. Johnson is the Founder and Chairman of Sawtooth Software, Inc. Rich has spent the greater part of the last four decades working in the field of marketing research. He is widely cited in the conjoint literature, and is credited with the development of trade-off matrices and the ACA System for Adaptive Conjoint Analysis.
What is your opinion about the current relationship between academics and practitioners in marketing research?

We are all aware that there is quite a gulf between academics and practitioners. Most practitioners are simply unable to understand the contents of most of the relevant journals today, and, sadly, they don't believe they're missing much.
In one sense, Sawtooth Software has profited from this gulf. We stand with one foot on each side, and one of our roles is to translate and convey information from the academic world to the practical world. To a lesser extent, we also transfer information in the other direction, by bringing academics and practitioners together with our Sawtooth Software Conferences.
I think there is an unsatisfied market for journals with important and useful articles, written in a way that seems relevant to practitioners. At our conferences we observe the dictum that every presentation must show promise of providing benefit to the least sophisticated listener, and yet contain something of interest to the most sophisticated. The journals would profit from observing that principle as well.
I think this is an important problem, not only because a lot of good academic work is not finding its way into practice, but also because academics need help from practitioners in identifying problems that are really important, as well as interesting to work on.
What do you feel are the most exciting new developments in conjoint analysis?

Without a doubt, the most interesting current developments are ways to estimate individual utilities from choice studies. Many researchers think choice questions mimic product decisions that respondents make in real life, so there has been rapid growth in the use of choice designs. However, because choice questions yield less information per question than other conjoint methods, it hasn't been possible to estimate utilities for individuals. Until recently, it has been necessary to do aggregate analyses, either by combining all respondents or by combining respondents into segments, as in latent class analysis. Of course, such aggregation necessarily assumes that individuals in a group are identical, which is almost certainly not true. Aggregate analyses always entail a serious risk of obscuring important differences among respondents.
Fortunately, there have been three promising developments recently. Huber and Zwerina found that they could estimate individual utilities from efficient, individually customized choice questionnaires. Lenk, DeSarbo, Green and Young found that individual utilities could be estimated by a hierarchical Bayes method, as did Allenby, Ginter, and Arora. And we've been testing still another method of estimating utilities for individuals. Our method starts with a segmentation, such as produced by latent class, and then finds the unique weighted combination of groups' utilities for each individual that best fits his or her choice data. All of these new approaches find more heterogeneity among individuals than is commonly captured by recognizing market segments, and they can use this additional information to improve predictions.
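The idea of combining segment-level utilities for each individual can be illustrated with a small sketch. The code below is a hypothetical Python illustration, not Sawtooth Software's actual algorithm: it uses a crude grid search over convex weights on segment part-worths, keeping the weights that best reproduce one respondent's choices. All function and variable names are invented for this example.

```python
import itertools
import numpy as np

def fit_individual_weights(segment_utils, tasks, choices, steps=20):
    """Grid-search convex weights over segment part-worth vectors so that
    the weighted utilities best reproduce one respondent's observed choices.

    segment_utils : (n_segments, n_params) part-worths, e.g. from latent class
    tasks         : list of (n_alts, n_params) design matrices, one per task
    choices       : index of the chosen alternative in each task
    """
    n_seg = len(segment_utils)
    best_w, best_hits = None, -1
    # Enumerate weight vectors on a coarse grid over the simplex.
    grid = [g for g in itertools.product(range(steps + 1), repeat=n_seg)
            if sum(g) == steps]
    for g in grid:
        w = np.array(g) / steps
        utils = w @ segment_utils            # individual-level part-worths
        hits = sum(int(np.argmax(X @ utils) == c)
                   for X, c in zip(tasks, choices))
        if hits > best_hits:
            best_hits, best_w = hits, w
    return best_w, best_hits

# Toy example: two segments, two binary attributes, three choice tasks.
seg = np.array([[2.0, -1.0], [-1.0, 2.0]])
tasks = [np.array([[1, 0], [0, 1]])] * 3
choices = [0, 0, 0]          # this respondent always picks the first alternative
w, hits = fit_individual_weights(seg, tasks, choices)
```

A real implementation would maximize a choice likelihood rather than a raw hit count, but the sketch shows how individual-level heterogeneity can be recovered from group-level estimates.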
Using Utility Constraints to Improve the Predictability of Conjoint Analysis (SS Summer 97)
Conjoint analysis derives utilities (part-worths) to represent respondent preferences for product attributes. Some attributes, such as price or quality, have a definite a priori order. Since utility estimates contain random error, and respondents are fallible, we often observe utilities which seem to violate common sense--especially those calculated at the respondent level. For example, a respondent's utilities might suggest that he prefers to pay higher prices, or desires lower quality. We sometimes call these anomalies "reversals."
Once we've identified reversals, the next step is to decide how to handle them. One school of thought suggests ignoring reversals, since they typically are ironed out in the aggregate, and they add a degree of random behavior to market simulations which may in some cases be valuable for predicting aggregate real-world behavior. After all, buyers don't always behave rationally in the real world. Another alternative is to impose order constraints. Researchers have suggested a variety of ways to impose constraints, ranging from simple tying strategies to complex and computationally intensive algorithms.
The simplest way to deal with reversals is to "tie" values that are reversed. CVA's ordinary least squares utility calculator has a tying algorithm. Non-parametric techniques such as CVA's monotone regression and LINMAP impose order constraints while solving for part-worths. Recently, a computationally intensive Bayesian method using the Gibbs sampler has been proposed for imposing order constraints on conjoint data (Allenby et al. 1995).
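To make the tying idea concrete, here is a minimal Python sketch. It is not CVA's actual algorithm; it uses the pool-adjacent-violators approach, which repeatedly averages ("ties") any run of levels that violates the expected order, assuming levels are listed from a priori worst to best.

```python
def tie_reversals(partworths):
    """Enforce a non-decreasing order on one attribute's part-worths by
    "tying" (averaging) any run of levels that violates the expected order.
    Levels are assumed listed from a priori worst to best
    (e.g. low quality -> high quality)."""
    blocks = [[v] for v in partworths]
    i = 0
    while i < len(blocks) - 1:
        left = sum(blocks[i]) / len(blocks[i])
        right = sum(blocks[i + 1]) / len(blocks[i + 1])
        if left > right:                  # reversal: pool the two blocks
            blocks[i] += blocks.pop(i + 1)
            i = max(i - 1, 0)             # re-check against the new left neighbor
        else:
            i += 1
    out = []
    for b in blocks:                      # each pooled block gets its average
        out += [sum(b) / len(b)] * len(b)
    return out

# A reversal at the second and third levels gets tied to their average:
print(tie_reversals([1.0, 5.0, 3.0, 9.0]))   # -> [1.0, 4.0, 4.0, 9.0]
```

The result is the closest monotone sequence in a least-squares sense, which is why this simple rule works well in practice.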
It is also possible to impose utility constraints across attributes. If we have prior knowledge that one attribute is more important than another for a given respondent, we can impose a constraint. However, we restrict our discussion to within-attribute utility constraints, since they are generally more applicable to most conjoint data sets.
Both full-profile conjoint methods and ACA can display utility order reversals. The bulk of opinion and research suggests that reversals are less likely to occur in ACA and that utility constraints are less likely to improve the predictive validity of ACA utilities.
Why ACA Utilities Are Less Susceptible to Reversals
The "priors" in ACA are largely responsible for the lower incidence of reversals for ACA data.
Moore et al. (1994) state, "...respondents rank order (or the researcher rank orders for the respondent) the levels of each attribute in the self-explicated stage. This rank ordering does not impose a constraint, but this information is incorporated into the regression . . . These rank orders, which are consistent with a priori reasoning, should lessen the tendency for estimated utilities to be out of order."
Summary of Findings
A number of researchers have shown that utility constraints can significantly improve the predictive validity of full-profile conjoint utilities. In some instances, constraints can also modestly improve predictability for ACA, though the improvement is rarely statistically significant.
The two tables below summarize findings for studies we are aware of that have examined these issues.
[Tables 1 and 2 are not reproduced here. Across the studies they summarize, the constraint methods used were MORALS, monotonic regression, non-metric mathematical programming, a tying rule, and a Bayesian technique using the Gibbs sampler (Allenby et al. 1995).]
As shown in Table 1, the average improvement in predictive validity for full-profile was 9%. Table 2 shows that constraints improve the predictive validity of ACA by an average of 2%. Of the studies in Table 2 that examined constraints with ACA, Moore et al. found the largest improvements, at 3% and 4%. Regarding imposing utility constraints for ACA, Moore et al. conclude, "The small increase in the ACA validations argues against the use of this procedure with ACA."
A Dissenting Opinion
The May 1995 Journal of Marketing Research included an article by Allenby, Arora and Ginter (hereafter, AAG) entitled, "Incorporating Prior Knowledge into the Analysis of Conjoint Studies." AAG reported that prohibiting sign reversals in ACA resulted in significant improvements. AAG proposed an interesting new method using the Gibbs sampler to estimate constrained part-worths. They "held out" the last three pairs from an ACA interview for external validation. AAG measured performance with a mean squared error measure using draws from the posterior distribution of model parameters, finding the median MSE for the held-out pairs to be about 4.60 for standard ACA estimates and 3.52 when constraints were imposed. AAG concluded that imposing utility constraints via the Gibbs sampler had improved the quality of ACA utilities.
Another Look at AAG's Findings
Johnson and Pinnell (1995) examined the same data in terms of the more commonly accepted validation measure of holdout hit rates. AAG provided their part-worth estimates for both standard and Bayes methods. Johnson and Pinnell found that hit rates for the held-out pairs were 95% for standard ACA and 86% for Bayes (t=10.99). Constraints had actually been harmful to prediction. The data set also included four holdout choice tasks that AAG had not considered. Hit rates for those additional holdouts were 83.2% for standard ACA and 83.3% for Bayes; the difference is not significant (t=.36). Johnson and Pinnell concluded that the Bayes method for imposing utility constraints had not significantly improved the predictability of ACA utilities.
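The hit-rate measure used in this comparison is straightforward to compute. The sketch below is an illustrative Python version with made-up data (the function name and numbers are ours, not from the studies cited): a "hit" occurs when the alternative with the highest predicted utility matches the respondent's actual holdout choice.

```python
import numpy as np

def holdout_hit_rate(utils, holdouts):
    """Share of holdout tasks where the highest-utility alternative
    matches the respondent's actual choice.

    utils    : (n_params,) estimated part-worths for one respondent
    holdouts : list of (design_matrix, chosen_index) pairs, where each
               design matrix has one row of attribute codes per alternative
    """
    hits = [int(np.argmax(X @ utils) == c) for X, c in holdouts]
    return sum(hits) / len(hits)

# Toy illustration with invented part-worths and two holdout pairs:
utils = np.array([1.5, -0.5, 0.2])
holdouts = [
    (np.array([[1, 0, 0], [0, 1, 0]]), 0),   # model predicts alt 0: hit
    (np.array([[0, 0, 1], [0, 1, 0]]), 1),   # model predicts alt 0: miss
]
print(holdout_hit_rate(utils, holdouts))      # -> 0.5
```

Averaging this statistic across respondents gives the aggregate hit rates reported above.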
It is important to note that AAG imposed order constraints for attributes such as brand (which do not have a universal order) based upon stated preferences from the priors portion of the ACA interview. We suspect that stated preferences might not always represent "truth" for every respondent. Some respondents may have been confused by the stated preference question, thus providing bad information for use in constraints. We expect that Bayesian methods may provide modest improvement for ACA data sets when used only to constrain attributes with strong a priori order, and we look forward to more evidence of their usefulness in the future.
Suggestions for Practice
We think it is reasonable to correct reversals for attributes with strong a priori ordering, regardless of the conjoint method. Our full-profile system (CVA) lets the researcher prescribe order constraints under either OLS or monotone regression. The CBC system can impose order constraints only under the Latent Class add-on module.
ACA is less susceptible to reversals than full-profile methods, but reversals still can occur. The current version of ACA influences, but does not strictly constrain, utility orders. We may include such constraints in future releases. For the time being, ACA users should be aware of the issue and examine their data sets. Counting reversals by respondent can provide an additional data point, beyond the "correlation" recorded in the utility file, for judging respondent reliability. You may find it useful to discard the most unreliable respondents. For the cases that remain, simply tying any offending levels can be an effective remedy.
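Counting reversals by respondent, as suggested above, is a simple tally of within-attribute order violations. Here is a hypothetical Python sketch (function and data names are ours); note that it checks only attributes with a definite a priori order, skipping brand-like attributes.

```python
def count_reversals(partworths_by_attr, ordered_attrs):
    """Count within-attribute order violations for one respondent.

    partworths_by_attr : dict mapping attribute name -> list of part-worths,
                         with levels listed from a priori worst to best
    ordered_attrs      : attributes that have a definite a priori order
                         (e.g. price, quality); others are not checked
    """
    reversals = 0
    for attr in ordered_attrs:
        levels = partworths_by_attr[attr]
        # Each adjacent pair that decreases is one order violation.
        reversals += sum(1 for a, b in zip(levels, levels[1:]) if a > b)
    return reversals

resp = {"price":   [0.9, 0.4, 0.6],   # one violation in the expected order
        "quality": [0.1, 0.5, 0.8],   # in order: no violations
        "brand":   [0.7, 0.2, 0.5]}   # no a priori order, so not checked
print(count_reversals(resp, ["price", "quality"]))   # -> 1
```

A high count for a respondent, combined with a low fit statistic, flags a candidate for removal before simulation.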
Allenby, Greg M., Neeraj Arora, and James L. Ginter (1995), "Incorporating Prior Knowledge into the Analysis of Conjoint Studies," Journal of Marketing Research, (May).
Herman, Steve and Rob Klein (1995), "Improving the Predictive Power of Conjoint Analysis," Marketing Research, (Fall) Vol. 7 No. 4, 29-31.
Johnson, Richard M. and Jonathan Pinnell (1995), "Comment on 'Incorporating Prior Knowledge into the Analysis of Conjoint Studies,'" Working Paper, Sawtooth Software, Sequim, WA.
Moore, William L., Raj B. Myhta and Teresa M. Pavia (1994), "A Simplified Method of Constrained Parameter Estimation in Conjoint Analysis," Marketing Letters 5:2, 173-81.
Orme, Bryan K., Mark Alpert and Ethan Christensen (1997), "Assessing the Validity of Conjoint Analysis--Continued," Working Paper, Sawtooth Software, Sequim, WA.
Srinivasan, V., Arun K. Jain, and Naresh K. Malhotra (1983), "Improving Predictive Power of Conjoint Analysis by Constrained Parameter Estimation," Journal of Marketing Research, (November), 433-38.
van der Lans, Ivo A., Dick R. Wittink, Joel Huber and Marco Vriens (1992), "Within- and Across-Attribute Constraints in ACA and Full Profile Conjoint Analysis," Sawtooth Software Conference Proceedings, 365-79.
Ci3 Tech (SS Summer 97)
Notes on the Ci3 Coder
One of the nice features of the Ci3 Coder is the ability to start coding open-ends for a study even before data collection is complete. The default option for creating a coding file is to include only data not previously put in a coding file. If you put data from the first wave into a coding file, you don't have to worry about duplicate data or effort once the second wave of data comes in. Simply accumulate the second wave of data into your main data file, and then run the Prepare Coding Files option. Only the data from the second wave will be put in the new coding files.
Customizable Buttons in Version 2
Windows interviews for Version 2 can include clickable buttons at the bottom of the screen, such as Next, Previous and Help. If you don't like the words we put on the buttons, you can change them. Users who conduct interviews in languages other than English have found this invaluable. But even if your interview is in English, you still may want to customize.
For example, if you'd prefer to use "Go Back" instead of "Previous," you could use the following commands in the "pre-questionnaire" section:
INTERNAT
PREVIOUS=&Go Back
ENDINTER
The "&" symbol tells Ci3 to make the letter "G" the hot key (and to underline it), letting respondents press the key instead of using the mouse.
IF Logic for Multiple Response Questions
For questions with multiple responses, the IF logic evaluates all the answers. For example, for a SELECT-type question named BRANDS, the statement:
IF (BRANDS = 5)

will check all the responses to the BRANDS question to see if 5 was answered. This same feature can lead to some confusion as well. If you have the statement:

IF (BRANDS <> 5)

and intend to determine whether 5 wasn't mentioned, this logic won't work. IF (BRANDS <> 5) checks to see if any of the responses to the multi-part question BRANDS are not equal to 5. So if a respondent chose items 1, 5, and 2, this statement would look at the first response (1), see that it isn't equal to 5, and the statement would be true. To determine if 5 wasn't mentioned, use the following logic instead:
FLAG = 0
IF (BRANDS = 5)
FLAG = 1
IF (FLAG = 0)
....etc.
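Since Ci3's scripting syntax is proprietary, the same pitfall can be shown in Python for illustration. "Any response differs from 5" is almost always true for a multi-select question; "5 was not mentioned" is a membership test, which is what the FLAG pattern above implements.

```python
# Responses to a multi-select question like BRANDS: the respondent
# chose items 1, 5, and 2.
responses = [1, 5, 2]

# Ci3's IF (BRANDS <> 5) behaves like "any response differs from 5".
# The first response (1) already differs from 5, so this is True --
# even though 5 WAS mentioned.
any_not_five = any(r != 5 for r in responses)

# "5 was not mentioned" is the negation of membership, which is what
# the FLAG workaround computes. Here it is False, as it should be.
five_not_mentioned = 5 not in responses
```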
Direct Link with SPSS(TM)/WinCross(TM)
In Version 2, two of our export options automatically include question text and value labels in the resulting data file. [The other options output this information to a labels file.]
Take the following Ci3 question, for example:
LIST GLIST
North
South
East
West
ENDLIST

Q: GEOGRAPH
T: Where do you live?
I: SHOWLIST GLIST 10 30 16 2
LOC 10 4 2
SELECT 4 1 1 0
In Ci3, we select the SPSS for Windows EXPORT option and check the "Export Labels" box. The following output from SPSS shows that the question text and value labels are automatically included. (We asked for a "Frequencies" output in this example.)
01 Jun 97  SPSS for MS WINDOWS Release 6.0                        Page 1
File: test

GEOGRAPH   Where do you live?

                                                 Valid      Cum
Value Label       Value  Frequency  Percent    Percent   Percent

North                 1         10     50.0       50.0      50.0
South                 2          3     15.0       15.0      65.0
East                  3          5     25.0       25.0      90.0
West                  4          2     10.0       10.0     100.0
                         -------   -------    -------
                 Total        20     100.0      100.0

Valid cases    20     Missing cases    0

Ranking Questions in Version 2
Several users have seen the "ranking" question in our Windows demo for Version 2 and have asked how we programmed it. This is accomplished simply by including a DELETE (DLA) instruction prior to a SELECT instruction. For example:
LOC 6 6 1
DLA
SELECT 6 6 6 0
When the respondent clicks the first item in the list, a "1" is placed in the box next to that item. After the respondent ranks another item, a "2" is put in the box next to the second item selected, and so on. After the fifth item has been ranked, a "6" is automatically put in the box next to the one remaining item.