SS Spring 1996
Sawtooth Software was founded in 1983, shortly after small computers first burst upon the scene. We believed PCs had a bright future in the collection and analysis of marketing research data, and in the last decade we have produced many successful software products for interviewing and data analysis.
We believe our Ci2 and Ci3 products have come to be the most widely used interviewing software in the world today. Ci2 and Ci3 are used at conferences and trade shows, in mall intercept facilities, and just about anywhere else that people are interviewed.
Conjoint Analysis is a popular marketing research technique used to investigate what features a
new product should
have and how it should be priced. Our ACA System was recently found to be the most
frequently used method for
conjoint analysis in Europe, and we believe it enjoys similar status in the US and elsewhere.
Users of our software include Fortune 1000 companies in consumer and business-to-business markets, government agencies, market research firms, and universities. These organizations use our
software for product
and pricing research, social policy inquiries, epidemiological studies, academic investigations,
and opinion polling.
Because our people have been pioneers in computerized data collection as well as the
development of advanced
data analysis techniques, our products have set standards in their fields. Sawtooth products are
not only powerful,
but also easy to use. They make sophisticated research methods accessible to researchers who are generalists rather than specialists in statistics.
What Is Sawtooth Solutions? (SS Spring 96)
This is the new publication of Sawtooth Software. Last year, part of our company was "spun
off" to become Sawtooth Technologies, Inc., which now serves our former CATI users, remains
a sales representative for our Ci3 product, and continues to publish Sawtooth News.
Sawtooth Solutions contains information about computer interviewing, conjoint analysis, and other analytic methods for marketing research. We try to provide material that you'll find
genuinely useful. We hope you will eventually want to drop whatever you're doing when
Sawtooth Solutions arrives and check it out.
This initial issue is going to licensed users of our products and others who have contacted us in
the last two years. But to continue receiving it, you must let us know that you want to. Just
call to let us know, or fax or mail the section on the back page that contains your address, with a
correction if appropriate.
We're also experimenting with a fax-delivered version of Sawtooth Solutions. It consists of an abbreviated version, delivered by fax, to notify you of additional material you can obtain from our Internet home page. Fax delivery could inform you sooner about the availability of information such as product updates, bug fixes, suggestions for use of our software, or new products.
Staying Out of Trouble with ACA (SS Spring 96)
Though we've been told that ACA is remarkably easy to use, we talk nearly every day with an
ACA user who has run into a problem of some kind. We thought it might be helpful to list the
problems responsible for the most frequent customer support calls. Each could be the subject for
an essay, but we'll spare you, and just list the problems with a few words of explanation for each.
Using too many prohibitions: ACA lets you specify that certain combinations of
levels shouldn't occur together in the questionnaire. But if you prohibit too many combinations,
ACA won't be able to produce a good design, and may fail altogether. You can present
combinations of levels that do not exist in the market today, and including unusual combinations
can often improve estimation of utilities. Prohibitions should be used sparingly.
Calculating importances using average utilities: When possible, attribute importances should be computed for each individual and then averaged, rather than calculated using average utilities.
When based on average utilities, an attribute that is important to everyone, but about which
people disagree, can turn out to appear unimportant. (See an accompanying article, "The Basics
of Interpreting Conjoint Utilities.")
Doing complex conjoint studies by phone: Conjoint questionnaires are often difficult for respondents, who must keep many things in mind at the same time. ACA has been used
successfully in many phone studies, but it's best when the subject matter is simple and the
interview is short. We suggest you limit phone studies to 10 or fewer attributes, three or fewer
levels per attribute, and two attributes per paired concept.
Reversing signs of ordered attribute levels: If you already know the order of preference for an attribute's levels, such as for quality or price, you can inform ACA about which direction is
preferred and avoid asking respondents those questions. But you can also misinform ACA about
the preferred levels, which can lead to data almost impossible to salvage. To avoid this
situation, take the interview yourself, making sure that the questions are all reasonable (neither
member of a pair dominates the other on all included attributes). Also, answer the pairs section
with mid-scale values and then check to make sure the utilities come out as you expect.
Using ACA for pricing research when not appropriate: There are three aspects to this problem:
(1) All "main effects" conjoint methods, including ACA, assume that every product has
the same sensitivity to price. This is a bad assumption for many product categories, and
CBC may be a better choice for pricing research, since it can measure unique price
sensitivity for each brand (a brief sketch following point 3 illustrates the difference).
(2) When price is just one of many attributes, ACA may assign too little importance to it.
In a Sawtooth News article, Jon Pinnell reported that it may sometimes be appropriate to
increase the weight that ACA attaches to price. This is particularly likely if the questionnaire
includes several attributes that are similar in the minds of respondents, such as Quality,
Durability, and Longevity. If redundant attributes like these are included, they may
appear more important in total than they should be, and other attributes, such as price,
may appear less important than they really are.
(3) It is not a good idea to use ACA's (or CBC's) "correction for product similarity" with
quantitative variables such as price. Suppose there are five price levels, and all products
are initially at the middle level. As one product's price is raised, it can receive a "bonus"
for being less like other products which more than compensates for its declining utility
due to its higher price. The result is that the correction for product similarity can lead to
nonsensical price sensitivity curves.
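To make point 1 concrete, here is a minimal sketch in Python of the structural difference between a main-effects utility model and one with brand-by-price interactions. The brands, slopes, and prices are hypothetical illustrations, not CBC output or estimation code.

    # Main-effects model: every brand shares one price slope (illustrative values).
    brand_utils = {"Brand A": 1.2, "Brand B": 0.8, "Brand C": 0.3}
    shared_price_slope = -0.5  # utility change per dollar, identical for all brands

    def utility_main_effects(brand, price):
        return brand_utils[brand] + shared_price_slope * price

    # Interaction model: each brand has its own price sensitivity, which a
    # choice-based design can estimate through brand-by-price interaction terms.
    price_slopes = {"Brand A": -0.3, "Brand B": -0.5, "Brand C": -0.8}

    def utility_with_interactions(brand, price):
        return brand_utils[brand] + price_slopes[brand] * price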
Using unequal intervals for continuous variables: If you use the ranking rather than the rating option, ACA's prior estimates of utility for the levels of each attribute have equal increments.
That works well if you have chosen your attribute levels to be spaced regularly, for example with
constant increments such as prices of .10, .20, .30, or proportional increments such as 1 meg, 4
megs, or 16 megs. But if you use oddly structured intervals, such as prices of $1.00, $1.90, and
$2.00, ACA's utilities are likely to be biased in the direction of equal utility intervals.
Including too many attributes: ACA lets you study as many as 30 attributes, each with multiple levels. But that doesn't mean anyone should ever have a questionnaire that long! Many of the
problems with conjoint analysis occur because we ask too much of respondents. Don't include n
attributes when n-1 would do!
Including too many levels for an attribute: Some researchers mistakenly use many levels in the hope of achieving more precision. ACA can only study 5 levels in detail, and when there are
more than 5 levels, ACA must make assumptions about the others. With quantitative variables
such as price or speed, you will have more precision if you measure only 5 levels and use
interpolation for intermediate values.
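To show what interpolation for intermediate values looks like, here is a minimal sketch in Python. The price points and utilities are hypothetical, chosen only to illustrate linear interpolation between measured levels.

    import numpy as np

    # Hypothetical utilities measured at 5 price levels (illustrative values only).
    measured_prices = [1.00, 1.25, 1.50, 1.75, 2.00]
    measured_utils = [60, 48, 35, 20, 0]

    def interpolated_utility(price):
        # Linearly interpolate utility for a price between two measured levels.
        return np.interp(price, measured_prices, measured_utils)

    print(interpolated_utility(1.60))  # falls between the $1.50 and $1.75 utilities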
Abuse of unacceptables: ACA lets you include an "unacceptables" section, in which
respondents are permitted to identify features so unattractive that products with those features
would never be considered. Those attribute levels are excluded from the balance of the
interview. Unacceptables provide a way to shorten interviews that would otherwise be too long,
but respondents are often too willing to discard levels as "totally unacceptable." We suggest avoiding the use of unacceptables. (Ask us for a copy of an article on unacceptables by Noreen M. Klein.)
Wording of null levels: Some attributes have levels of "present" and "absent." Ideally, the "absent" level will serve only as a contrast to the "present" level, rather than having a negative
intrinsic effect. One approach is to avoid using the word "absent" altogether, representing that
level with a neutral symbol such as a dash or a period.
Interpreting simulation results as "market share": Conjoint simulation results often look so much like market shares that people sometimes forget they are not. Conjoint simulation results
seldom include the effects of distribution, out-of-stock, or point-of-sale marketing activities.
Also, they presume every buyer has complete information about every product. Researchers who
represent conjoint results as forecasts of market shares are asking for trouble.
Not including adequate attribute ranges: It's usually all right to interpolate, but usually not to extrapolate. With quantitative attributes, include enough range to describe all the products you
will want to simulate.
Imprecise attribute levels: We assume that attribute levels are interpreted similarly by all respondents. That's not possible with "loose" descriptions like "10 to 14 pounds" or vague terms such as "good."
Attribute levels not mutually exclusive: Every product must have exactly one level of each attribute. Researchers new to conjoint analysis sometimes fail to realize this, and use attributes
for which many levels could describe each product. For example, with magazine subscription
services, one might imagine an attribute listing magazines respondents could read, in which a
respondent might want to read more than one. An attribute like that should be divided into
several, each with levels of "yes" and "no."
Misinterpreting a specification of zero in simulations: ACA's simulator lets you specify a product's value for an attribute as zero, meaning: "don't include this attribute for this product."
But to use a specification of zero correctly, you must also use zeros for the other products.
Researchers sometimes assume in error that zero means "not available."
Assessing the impact of product line extensions inappropriately: All logit-based simulators have trouble with products that are very similar to one another. If two products are especially
similar, most conjoint simulators will give them more share than they deserve. This presents a
problem for trying to assess the impact of a line extension, which probably shares many
characteristics with a current product. One way to approach this problem is to use ACA's
"correction for product similarity." Another is to "fool" the simulator by including two versions
of every product, where the product with the line extension has two somewhat different products,
but others each have two identical entries.
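Here is a minimal sketch of that share-inflation problem, assuming a plain logit share rule. The products and utility values are hypothetical, and the code illustrates only the general effect, not any Sawtooth simulator.

    import numpy as np

    # Hypothetical total utilities for three distinct products (illustrative values).
    products = {"A": 2.0, "B": 1.5, "C": 1.0}

    def logit_shares(utils):
        # Share of preference under a simple logit rule, which treats all products
        # as equally distinct from one another.
        values = np.array(list(utils.values()))
        expu = np.exp(values)
        return dict(zip(utils.keys(), expu / expu.sum()))

    print(logit_shares(products))

    # Add a line extension that is nearly identical to product A.
    with_extension = dict(products, **{"A-extension": 1.95})
    print(logit_shares(with_extension))
    # A and its extension together now capture far more share than A alone did,
    # even though in the real market the extension would mostly cannibalize A.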
Insufficient Memory: ACA Version 4 requires 550K of free memory for questionnaire
authoring, although less memory is required for interviewing.
An Important Difference Between ACA Versions 3 and 4: Some users of Version 3 have been disturbed, when upgrading to Version 4, to find that "correlation" values suggest that
utilities predict responses to the calibration concepts less well. Much of this difference is due to
a change in how goodness of fit is reported. In Version 3 we reported the correlations between
predictions and actual responses, and in Version 4 we report the squares of the correlations.
Thus, a correlation of, say, .7 would be reported as an r-squared of .49 in Version 4. There are many other differences between Versions 3 and 4 as well, which are documented in the "ACA V4 Technical Paper," which may be downloaded from our Internet home page.
How Many Questions Should You Ask in Choice-Based Conjoint? (SS Spring 96)
When planning a choice-based conjoint study, one must decide how many choice tasks to give
each respondent. This is an important issue because we know that if the interview is too long,
respondents can get fatigued or bored, and their answers may be of little value. But, we are
motivated to collect as much data from each respondent as possible to maximize the impact of
each dollar spent on field work.
At the AMA's 1996 ART Forum, we reported results of a project undertaken to shed light on this
question. We re-analyzed data from 21 commercial CBC studies, to see how results would
depend on the number of tasks respondents are given.
Data sets were contributed by Dimension Research, Griggs-Anderson Research, POPULUS,
IntelliQuest, McLauchan and Associates, Mulhern Consulting, and SKIM Analytical, as well as
several end-users of CBC data. The studies included a wide variety of product categories
ranging from beverages to computers and airplanes. They involved field work done in several
countries and languages. The number of attributes ranged from three to six, and the number of
choice tasks ranged from 8 to 20. The numbers of respondents ranged from 50 to 1205, and
altogether they contained approximately 100,000 choice tasks.
Because these data sets were not designed for methodological purposes, most did not include
holdout tasks that could be used to assess predictive validity. Consequently, our analysis has
centered around the topics of reliability and internal consistency. Here are the main findings:
How many choice tasks should you ask each respondent? You can usually ask at least 20 choice tasks without degradation in data quality. Within that range, there is no evidence of
increasing random error, and later tasks provide data at least as reliable as earlier tasks.
How much information is contributed by multiple answers from each respondent?
Although there is no disputing the value of sample size, considerable gains can also be made
from increasing the number of tasks per respondent. Within the ranges we studied, doubling the
number of tasks per respondent is about as effective in increasing precision as doubling the
number of respondents.
Is there a systematic change in respondents' answers as the interview progresses? Do
brand or price become more important? Do respondents become more or less likely to
choose the "none" option? Yes to all three. Brand becomes less important, and price more so,
and respondents are more likely to choose "none" as the interview progresses. These systematic
effects are what limit the number of tasks each respondent should be given, rather than
anticipated increases in random noise.
Should you ask for just the first choice for each set of concepts, or is it useful to ask for
second choices as well? Second choices provide more information at less cost, but they are
biased. We advise asking only first choices.
How long does it take respondents to answer choice questions? How long is an interview with a certain number of tasks likely to take? Choice-based conjoint interviews go quite
quickly. Average response times ranged from about 40 seconds for the first task to 13 seconds
for the last. Even for 20 tasks, the longest average interview time was about 7 minutes.
We were surprised by some of these findings. There are three main things that we've learned
from this analysis:
1) Before doing this study we were more concerned about burdening respondents with long
questionnaires than we needed to be, though it still appears that very long interviews may
produce distortions in brand/price tradeoffs.
2) We had been impressed by the efficiency of asking for second choices, without adequate
recognition of the bias inherent in their use.
3) We had incorrectly suspected respondents often chose "none" to avoid difficult tasks, rather
than because the offerings weren't attractive.
Fortunately, none of these surprises consists of bad news, and we think there is good reason for
the enthusiasm with which choice-based conjoint analysis has been accepted by the marketing research community. A copy of the complete study can be downloaded from the Technical Papers section of
our home page on the Internet.
The Basics of Interpreting Conjoint Utilities (SS Spring 96)
Users of conjoint analysis are sometimes confused about how to interpret utilities. Difficulty
most often arises in trying to compare the utility value for one level of an attribute with a utility
value for one level of another attribute. It is never correct to compare a single value for one
attribute with a single value from another. Instead, one must compare differences in values.
The following example illustrates this point:
           Utility               Utility                Utility
Brand A      40       Red          20       $ 50          90
Brand B      60       Blue         10       $ 75          40
Brand C      20       Pink          0       $100           0
It is not correct to say that Brand C has the same desirability as the color Red. However, it is
correct to conclude that the difference in value between brands B and A (60-40 = 20) is the same
as the difference in values between Red and Pink (20-0 = 20). This respondent should be
indifferent between Brand A in a Red color (40 + 20 = 60) and Brand B in a Pink color (60 + 0 = 60).
Sometimes we want to characterize the relative importance of each attribute. We do this by
considering how much difference each attribute could make in the total utility of a product. That
difference is the range in the attribute's utility values. We express each range as a percentage of the sum of all ranges, obtaining a
set of attribute importance values that add to 100, as follows:
                          Range             Percent Importance
Brand (B - C)             60 - 20 = 40             26.7
Color (Red - Pink)        20 -  0 = 20             13.3
Price ($50 - $100)        90 -  0 = 90             60.0
For this respondent, the importance of Brand is 26.7%, the importance of Color is 13.3%, and the
importance of Price is 60%. Importances depend on the particular attribute levels chosen for the
study. For example, with a narrower range of prices, Price would have been less important.
When summarizing attribute importances for groups, it is best to compute importances for
respondents individually and then average them, rather than computing importances using
average utilities. For example, suppose we were studying two brands, Coke and Pepsi. If half of
the respondents preferred each brand, the average utilities for Coke and Pepsi would be tied, and
the importance of Brand would appear to be zero!
Users of ACA or CVA may download a module named IMP.EXE from our Internet home page
(http://www.sawtoothsoftware.com) that will read files of individual utilities from ACA or
CVA and create a file of individual attribute importances, and print the average importances.
This should help you in determining and reporting attribute importances.
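For readers who want to see the arithmetic, here is a minimal sketch in Python of the individual-then-average calculation and of the pitfall of using average utilities. The respondents and utility values are hypothetical, and this is only an illustration of the idea, not the code behind IMP.EXE.

    import numpy as np

    # Hypothetical part-worth utilities for two respondents: a two-level Brand
    # attribute and a three-level Price attribute (illustrative values only).
    respondents = [
        {"Brand": [30, -30], "Price": [20, 0, -20]},   # prefers the first brand
        {"Brand": [-30, 30], "Price": [25, 0, -25]},   # prefers the second brand
    ]

    def importances(utils):
        # Percentage each attribute's utility range against the sum of all ranges.
        ranges = {attr: max(vals) - min(vals) for attr, vals in utils.items()}
        total = sum(ranges.values())
        return {attr: 100 * r / total for attr, r in ranges.items()}

    # Correct: compute importances for each respondent, then average them.
    per_person = [importances(r) for r in respondents]
    averaged = {a: np.mean([p[a] for p in per_person]) for a in per_person[0]}
    print(averaged)                # Brand keeps its importance

    # Misleading: average the utilities first, then compute importances.
    avg_utils = {a: np.mean([r[a] for r in respondents], axis=0) for a in respondents[0]}
    print(importances(avg_utils))  # Brand's importance collapses toward zero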
The CCA Challenge (SS Spring 96)
We've been puzzled by something about our CCA System for Convergent Cluster Analysis.
CCA users have told us that they think CCA is our best product, and yet
its sales have been disappointing.
One of CCA's advantages is that it provides built-in assurance that its solutions will be
reproducible, a feature we believe is unique. Another is that, because it computes several
solutions and reports only the best one, its solutions tend to be of high quality. We keep hoping
we can find some way to get the word out about how well it works.
So, we offer the CCA challenge. We're confident that if you compare CCA with another
clustering procedure, you'll prefer CCA's cluster solution. Send us a data set that you've
clustered by your usual method, and we'll send you back the CCA solution free of charge. Your
only obligation is to compare the two solutions and tell us which you prefer. Call us at (360)
Ci3 Version 2 for Windows(R) 95 (SS Spring 96)
Ci3 Version 2 for Windows 95 is now available. This new version brings the Windows "look and feel" to Ci3, for both author and respondent.
For authoring, the advantages are:
A Windows interface, with tool bar, buttons, and single-click access to Editing,
Compiling, and Testing.
Extensive context-sensitive help. You may never open the manual again.
A built-in editor that provides detailed help with syntax. If you forget an instruction's
syntax, a click and a keystroke will give you an on-screen display of the
needed information. Of course, you can still use your own editor if you prefer.
No more DOS-induced limitations on memory.
You can create alternative versions of an interview for Windows 95 and DOS.
For respondents, interviews can have the "look and feel" of Windows, including:
Graphics and Sound.
Windows controls, with radio buttons, sliders, etc.
Text in different fonts and sizes.
Ci3 v2.0 requires Windows 95 or Windows NT(TM) for authoring. Interviewing computers can
run Windows 95, Windows NT, Windows 3.x with Win32s(R), or DOS.
Ci3 v2.0 does not currently permit interviewing on computers running Windows 3.x without
Win32s, but that will be available as a free upgrade to users of Version 2.
Despite its greater power, Version 2 is priced like the DOS version. The upgrade to Version 2
costs approximately 40% of the price of the corresponding DOS version.
The Sixth Sawtooth Software Conference (SS Spring 96)
From 1987 through 1992 we held several Sawtooth Software Conferences. Many of our
colleagues have commented that those conferences were enjoyable and valuable, and have asked
us to sponsor another. We will resume that tradition next year. The sixth Sawtooth Software
Conference is scheduled for August 20-22, 1997, in Seattle, Washington.
The general topic will again be Computer Interviewing and Analytical Methods for Marketing
Research. As in the past, we will require papers in advance, and we will issue a proceedings
shortly after the conference.
August of 1997 is a long way off, but there's a lot of work involved in staging an event of the
quality we desire and you expect. We'll issue a call for papers, but in the meantime we'd be
delighted to receive any suggestions. And please mark your calendar!