Some Highlights from the Sawtooth Software Conference 2010 – Day 1

[Editor’s Note: Research Access contributor Nico Peruzzi is on-site at the Sawtooth Software conference this week, and sends us this dispatch.]

The weather is pretty grey in Newport Beach, CA, but that’s good because we can focus on the talks.  Day 1 is workshop day, and here are some highlights of the two that I attended.

Adventures in Advanced CBC Applications

Presented by Bryan Orme, President of Sawtooth Software and David Lyon, Principal at Aurora Market Modeling, LLC, this workshop covered lots of different advanced applications of Choice-Based Conjoint (Discrete Choice Modeling).

Here are just a few highlights of this four-hour workshop:

  • Alternative-Specific Designs give the flexibility to show unique sets of attributes based on the context (e.g., in a short-range travel study, when “drive my car” is shown, “parking fee $8.00” is also shown; however, when “ride the bus” is shown, “picks up every 20 min.” and “75 cents per one-way trip” are shown).  These designs make the exercise more realistic.
  • Prohibitions (restricting attribute levels that can be shown in combination) can be a red flag for the way you have set up your attributes and levels.  General rule: if the respondent will be confused by a combination, use a prohibition; however, if it’s simply “we don’t offer this combination”, then you don’t have to use a prohibition.  Remember that prohibitions reduce the efficiency of your design and you’d need more sample to overcome this issue.
  • Conditional Pricing (varying the price points shown, conditioned on the presence of a level of a certain attribute) only makes sense if the levels of price are in some sense consistent across conditions: typically the same $ difference or the same % difference.
  • Hierarchical Bayes is generally the best analysis method for these scenarios – certainly better than Aggregate Logit – and this is true even for basic CBC analyses.
  • Summed Pricing extends the idea of conditional pricing and attaches incremental price changes to each level of each attribute.  It’s important to add +/- X% to summed prices so that price is not confounded with the levels.  Doing so helps partial out the effect of each level, so that the utilities are not “conditional” on the price.  +/- 30% was suggested as the random variation to apply.
  • Bundling was discussed, and it can get complicated, but the idea is to create a custom design that mimics the purchase process as closely as possible.  Modeling and simulation in this case are custom.
  • Volumetric CBC asks respondents to declare a “quantity” of each choice that they would buy, versus making a “discrete” choice as to which one they would buy.  Volumetric CBC appears better at revealing near-term spikes in market share, whereas discrete choice is better at showing a long-term equilibrium for a product (it is more stable over time).
  • “Evoked Set” CBC takes a reduced set of attribute levels forward into the task (e.g., start with 14 levels, but respondents will only answer about 7).  It takes some fancy data-processing work.
  • Build-Your-Own (BYO) is a task where respondents see all attributes and choose the level of each that they prefer.  BYO can be a good “training task” for respondents, especially if they are going into a complex choice task.  BYO is built into Sawtooth Software’s Adaptive Choice-Based Conjoint.
  • Menu-Based Choice (MBC) appears to be what will become the newest addition to the conjoint family.  Design is very customized and respondents have an opportunity to choose various items from a “menu” to build their preferred product.  Counts analysis can work for now, but Hierarchical Bayes (HB) analyses will likely form the basis of models being developed to manage this type of data.  However, early research suggests that HB doesn’t give as much lift over Aggregate Logit in menu-based choice as it does in regular CBC.
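The summed-pricing idea above can be sketched in a few lines: a profile’s price is a base price plus the per-level increments, perturbed by a random +/- 30% factor so that price is not perfectly confounded with the levels.  This is a minimal illustration only – the attribute names, level increments, and base price below are all made up for the example.

```python
import random

# Hypothetical per-level price increments (illustrative values only).
LEVEL_PRICES = {
    "processor": {"basic": 0, "fast": 100, "fastest": 250},
    "screen": {"13-inch": 0, "15-inch": 75},
    "warranty": {"1 year": 0, "3 years": 60},
}

BASE_PRICE = 500  # assumed base price for the product category


def summed_price(profile, variation=0.30, rng=random):
    """Sum the level increments over the base price, then apply a random
    +/- `variation` shock so price varies independently of the levels."""
    total = BASE_PRICE + sum(LEVEL_PRICES[attr][lvl] for attr, lvl in profile.items())
    shock = 1 + rng.uniform(-variation, variation)
    return round(total * shock, 2)


profile = {"processor": "fast", "screen": "15-inch", "warranty": "1 year"}
price = summed_price(profile)
# the unperturbed sum is 675, so price falls between 675 * 0.7 and 675 * 1.3
```

Because the shock is drawn per profile shown, the same configuration appears at different prices across tasks, which is what lets the analysis separate the price effect from the level effects.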

Research for Solid Pricing Decisions

Presented by David Lyon, Principal at Aurora Market Modeling, LLC, this workshop presented an overview of commonly used pricing research.  The workshop was organized in terms of direct questioning techniques and trade-off methods.

Direct Questioning Techniques:

  • Willingness to Pay: “How much are you willing to pay for this?”.  At best, for a radically new product, it gets you into the ballpark.  Don’t pre-list answer choices – that wipes out the upside possibility.  Plot the % willing to pay at each price.  Overall, a pretty weak technique.
  • Monadic Designs: split the sample into groups and present a different price to each.  Best to add a buy-response question (“Would you buy it?”), or better yet, a less-variable measure of purchase intent such as an intent scale or an allocation/likelihood question.  Use large samples and match cells carefully.
  • Sequential Monadic: ask an initial, fully-disguised monadic question, then follow up with “What about this price?”, “What about that price?”.  Sometimes done low to high price, or high to low, or “Gabor-Granger” (random order).  Problem = no way to disguise the focus on price in the follow-ups, which results in consistent over-estimation of price sensitivity = not realistic.  Huge biases here.
  • Van Westendorp Price Sensitivity Meter: four questions: At what price would you find the product… “too cheap”, “cheap”, “expensive”, “too expensive”?  The classic curve-crossing analysis has been shown to be unrealistic; instead, try plotting the % of respondents who fall in the “normal” range (between their cheap and expensive answers) against price.  OK for early exploration; view with skepticism.
  • Newton-Miller-Smith variation of Van Westendorp: add 2 questions: at [cheap price], likelihood to buy?; at [expensive price], ditto.  Translate the likelihood scale into purchase probability, then average the probability curves over all respondents.  Problem = most of us can’t do the translation to probabilities.
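The “normal range” plot suggested for Van Westendorp data is simple to compute: for each candidate price, count the share of respondents whose “cheap” and “expensive” answers bracket it.  A minimal sketch, using made-up toy responses:

```python
# Hypothetical Van Westendorp answers: each respondent's "cheap" and
# "expensive" price points (illustrative data only).
responses = [
    {"cheap": 8, "expensive": 15},
    {"cheap": 10, "expensive": 20},
    {"cheap": 12, "expensive": 18},
    {"cheap": 9, "expensive": 14},
]


def pct_in_normal_range(price, responses):
    """Share of respondents for whom `price` falls between their
    'cheap' and 'expensive' answers (the 'normal' range)."""
    hits = sum(1 for r in responses if r["cheap"] <= price <= r["expensive"])
    return 100.0 * hits / len(responses)


# Evaluate the curve over a grid of candidate prices ($8..$20).
curve = {p: pct_in_normal_range(p, responses) for p in range(8, 21)}
# at $13, all four toy respondents are in their normal range (100%)
```

The resulting curve typically rises to a plateau and falls off; the plateau region is the price band most respondents consider neither too cheap nor too expensive.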

Trade-Off Techniques:

  • Ratings-Based Full Profile Conjoint: present profiles and have respondents rate or rank them.  Problem = systematically underestimates price sensitivity.  The impact of more concrete attributes like price is under-predicted, whereas more emotionally laden attributes are over-estimated.
  • Price-Only Choice-Models: allow showing different prices for different brands, allow different price utilities for different brands, and avoid the systematic underestimation of price effects.  Think of each product’s price as a separate “attribute” in the conjoint sense.  The design can be fractionalized so that each respondent does not need to do the whole design.  Usually modeled at the aggregate level, but using HB would be better.  Issue – you cannot simulate with products deleted from, or added to, the basic set you designed around.  It is plainly obvious to respondents that we are testing price, but experience shows the price sensitivity obtained is realistic.
  • Discrete-Choice Modeling (Choice-Based Conjoint): a whole talk in and of itself, but here are a few highlights.  Add other attributes to price-only to provide more realism to the task and decrease the bias toward oversensitivity.  Using Multinomial Logit introduces the “red bus – blue bus” problem (independence of irrelevant alternatives: IIA).  This means that if we have 4 products with shares of 40%, 30%, 20%, and 10%, and we cut the price of the first product so that its share increases to 50%, IIA takes that 10% gain and draws it proportionally from the other 3 products (i.e., we now have 50%, 25%, 16.7%, and 8.3%).  Logit does this without “thinking”, no matter what the data might say.  Use HB instead of Aggregate Logit to solve this problem.
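The IIA share arithmetic in that example can be sketched directly; the function name and toy shares below are illustrative:

```python
def iia_adjust(shares, index, new_share):
    """Under IIA (multinomial logit), when one product's share changes,
    the remaining share is split among the other products in proportion
    to their original shares."""
    others_total = sum(s for i, s in enumerate(shares) if i != index)
    remaining = 100.0 - new_share
    return [
        new_share if i == index else s * remaining / others_total
        for i, s in enumerate(shares)
    ]


# Product 1 moves from 40% to 50%; the other shares shrink proportionally.
result = iia_adjust([40, 30, 20, 10], index=0, new_share=50)
# → [50.0, 25.0, 16.67, 8.33] (to two decimals), matching the example above
```

The proportional draw is exactly why logit cannot reflect closer substitution between similar products (the red bus stealing mostly from the blue bus): every competitor loses in proportion to its size, regardless of similarity.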
