Highlights from the Sawtooth Software Conference 2010 – Day 2

The sun came out in Newport Beach on Wednesday, but with 10 presentations there was no time to play. Here are some highlights from each of the talks.

What Drives Me? Developing a Conjoint-Based Recommendation Engine for Individual Vehicle Consideration (Ely Dahan, UCLA, Claremont, Princeton)

  • Individual-level conjoint involves the respondent in an adaptive, engaging exercise, with the goal of helping them gain personal insight and receive a recommendation.
  • Used the example of a car recommendation engine in development for Edmunds.com.
  • Uses pre-existing market data, combined with the individual’s own choices.
  • Has good promise for the area of consumer search.

Analyzing Consumers’ Screening Rules by Means of Virtual Online Shops (Soren Scholz & Reinhold Decker, Bielefeld University, and Beate Sarnowski & Marie Schuir, Interrogare)

  • A virtual online shop was developed to help overcome some of the limitations of traditional conjoint approaches when studying complex purchase decisions.
  • Their tool picks up on non-compensatory decision making behavior (note that Adaptive Choice-based Conjoint does as well).
  • Good predictive validity (hits on hold-out tasks).
  • Can be combined with conjoint techniques to improve measurement.

The Success of Choice-Based Conjoint Design Among Respondents Making Lexicographic Choices (Keith Chrzan, John Zepp, & Joseph White, Maritz Research)

  • A lexicographic choice model involves a sequential choice process – like non-compensatory decision making (setting rules or cut-offs, must-haves and unacceptables).
  • Research shows between 20% and 66% of respondents use a lexicographic choice process.
  • Be careful with minimal overlap designs – designs with overlap can be more informative.
  • Best-Worst Discrete Choice Analysis (where not only a best is chosen from a consideration set, but a worst is, as well) helps deal with lexicographic responders.
  • The idea is that if someone makes a cut-off type of choice, we lose information about the other attributes, so we need to do something to get them to answer about secondary attributes. Note that Adaptive Choice-Based Conjoint does this well. (A toy illustration of this information loss follows this list.)
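
To make that information-loss point concrete, here is a minimal sketch (my own toy example, not from the talk) of a lexicographic responder: attributes are checked in a fixed priority order, and lower-priority attributes only matter when the higher ones tie. When choice sets have little or no overlap on the top attribute, the choices reveal nothing about the rest.

```python
# A minimal sketch (not from the talk) of a lexicographic chooser:
# alternatives are dicts of attribute levels scored 0-2 (higher = better),
# and the respondent compares attributes in a fixed priority order,
# only falling through to the next attribute on a tie.

def lexicographic_choice(alternatives, priority):
    """Return the index of the chosen alternative."""
    remaining = list(range(len(alternatives)))
    for attr in priority:
        best = max(alternatives[i][attr] for i in remaining)
        remaining = [i for i in remaining if alternatives[i][attr] == best]
        if len(remaining) == 1:
            return remaining[0]
    return remaining[0]  # still tied on every attribute: take the first

# Hypothetical task: brand matters most, then price, then warranty.
priority = ["brand", "price", "warranty"]
task = [
    {"brand": 2, "price": 0, "warranty": 0},
    {"brand": 1, "price": 2, "warranty": 2},
    {"brand": 0, "price": 2, "warranty": 2},
]

print(lexicographic_choice(task, priority))  # -> 0
# With minimal overlap on brand, the choice is decided by brand alone,
# so the task tells us nothing about price or warranty preferences --
# which is why overlap (or best-worst questioning) can be more informative.
```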

Menu-Based Choice Modeling Using Traditional Tools (Bryan Orme, Sawtooth Software)

  • The idea of selecting from menus in a choice exercise makes sense given all the times respondents do so in the real world (ordering at a restaurant, configuring a computer or a car, telecom/internet/phone bundles, single or multiple drug therapies).
  • These designs are custom-built and, currently, custom-analyzed.
  • Counts analysis won’t cut it; volumetric allocation, serial cross-effects, and exhaustive alternatives models are being used.
  • Exhaustive alternatives shows promise – it has the benefit of being a single model, but can have a large number of parameters to estimate. Be careful of overfitting. (A toy sketch of the idea follows this list.)
  • Interestingly, aggregate logit does as well as HB in some tests.
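
For readers unfamiliar with the exhaustive alternatives idea, here is a toy sketch (my own construction, with made-up utilities): every possible combination of menu items becomes its own alternative in a single multinomial logit, which is why the parameter count can grow so quickly.

```python
# A minimal sketch (my own, assuming hypothetical utilities) of the
# "exhaustive alternatives" idea: every possible combination of menu items
# becomes its own alternative in a single multinomial logit, so a 3-item
# menu already has 2**3 = 8 alternatives (including "buy nothing").

from itertools import product
import math

items = ["burger", "fries", "drink"]
item_utility = {"burger": 1.2, "fries": 0.6, "drink": 0.4}   # hypothetical
pair_bonus = {("burger", "fries"): 0.5}                      # hypothetical cross-effect

def combo_utility(combo):
    u = sum(item_utility[i] for i, picked in zip(items, combo) if picked)
    for (a, b), bonus in pair_bonus.items():
        if combo[items.index(a)] and combo[items.index(b)]:
            u += bonus
    return u

combos = list(product([0, 1], repeat=len(items)))   # all 8 menu combinations
exp_u = [math.exp(combo_utility(c)) for c in combos]
total = sum(exp_u)

for c, eu in zip(combos, exp_u):
    chosen = [i for i, p in zip(items, c) if p] or ["nothing"]
    print(f"{'+'.join(chosen):20s} P = {eu / total:.3f}")
# The parameter count explodes as items and interactions are added,
# which is the overfitting risk the presentation warned about.
```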

Analyzing Pick n’ Mix Menus via Choice Analysis to Optimize the Client Portfolio (Chris Moore & Corrine Moy, GfK NOP)

  • Moving from showing respondents a portfolio of “fixed” products to choose from, to a portfolio of features that consumers can pick and choose from to design their own product.
  • Focus on identifying a set of features that should be offered, how consumers want to customize their product, the premium (if any) consumers are willing to pay for a customized product, and how to price each feature to simultaneously increase customer value/revenue/profit.
  • The serial cross-effects model appears to be very robust, with excellent holdout validation. (A rough sketch of the idea follows this list.)
  • It’s important to include LOTS of sample (think N=1000).
  • Be careful about overloading respondents with too long/difficult a task.
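
My own rough reading of a serial cross-effects setup, sketched below with simulated data: one binary buy/no-buy equation per menu item, with the other items’ prices entering as cross-effect covariates. This illustrates the general idea, not the presenters’ exact specification.

```python
# A rough sketch (my interpretation, with simulated data) of a serial
# cross-effects setup: one binary "chose this item / didn't" logit per menu
# item, where the item's own price AND the other items' prices enter as
# predictors, so a price change on one item can shift demand for the others.

import numpy as np

rng = np.random.default_rng(0)
n_resp = 500
items = ["broadband", "tv", "phone"]

# Simulated prices each respondent saw, in tens of dollars (hypothetical).
prices = {i: rng.uniform(1.0, 4.0, n_resp) for i in items}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulated "chose broadband" data: own price lowers take-up, and a cheap TV
# add-on nudges broadband take-up upward (the cross-effect to recover).
p_true = sigmoid(2.0 - 0.8 * prices["broadband"] - 0.3 * prices["tv"])
y = rng.binomial(1, p_true)

# Design matrix for the broadband equation: intercept + all three prices.
X = np.column_stack([np.ones(n_resp)] + [prices[i] for i in items])

# Plain gradient-ascent logistic regression (no external libraries needed).
beta = np.zeros(X.shape[1])
for _ in range(20000):
    beta += 0.05 * X.T @ (y - sigmoid(X @ beta)) / n_resp

print(dict(zip(["intercept"] + items, np.round(beta, 2))))
# A full study fits one such equation per menu item ("serially") and then
# simulates the whole menu by evaluating all of the equations together.
```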

An Empirical Test of Bundling Techniques for Choice Modeling (Jack Horne, Silvo Lenart, Paul Donagher, & Bob Rayner, Market Strategies International)

  • Some are using the term “Build Your Own” for Menu-Based Choice.
  • The “reservation price,” as David Bakken uses the term, is the highest price an individual consumer is willing to pay for a given product.
  • Key assumptions for bundling research:
    • When offered products individually, consumers will buy those priced at or below their reservation price and will not buy those priced above it.
    • When responding to fixed bundles, the reservation price for the entire bundle (not the individual component products) determines purchase.
    • This bundle reservation price may or may not equal the sum of the reservation prices for the individual items making up the bundle. (A toy numeric illustration appears after this list.)
  • Single Brand BYO asks the respondent to build a product for a single brand; Market BYO does the same across multiple brands; Fixed Bundling is like discrete choice (show complete products and ask which one is preferred).
  • These methods force respondents to react to different marketplaces, and they produce different results:
    • Single Brand BYO: fewest products chosen, oftentimes only one.
    • Market BYO: larger predicted take rates for individual products.
    • Fixed Bundles: price sensitivity curves are flatter; larger predicted take rates for individual products.
  • Revenue and product penetration are maximized using Market BYO or Fixed Bundles.
  • Single Brand BYO and Market BYO also reach those individuals interested in single products who may not be interested in purchasing Fixed Bundles. In this way, the BYO methods can complement Fixed Bundles.
  • When there is no brand, Single Brand BYO may be a good alternative for measuring generic willingness-to-pay and take rates.
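
A toy numeric illustration of the reservation-price assumptions above (hypothetical products and prices, not from the paper):

```python
# Toy illustration (hypothetical numbers) of the reservation-price assumptions:
# a consumer buys an item offered alone only if its price is at or below their
# reservation price, while a fixed bundle is judged against a single
# bundle-level reservation price that need not equal the sum of the item-level ones.

item_reservation = {"modem": 60, "router": 40}   # willingness to pay, offered alone
bundle_reservation = 85                          # sub-additive: 85 < 60 + 40

item_prices = {"modem": 55, "router": 45}
bundle_price = 80

buys_alone = {i: item_prices[i] <= item_reservation[i] for i in item_prices}
buys_bundle = bundle_price <= bundle_reservation

print(buys_alone)   # {'modem': True, 'router': False}
print(buys_bundle)  # True: the bundle sells even though the router alone would not
```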

Anchoring Maximum Difference Scaling Against a Threshold: Dual Response and Direct Binary Responses (Kevin Lattery, Maritz Research)

  • Following up MaxDiff questions with additional questions to tease apart respondents who have the same item importance rankings but could have gotten there by different means.
  • Louviere’s Indirect Method asks a question after every MaxDiff screen: “Considering just the 4 features above, which of the following best describes your views about which features are very important for your ideal [product]?” Response set = All 4 of these features are very important, None of these 4 features are very important, Some are very important, some are not.
    • Awkward and adds respondent work; appears to reintroduce some scale/disposition bias, and if the results are used in segmentation, they could dominate it. The more items shown per task, the more likely the outcome will appear indeterminate.
  • Direct Method is asked only once, at the end of the MaxDiff exercise: “Please tell us which of the features below are very important for your ideal [product]?” Response set = all MaxDiff items (or a couple of pages with randomized sets if the item list is very long).
    • Much quicker; respondents are more critical (fewer items rated very important); some context dependency in which items are chosen. (A rough sketch of how such anchor answers can be coded follows this list.)
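
Lattery’s exact estimation setup isn’t in my notes, but one common way to fold these anchor answers into estimation is to treat the threshold as a pseudo-item with utility fixed at zero, coding each “very important” answer as the item beating the threshold (and “not important” as the reverse). Here is a simplified paired-comparison sketch with hypothetical data:

```python
# A simplified sketch (my own coding, not the authors') of anchoring:
# MaxDiff judgments and the direct binary "very important?" answers are both
# expressed as paired comparisons, with a pseudo-item ANCHOR whose utility is
# fixed at 0. Items landing above 0 are "more important than the threshold".

import numpy as np

items = ["A", "B", "C", "D", "ANCHOR"]
idx = {name: i for i, name in enumerate(items)}

# (winner, loser) pairs: the MaxDiff bests/worsts exploded into pairs, plus the
# direct binary answers ("A and B are very important; C and D are not").
pairs = [("A", "B"), ("A", "C"), ("A", "D"),     # A picked as best over B, C, D
         ("B", "D"), ("C", "D"),                 # D picked as worst
         ("A", "ANCHOR"), ("B", "ANCHOR"),       # rated "very important"
         ("ANCHOR", "C"), ("ANCHOR", "D")]       # rated "not very important"
wins = np.array([[idx[w], idx[l]] for w, l in pairs])

# Bradley-Terry-style logit fit by gradient ascent; the ANCHOR utility is
# pinned at 0 so positive utilities mean "above the importance threshold".
# A small ridge penalty keeps estimates finite on this tiny illustrative data.
beta = np.zeros(len(items))
for _ in range(5000):
    diff = beta[wins[:, 0]] - beta[wins[:, 1]]
    p_win = 1.0 / (1.0 + np.exp(-diff))          # P(winner beats loser)
    grad = np.zeros_like(beta)
    np.add.at(grad, wins[:, 0], 1.0 - p_win)
    np.add.at(grad, wins[:, 1], -(1.0 - p_win))
    beta += 0.1 * (grad - 0.05 * beta)
    beta[idx["ANCHOR"]] = 0.0

for name in items:
    print(f"{name:6s} {beta[idx[name]]:+.2f}")
```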

Directing Product Improvements from Consumer Sensory Evaluations (Karen Buros, Radius Global Market Research)

  • Discussed the issue that consumer product evaluations on taste, smell, texture, color and other sensory perceptions lack specificity.
  • Penalty analysis is the traditional approach; however, it can yield conflicting directions for product improvement (e.g., overall flavor too strong and overall flavor too weak at the same time). (A toy example follows this list.)
  • Tried to understand product perceptions using minimally verbal (bipolar) scales: have respondents rate the product on multiple dimensions, then use Latent Class regression to derive respondent-level coefficients measuring the impact of each attribute on purchase intent.
  • Issues = multicollinearity, sensory attribute interactions, lacking chemists on staff (had to opt for perceptions).
  • Jury still out on effectiveness.
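
For reference, here is a toy penalty analysis on hypothetical just-about-right (JAR) data, showing how both “too weak” and “too strong” can carry penalties at the same time:

```python
# A toy penalty analysis (hypothetical data) for one just-about-right (JAR)
# attribute: the "penalty" is how much mean overall liking drops for people who
# rated the flavor "too weak" or "too strong" versus "just about right".

import pandas as pd

df = pd.DataFrame({
    "flavor_jar": ["too weak", "just right", "too strong", "just right",
                   "too weak", "too strong", "just right", "just right"],
    "overall_liking": [5, 8, 4, 9, 6, 5, 8, 7],   # e.g., 9-point hedonic scale
})

jar_mean = df.loc[df.flavor_jar == "just right", "overall_liking"].mean()

summary = (df[df.flavor_jar != "just right"]
           .groupby("flavor_jar")["overall_liking"]
           .agg(n="size", mean_liking="mean"))
summary["penalty"] = jar_mean - summary["mean_liking"]
summary["pct_of_sample"] = summary["n"] / len(df)
print(summary)
# If both "too weak" and "too strong" carry sizable penalties, the analysis
# gives conflicting direction -- exactly the problem described in the talk.
```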

Policy Implications on the Diffusion of Alternative Fuel Vehicles: An Agent-Based Modeling Approach (Rosanna Garcia, Northeastern University, and Ting Zhang, Xi’an Jiaotong University)

  • Used agent-based modeling to model the interactions between technology push, regulatory push and market pull on eco-innovation.
  • Used conjoint data (discrete choice) to gauge market pull.
  • Used NetLogo to do agent-based modeling (ABM).
  • Found conjoint data to be a great way to instantiate and validate the ABM. (A minimal sketch of the idea follows this list.)
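
Their model was built in NetLogo; the minimal Python sketch below (with made-up numbers) just illustrates the general recipe of seeding agents with conjoint-style preferences and letting regulatory push and peer influence drive adoption.

```python
# A minimal sketch (in Python rather than NetLogo, with hypothetical numbers) of
# the general idea: agents carry conjoint-style part-worths, add a social
# influence term from peers' adoption, and decide to adopt via a logit rule.

import math
import random

random.seed(1)
N, PERIODS = 200, 10

# Heterogeneous baseline preference for an alternative-fuel vehicle (AFV);
# in the real study these would be instantiated from conjoint utilities.
agents = [{"afv_pref": random.gauss(-1.0, 1.0), "adopted": False} for _ in range(N)]

SOCIAL_WEIGHT = 2.0   # market pull from peers' adoption share
SUBSIDY = 0.5         # regulatory push (utility boost from a purchase subsidy)

for t in range(PERIODS):
    share = sum(a["adopted"] for a in agents) / N
    for a in agents:
        if a["adopted"]:
            continue
        u_afv = a["afv_pref"] + SUBSIDY + SOCIAL_WEIGHT * share
        p_adopt = 1.0 / (1.0 + math.exp(-u_afv))
        if random.random() < p_adopt * 0.3:   # only a fraction shop each period
            a["adopted"] = True
    print(f"period {t + 1}: AFV share = {share:.2f}")
```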

The Impact of Respondents’ Physical Interaction with the Product on Adaptive Choice Results (Bob Goodwin, Lifetime Products)

  • Wanted to determine the potential impact of respondents’ physical interaction with the product on the precision of adaptive choice-based conjoint (ACBC) results.
  • Split-sample ACBC studies of folding banquet tables and chairs were conducted using online and mall-intercept field methods. Market simulation results were then validated using actual product sales and market share distributions.
  • For tables, “touch and feel” less important than previously thought. In-person concentrated more on leg style, online more on size/shape.
  • For chairs, mall “butt test” didn’t look that important; however, the “mesh” chair was impossible to validate as it had no sales data. Only very minor differences in attribute importances.
  • Concluded that cost-benefit for in-person tests not justified, as online method had less prediction error for both tables and chairs.
  • Note, however, that new, innovative, or unfamiliar products may need to be demonstrated in person.

Using Eye Tracking and Mouselab to Examine How Respondents Process Information in CBC (Martin Meissner, Soren Scholz, & Reinhold Decker, Bielefeld University)

  • Asks whether adding eye-tracking or mouselab data to choice-based conjoint (CBC) could improve modeling.
  • It was somewhat challenging to match up eye tracking data directly to attributes. And mouselab introduces a different cognitive process, as the image of the attribute level only appeared when a respondent moused over it, and disappeared when not moused over.
  • Found that adding eye-tracking data did improve models, especially when the intensity of tracking on each attribute was quantified and incorporated into the model. (One possible construction is sketched after this list.)
  • Concluded that eye tracking data can be used to qualitatively analyze how respondents approach purchase decisions, cross-check and validate CBC utilities and importances, check whether the relevance of attributes and way of information processing changes during interviews, and significantly improve choice models.
  • Also learned that only a minority of respondents consistently apply simple decision heuristics (like focusing on price only); that repeated measurements do not seriously harm information processing; that warm-up tasks facilitate a more holistic evaluation of alternatives; and that the design of the task affects respondents’ information acquisition behavior (more complexity changes behavior).
  • Issues = adding person-specific data can lead to overfitting the model. The cognitive processes are still somewhat unknown – need fMRIs to really tell. Eye tracking is currently expensive to implement and adds a lot of time to the task.
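
The authors’ exact model specification isn’t in my notes; one simple construction in the spirit of “quantify and incorporate the intensity of tracking” is to weight each attribute’s part-worth contribution by the respondent’s relative fixation share, as in this hypothetical sketch:

```python
# A rough sketch (my own construction, not the authors' specification) of one
# way to fold eye-tracking intensity into a choice model: weight each
# attribute's part-worth contribution by the respondent's relative fixation
# share on that attribute before computing choice probabilities.

import math

partworths = {                     # hypothetical CBC part-worth utilities
    "price":   {"$199": 0.8, "$299": -0.8},
    "battery": {"10h": 0.5, "6h": -0.5},
    "weight":  {"1kg": 0.3, "2kg": -0.3},
}

fixation_ms = {"price": 2400, "battery": 900, "weight": 300}   # hypothetical
total = sum(fixation_ms.values())
attention = {a: ms / total for a, ms in fixation_ms.items()}   # relative shares

def utility(profile, weight_by_attention=True):
    return sum(
        partworths[attr][lvl] * (len(attention) * attention[attr] if weight_by_attention else 1.0)
        for attr, lvl in profile.items()
    )

alt_a = {"price": "$299", "battery": "10h", "weight": "1kg"}
alt_b = {"price": "$199", "battery": "6h",  "weight": "2kg"}

for weighted in (False, True):
    ua, ub = utility(alt_a, weighted), utility(alt_b, weighted)
    p_a = math.exp(ua) / (math.exp(ua) + math.exp(ub))
    print(f"attention-weighted={weighted}:  P(choose A) = {p_a:.2f}")
# A respondent who fixates mostly on price ends up with an even stronger pull
# toward the cheaper alternative B once attention weights are applied.
```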