Highlights from the Sawtooth Software Conference 2010 – Day 3

For the morning sessions, the main conference was joined by the attendees of the conjoint analysis in healthcare conference that is running here in parallel.

The Value of Conjoint Analysis in Healthcare for the Individual Patient (Liana Fraenkel, Yale School of Medicine, VA Connecticut Healthcare System)

  • Looked at conjoint as a way to elicit patient preferences in low certainty situations.
  • Research shows that eliciting patient preferences can have some positive effects.
  • Conjoint analysis is a natural fit, given the trade-off approach.
  • Used Adaptive Conjoint Analysis to provide individual-level interviews.
  • Also looked at using MaxDiff, although some respondents appeared to have difficulty with the best-worst trade-off, so they tried a best-only approach.
  • Pros = works at the individual patient level, can handle lots of info, can provide immediate feedback, trade-offs are like real life, discourages rating all features equally.
  • Challenges = hard to get independent attributes, hard to specify attribute levels and their ranges, dominant attributes can appear, and the task can be difficult for respondents.
  • Reality = patients are so rarely given choices that they don’t know how to react; MD buy-in is needed; and there can be discordance between patient preferences and what the MD thinks should happen.

Tailoring Treatment Based on Preference Values (Marsha Wittink, Univ. of Rochester School of Med, Univ. of Pennsylvania School of Med)

  • Goal was to use conjoint to identify which attributes of treatments are most important to help design better interventions.  Focus here was on patients with depression.
  • Used Hierarchical Bayes to calculate individual-level utilities, and also used latent profile analysis to look for unique groups (a minimal sketch follows this list).
  • Did find unique groups preferring different treatment modalities.
  • Plan to use data to assess whether these preferences are better predictors of treatment uptake than other demographics.  Could also use to tailor treatments.
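
As an illustration of the kind of two-step pipeline described here, the following is a minimal sketch: it assumes the individual-level partworths have already been estimated (the matrix below is random stand-in data) and uses scikit-learn's GaussianMixture as a close analogue of latent profile analysis. The 3-profile choice and matrix sizes are made up for illustration.

    # Sketch: individual-level HB partworths (random stand-ins here)
    # fed into a latent-profile-style mixture model. A Gaussian mixture
    # with diagonal covariances is a close analogue of latent profile
    # analysis; the 3-profile choice is illustrative only.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    partworths = rng.normal(size=(500, 6))  # respondents x attribute-level utilities

    lpa = GaussianMixture(n_components=3, covariance_type="diag",
                          random_state=0).fit(partworths)
    profile = lpa.predict(partworths)       # hard class assignment per respondent
    print(np.bincount(profile))             # respondents per profile
    print(lpa.means_.round(2))              # profile-level mean partworths

In practice the number of profiles would be chosen by comparing fit criteria (e.g. BIC) across candidate models rather than fixed in advance.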

Conjoint Design Effect on Respondent Engagement (Paul Johnson, Western Wats)

  • Looked at CBC tasks with 20 vs 30 cards, and also at Adaptive CBC – looked at placement of conjoint task before or after other survey questions.
  • No differences in the way respondents answered other questions based on type or order of conjoint.
  • Purchase intent higher with ACBC task – might put respondent more in the mood to buy.
  • Respondents did NOT speed through the rest of the survey after doing the conjoint first.
  • Time spent on the conjoint task was longer for ACBC.
  • Few to no differences in measures of consistency, hit rates, or error levels in model.
  • ACBC did best job of predicting a winning holdout concept.
  • Other benefits of ACBC: get explicit non-compensatory rules stated by respondents, and get more stable model estimates with smaller Ns.

Sales Promotions in Conjoint Analysis (Marco Hoogerbrugge & Eline van der Gaast, SKIM Analytical)

  • Looked at the best ways to present price promotions for testing in conjoint analysis.
  • Ideas of ways to display:
    • Original (gross) price + % or $ off (when modeling, need to assign levels to the final price).
    • Original (gross) price NOT shown; the promoted (net) price is the only thing shown.
    • Original (gross) price shown together with the highlighted promotion price.
  • Could model just the net price, keep the original price as a main effect, or include interactions between original and promotion price in the model (a sketch follows this list).
  • Modeling would benefit from external data about actual purchase behavior.
  • Note that you’ll need to model differently based on which display approach you take.
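
To make those modeling options concrete, here is a minimal sketch of how the three price codings change a multinomial logit utility, comparing a promoted concept against a regular-price one. All coefficients and prices are invented for illustration and are not taken from the paper.

    # Sketch of three price codings in an MNL utility; coefficient
    # values and prices are made up for illustration.
    import numpy as np

    def logit_shares(utils):
        e = np.exp(utils - np.max(utils))  # numerically stable MNL shares
        return e / e.sum()

    orig, net = 10.0, 8.0                  # gross price and promoted (net) price
    disc = orig - net

    # Option A: code only the final (net) price.
    uA = np.array([-0.30 * net, -0.30 * orig])
    # Option B: original price as a main effect plus a promotion effect.
    uB = np.array([-0.30 * orig + 0.25 * disc, -0.30 * orig])
    # Option C: add an original-price x discount interaction, so the same
    # discount can matter more (or less) at higher gross prices.
    uC = np.array([-0.30 * orig + 0.25 * disc + 0.02 * orig * disc,
                   -0.30 * orig])

    for name, u in [("net only", uA), ("main effects", uB), ("interaction", uC)]:
        print(name, logit_shares(u).round(3))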

How Many Questions Should You Ask in CBC Studies? – Revisited Again (Jane Tang & Andrew Greenville, Vision Critical)

  • Past (and more recent) research
    • Johnson & Orme (1996): ask up to 20 tasks; in later tasks brand becomes less important, price more important, and more likely “none” choice.
    • Hoogerbrugge & van der Wagt (2006): increase in hit rate after 10-15 tasks is small; complexity of study more influences hit rate.
    • Markowitz & Cohen (2001): HB hit rates not greatly enhanced by increasing sample size; more choice sets better than more sample.
    • Suresh & Conklin (2010): complex survey design leads to lower respondent engagement; more complex attributes lead to choosing “none” more often and to more price reversals.
    • Hauser, Gaskin & Ding (2009): Non-compensatory rules used when more time pressure, more products, and more familiarity with the category.
  • The current study compared 3 CBC designs: 6 cards with 3 options, 15 cards with 3 options, and 15 cards with 5 options.
  • Conclusions:
    • Increasing the # of tasks gave limited improvement in prediction ability, but at the cost of a slight deterioration in sensitivity and consistency.
    • Simplifying behavior occurs in later tasks – more likely so when the respondent is familiar with the category.
    • Increasing the complexity of the task (showing 5 vs 3 options) doesn’t help anything.
    • The balance is this: with a small # of tasks, individual-level models sometimes don’t converge, so more tasks are needed for precision; but as tasks are added, reliability goes down – a balance must be struck.

The Strategic Importance of Accuracy in Conjoint Design (Matthew Selove, USC & John Hauser, MIT Sloan)

  • Looked at what happens when we have “noise” in a sample versus “heterogeneity”.
  • Noise leads to less differentiation in product decisions, whereas heterogeneity encourages differentiation.
  • Compared a “well” and a “poorly” designed conjoint study.
  • Poor designs lead to more noise, which leads to inconsistency in the conjoint task and less ability to validate to hold-outs.
  • Need to accurately estimate randomness.  If you have no hold-outs to validate against, Louviere recommends tuning the model to an exponent of 0.4 (see the sketch below).
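
For anyone unfamiliar with exponent tuning, this is a minimal sketch of how a scale exponent applied to the utilities flattens simulated shares; the utilities here are made up, and only the 0.4 value comes from the talk.

    # Sketch of exponent tuning: the exponent scales the utilities before
    # the logit transform, so values below 1 flatten the simulated shares
    # toward equality (i.e., they assume more randomness in choice).
    import numpy as np

    def shares(utils, exponent=1.0):
        e = np.exp(exponent * (utils - np.max(utils)))
        return e / e.sum()

    u = np.array([1.2, 0.4, -0.8])            # illustrative concept utilities
    print(shares(u, exponent=1.0).round(3))   # raw model scale
    print(shares(u, exponent=0.4).round(3))   # dampened, more uniform shares

When holdouts are available, the exponent would instead be tuned so simulated shares best match the holdout or market shares.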

Product Portfolio Evaluation Using Choice Modeling and Genetic Algorithms (Chris Chapman, PhD & James Alford, PhD, Microsoft)

  • With conjoint data, we know how to optimize a product, but what about a product line?
  • Took CBC and ACBC data, derived individual-level partworths using an HB model, used a genetic algorithm to generate and evaluate many candidate portfolios, and inspected the results (a toy sketch follows this list).
  • Took 1080 possible products (based on 9 attributes with 2-7 levels) and found that after 6-8 products in the portfolio there was no more increase in share of preference.
  • Also asked which products appeared in a large number of winning portfolios – found a couple “new” products this way.
  • Other findings: CBC had much noisier price data than did ACBC; ACBC had more stakeholder face validity, needed smaller sample sizes, and produced better respondent engagement.
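
A toy sketch of the portfolio search idea: it assumes a matrix of respondent-by-product utilities has already been computed from the partworths (random stand-ins here), and it uses the share of respondents whose best in-portfolio product beats a “none” threshold as the fitness. The GA details (elitism plus one-product mutation) are simplifications, not the authors' actual implementation.

    # Toy GA portfolio search over random stand-in utilities; all sizes
    # are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    N, P, K = 400, 60, 6                   # respondents, candidate products, portfolio size
    util = rng.normal(size=(N, P))         # respondent x product utilities
    none_u = rng.normal(0.5, 0.5, size=N)  # per-respondent outside option

    def fitness(portfolio):
        best = util[:, portfolio].max(axis=1)
        return (best > none_u).mean()      # share of respondents captured

    def mutate(portfolio):
        child = portfolio.copy()
        child[rng.integers(K)] = rng.integers(P)  # swap one product
        # Reject mutations that create duplicate products in a portfolio.
        return child if len(np.unique(child)) == K else portfolio

    pop = [rng.choice(P, K, replace=False) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:20] + [mutate(p) for p in pop[:20]]  # elitism + mutation
    best = max(pop, key=fitness)
    print(round(fitness(best), 3), sorted(best.tolist()))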

The Impact of Covariates on HB Estimates (Keith Sentis & Valerie Geller, Pathfinder Strategies)

  • Note: this presentation and the next were probably the most controversial and shocking to most of the conference participants.
  • Method: estimate HB partworths with and without a covariate, then compare the quality of the two sets of partworths; repeat for several different covariates, one at a time, across 5 datasets (a sketch of the validation measures follows this list).
    • Looked at 3 classes of covariates: demographic variables, category behavior variables, and attitudinal variables.
    • Measures of fit and predictive accuracy: RLH, hit rate, holdout likelihood, and MAE (error).
    • Measures of partworth variability: importance spread and standard deviation ratio.
  • Conclusion: NO lift in predictive accuracy by using covariates; did see some increase in partworth variability.
  • Discussion: should covariates be in our tool kits?  This paper says NO, but why might we want to include them?  Carefully chosen covariates can provide insights by subgroup.
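
For reference, two of those validation measures are simple to state in code. This sketch assumes predicted holdout utilities and observed choices are already in hand; all inputs below are random stand-ins.

    # Sketch of two validation measures: holdout hit rate (does the
    # respondent's highest-utility holdout concept match the one actually
    # chosen?) and MAE between predicted and observed shares.
    import numpy as np

    def hit_rate(pred_utils, chosen):
        return (pred_utils.argmax(axis=1) == chosen).mean()

    def mae(pred_shares, actual_shares):
        return np.abs(np.asarray(pred_shares) - np.asarray(actual_shares)).mean()

    rng = np.random.default_rng(2)
    utils = rng.normal(size=(300, 4))    # respondent x holdout-concept utilities
    chosen = rng.integers(4, size=300)   # observed holdout choices
    print(round(hit_rate(utils, chosen), 3))
    print(mae([0.40, 0.30, 0.20, 0.10], [0.35, 0.30, 0.25, 0.10]))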

Added Value through Covariates in HB Modeling (Peter Kurz, TNS Infratest Forschung GmbH, and Stefan Binner, bms marketing research + strategy)

  • Since standard HB assumes one single multivariate normal population, respondents’ estimates get “shrinkage” toward the population mean, and therefore segment differences can be reduced.  So, how about adding covariates to the upper-level model (a sketch follows this list)?
  • Method: looked at 10 commercial studies, 30,000+ conjoint interviews, B2B, B2C, worldwide; looked at natural (demographic), segmentation membership, and intention or past behavior data as covariates.
  • Conclusion: they ran all kinds of models, from standard HB to HB with covariates to Latent Class, etc., and HB with covariates did better only on 2 studies.
  • Recommendations: ensure sufficient sample size, use standard HB, use holdout tasks.
  • Discussion: if you have clearly defined clusters, covariates could help; preserving more heterogeneity can help with market share projections, line extension and optimization decisions, and estimates of willingness to pay, and it mitigates IIA (the red bus/blue bus problem).
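
A minimal sketch of the upper-level change being proposed: standard HB draws each respondent's partworths around one grand mean, while the covariate version shifts that mean with respondent characteristics, so shrinkage pulls toward the relevant subgroup rather than the grand mean. Dimensions and values below are illustrative only.

    # Sketch of the upper-level model with and without covariates.
    import numpy as np

    rng = np.random.default_rng(3)
    K = 4                                  # number of partworths
    z = np.array([1.0, 0.0, 1.0])          # intercept + two dummy covariates
    alpha = rng.normal(size=K)             # grand mean (standard HB)
    theta = rng.normal(size=(len(z), K))   # covariate weights (HB + covariates)
    Sigma = 0.5 * np.eye(K)

    beta_plain = rng.multivariate_normal(alpha, Sigma)    # beta_i ~ N(alpha, Sigma)
    beta_cov = rng.multivariate_normal(z @ theta, Sigma)  # beta_i ~ N(theta' z_i, Sigma)
    print(beta_plain.round(2), beta_cov.round(2))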

Regarding these last two papers, everyone really wanted to believe that covariates could be helpful, but the evidence argued against any improvement in predictive accuracy from using them.
