Day 4 was a half-day, but it featured some very serious papers and great discussion – the marketing science community is alive and well here!
Modeling Demand Using Simple Methods: Joint Discrete/Continuous Modeling (Tom Eagle, Eagle Analytics of California)
- Discussed 4 approaches to volumetric modeling (finding out not only “what,” but “how many” of something respondents prefer).
- Examined Regression Models, Choice Models, Economic Models, and Joint Discrete/Continuous (D/C) Models, with a focus on the latter.
- D/C Models estimate in two stages: first fit an allocation Multinomial Logit model, then fit a general linear volume model using predictions from stage 1 as independent variables.
- Compared all methods on 4 relatively simple datasets.
- Summary of comparisons: Choice Modeling performs as well or better than the joint D/C models.
- Conclusions: Joint D/C volume modeling is a valid approach to complex volumetric problems. Because all these techniques are applied on the back-end, you can design your study the same way and try out different approaches once the data are collected.
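As a rough illustration of the two-stage D/C idea (not the authors' actual implementation), here is a minimal Python sketch on synthetic data, using scikit-learn's LogisticRegression as a stand-in for the allocation MNL and LinearRegression for the volume model; all data and coefficients below are invented:

```python
# Illustrative two-stage joint discrete/continuous sketch on synthetic data.
# Stage 1: allocation MNL (which alternative is chosen).
# Stage 2: linear volume model, with stage-1 predictions as regressors.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, k = 500, 3
X = rng.normal(size=(n, k))              # respondent/product covariates
B = rng.normal(size=(k, 3))              # invented utility coefficients
U = X @ B + rng.gumbel(size=(n, 3))      # latent utilities, 3 alternatives
choice = U.argmax(axis=1)                # observed discrete choice

# Stage 1: fit the allocation MNL and get predicted choice probabilities
mnl = LogisticRegression(max_iter=1000).fit(X, choice)
p = mnl.predict_proba(X)

# Stage 2: volume depends (here, by construction) on choice probabilities;
# stage-1 predictions enter the volume regression as independent variables
volume = 2.0 + 3.0 * p[:, 0] + rng.normal(scale=0.2, size=n)
vol_model = LinearRegression().fit(np.column_stack([X, p]), volume)
print(vol_model.score(np.column_stack([X, p]), volume))
```

The key point is simply the plumbing: stage-1 predicted probabilities become regressors in the stage-2 volume equation.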
Recent Developments in PLS Modeling: An Application for Customer Loyalty and Retention (Stuart Drucker, Drucker Analytics)
- This talk was about predictive analytics, specifically key driver analysis.
- The way NOT to solve this problem is using stepwise regression (TAKE NOTE: lots of people still do this!).
- Discussed Partial Least Squares (PLS) Regression and PLS Structural Equation Modeling (PLS-SEM). In these approaches, factor analysis is wedded to regression analysis in a unified framework (more confirmatory in the case of PLS-SEM). They better manage common issues, including multicollinearity.
- Conclusion: The decision of which model to accept is a philosophical one. If the ultimate focus is estimating a system of effects (including Customer Satisfaction and the Key Drivers), use PLS-SEM; if the focus is maximizing the explained variation in Customer Retention, use PLS regression.
A Head-to-Head Comparison of the Traditional (Top Down) Approach to Choice Modeling with a Proposed Bottom Up Approach (Don Marshall, TVG, Siu-Shing Chan, Univ. of Pennsylvania, and Joseph Curry, Sawtooth Technologies)
- Huge effort involving lots of people to set up this experiment.
- Based on Jordan Louviere’s recent assertion (2009) that, when using HB to measure preferences, we are largely capturing respondent inconsistency and need to stop modeling the way most people currently do.
- Compared current gold standard (Hierarchical Bayes (HB)) where individual preferences are influenced by group averages (called the “Top-Down” approach), to the “Bottom-Up” approach that examines individual preferences independent of group averages.
- In Top-Down, respondents see different choice sets and choose the best option, with a dual-response none follow-up, and are shown fewer screens.
- In Bottom-Up, respondents all see the same choice sets; they choose their top choice and their last choice, indicate whether all/some/none are acceptable, and are shown more screens.
- Conclusions: while the design and analysis criteria for bottom-up continue to evolve and improve, this analysis provides no compelling reason to recommend bottom-up over top-down at this point. Interview length and completion rates favor top-down.
HB-CBC, HB-Best-Worst or No HB at All? (Ralph Wirth, GfK Group) – Note: this paper won “best paper” for the conference.
- Concerns have been raised regarding CBC-HB, and this paper used Monte Carlo Simulations and 4 real-world datasets to find out if the concerns were justified.
- The Best-Worst idea in CBC is gaining interest – the idea is to show profiles and have respondents choose not only their most, but also their least preferred option. Compared this approach to standard CBC-HB and also a Louviere approach which asked for most preferred, least preferred, and an in-between choice, and did NOT use HB for analysis.
- Conclusions: no model was consistently superior based on fit.
- The Louviere approach is worth considering when data conditions are good and/or the focus is on share prediction rather than prediction of individual choices. Purely individual estimation makes it much simpler than HB approaches but seems detrimental when data conditions are sparse.
- The HB approach performs well overall, even under sparse data conditions; there is no negative influence of individual-specific error variances; and the results suggest that using the additional preference information from worst choices leads to better estimates, as Best-Worst CBC-HB is consistently superior to standard CBC-HB.
- Discussion: lots of opinions here, but some main takeaways are that:
- HB will not fall out of use anytime soon, as it appears to perform well under a number of situations.
- We need to model (simulate) against real-world outcomes whenever possible.
- Best-Worst CBC is emerging as something to keep exploring; however, there is potential to get into a lot of trouble during the analysis, and there is no commercial tool that provides a solution (other than doing completely custom-programmed analytics).
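To make the “purely individual estimation” contrast concrete, here is a toy sketch in the spirit of count-based best-worst scoring (my own invented data and formula, not any published procedure): for one respondent, each item is scored as (times chosen best minus times chosen worst) divided by times shown.

```python
# Toy individual-level best-worst counting scores for a single respondent.
# Each task records the items shown, the best pick, and the worst pick.
from collections import defaultdict

tasks = [
    (("A", "B", "C"), "A", "C"),
    (("A", "C", "D"), "A", "D"),
    (("B", "C", "D"), "B", "D"),
]

shown = defaultdict(int)   # how often each item appeared
score = defaultdict(int)   # best picks minus worst picks
for items, best, worst in tasks:
    for item in items:
        shown[item] += 1
    score[best] += 1
    score[worst] -= 1

# normalized best-worst score per item, ranked high to low
bw = {item: score[item] / shown[item] for item in shown}
print(sorted(bw.items(), key=lambda kv: -kv[1]))
```

Because each respondent is scored in isolation, there is no borrowing of strength from the group – which is exactly why this approach is simple, and also why it can suffer when each respondent provides only sparse data.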
- If you are involved in conjoint analysis, or other varieties of research that seek to understand preferences and choice behavior, this conference is a must-attend. The top minds in the field are all here, and they are pushing the boundaries to achieve better measurement of choice behavior. The conference occurs every 18 months, and information can be found at www.sawtoothsoftware.com.
- Note that Jordan Louviere, a key discussant at the conference, was the recipient of the AMA 2010 Parlin Marketing Research Award. See his interview in the 9/30/10 edition of Marketing News.
- Plus, the food was really good!