Maximum Difference Scaling Provides Better Importance Data


On October 28, Survey Analytics provided a glimpse behind the curtain into a not-so-new technique for getting into the mind of the respondent. We have long used rating scales to learn what consumers and prospects see as important, yet this approach is fraught with peril. This is where maximum difference scaling comes into play.

MaxDiff, as it is known, avoids much of the concern with rating scales, primarily the tendency to rate everything as important. Common rating scales leave themselves open to a lack of variance; in other words, respondents often find it difficult at best to adequately describe which attribute is more important than another. This can be seen in “straight-lining,” where respondents simply give all attributes the “most important” rating.

MaxDiff scaling (also known as best/worst) avoids this by presenting respondents with a series of tasks where they are asked to select the most important and the least important attribute from a list of 3 to 5 attributes. All attributes are exposed to the respondent approximately an equal number of times.

The number of tasks should allow for each attribute to be exposed 3 to 5 times. A common rule to follow is 3K/k where K is the total number of attributes and k is the number of attributes displayed in each task. If you are comparing 20 attributes and allow for 4 attributes per task you would need (3 * 20)/4 = 15 tasks.
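The task-count rule and a simple exposure-balanced design can be sketched in a few lines of Python. This is a minimal illustration, not a production experimental design: the attribute names are made up, and the shuffle-and-chunk plan assumes the number of attributes per task divides the total evenly (as in the 20-and-4 example above).

```python
import math
import random

def maxdiff_task_count(total_attrs, attrs_per_task, exposures=3):
    """Rule of thumb from the article: tasks = exposures * K / k."""
    return math.ceil(exposures * total_attrs / attrs_per_task)

def build_tasks(attributes, attrs_per_task, exposures=3, seed=42):
    """Naive balanced plan: shuffle the full list once per exposure round,
    then chunk it into tasks. Each attribute appears exactly `exposures`
    times overall and never twice within one task (assumes k divides K)."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(exposures):
        pool = list(attributes)
        rng.shuffle(pool)
        for i in range(0, len(pool), attrs_per_task):
            tasks.append(pool[i:i + attrs_per_task])
    return tasks

# Hypothetical study: 20 attributes, 4 shown per task
attrs = [f"attr_{i}" for i in range(1, 21)]
n_tasks = maxdiff_task_count(20, 4)   # (3 * 20) / 4 = 15
tasks = build_tasks(attrs, 4)
```

Running this yields 15 tasks in which every attribute is shown exactly three times. Real MaxDiff software uses more sophisticated designs (balancing co-occurrence of pairs, not just exposure counts), but the counting logic is the same.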

This technique does not allow for straight-lining; instead, as its name indicates, it generates a maximum amount of variation. The example below compares a traditional method (top-2-box) with data generated from a MaxDiff exercise, highlighting the percentage of time each attribute was rated as most important.

SurveyAnalytics MaxDiff example
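The “percentage of time rated most important” column in an example like this comes from straightforward counting: divide the number of times an attribute was picked as best by the number of times it was shown. A minimal sketch, using made-up attribute names and choice records:

```python
from collections import Counter

# Hypothetical choice records: (attributes shown, best pick, worst pick)
responses = [
    (("price", "quality", "speed", "support"), "price", "speed"),
    (("price", "brand", "speed", "support"), "price", "support"),
    (("quality", "brand", "speed", "support"), "quality", "speed"),
]

shown = Counter()
best = Counter()
for attrs_shown, best_pick, worst_pick in responses:
    shown.update(attrs_shown)   # tally exposures per attribute
    best[best_pick] += 1        # tally "most important" picks

# Share of exposures in which each attribute was the most-important pick
pct_best = {a: best[a] / shown[a] for a in shown}
```

Because every pick is forced relative to the other attributes on screen, these percentages spread out far more than top-2-box ratings do.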

Some systems that employ the MaxDiff technique also allow individual-level scores to be created through Hierarchical Bayes analysis. These individual scores can then be used as inputs to segmentation or factor analysis.
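Hierarchical Bayes estimation itself is beyond a quick sketch, but a simple count-based stand-in (best picks minus worst picks, tallied per respondent) conveys what “individual-level scores” means. The respondent IDs and attributes below are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-respondent picks: (respondent_id, best pick, worst pick)
choices = [
    ("r1", "price", "speed"),
    ("r1", "price", "support"),
    ("r2", "quality", "price"),
    ("r2", "speed", "price"),
]

# Count-based stand-in for HB utilities: +1 for each best pick,
# -1 for each worst pick, kept separate for every respondent.
scores = defaultdict(lambda: defaultdict(int))
for rid, best_pick, worst_pick in choices:
    scores[rid][best_pick] += 1
    scores[rid][worst_pick] -= 1
```

Each respondent ends up with their own score vector (here, r1 favors price while r2 penalizes it), which is the kind of row-per-respondent input a segmentation or factor analysis expects.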

Maximum difference scaling not only provides greater differentiation, but also avoids the cultural biases associated with how respondents answer rating scales. The technique removes brand halo effects and offers better predictive validity. It can be used anywhere a market researcher would use a matrix rating scale. While MaxDiff does take longer for respondents to complete, this is not necessarily a bad thing: respondents often speed through matrix (grid) questions, resulting in lower data quality.

In summary, maximum difference scaling provides a more robust option for eliciting what consumers consider to be important.

Greg Timpany directs the research efforts for Global Knowledge in Cary, North Carolina, and runs Anova Market Research. You can follow him on Twitter @DataDudeGreg.

