Order Bias Is a Larger Source of Error Than You Think

Two hands shuffling a deck of cards, seen from the dealer's point of view.

Last week I was writing a questionnaire for a client using their survey software account, and I was chagrined to discover that it lacked the ability to randomize the display of items in a choice list. This capability is standard in most modern survey software applications, including QuestionPro, Survey Analytics, Google Consumer Surveys, and more. Not randomizing your choice lists can introduce significant error into your results: the impact of order bias can exceed even the margin of sampling error.

As an example, the General Social Survey (GSS), way back in 1984, showed respondents in a face-to-face interview a card, then asked them: "The qualities listed on this card may all be important, but which three would you say are the most desirable for a child to have?"

  1. … has good manners (MANNER)
  2. … tries hard to succeed (SUCCESS)
  3. … is honest (HONEST)
  4. … is neat and clean (CLEAN)
  5. … has good sense and sound judgment (JUDGMENT)
  6. … has self-control (CONTROL)
  7. … he acts like a boy or she acts like a girl (ROLE)
  8. … gets along well with other children (AMICABLE)
  9. … obeys his parents well (OBEY)
  10. … is responsible (RESPONSIBLE)
  11. … is considerate of others (CONSIDERATE)
  12. … is interested in how and why things happen (INTERESTED)
  13. … is a good student (STUDIOUS)

The three choices selected most often were Honest (chosen by 66% of respondents), Judgment (39%), and Responsible (34%).

Unless you reversed the order of the choices on the card.

In which case, the top three choices were Honest (48%, a 17-point decrease), Judgment (41%, a 2-point increase), and Considerate (40%, a 15-point increase).

In fact, response order produced an average difference of ±6.5% across the 13 items, as large as the margin of sampling error at the 95% confidence level for a probability survey of U.S. adults with 230 respondents. But even this may understate the problem: the average difference was ±11.6% for the six items that landed in the top three and bottom three of the list (the other items fell in the middle in both versions of the card).
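As a sanity check on that comparison (a sketch of my own, not from the original study), the maximum margin of sampling error for a proportion at 95% confidence works out to roughly ±6.5 points for a sample of 230:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of sampling error for a proportion p with sample size n.

    Uses the normal approximation; p=0.5 gives the worst case (widest margin),
    and z=1.96 corresponds to the 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For n = 230, the worst-case margin is about 6.5 percentage points --
# the same size as the average order-bias effect in the GSS experiment.
print(round(margin_of_error(230) * 100, 1))  # prints 6.5
```

In other words, the error introduced by not randomizing can be just as large as the sampling error that everyone dutifully reports.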

Analyzing these data in the paper "An Evaluation of a Cognitive Theory of Response Order Effects in Survey Measurement," Jon Krosnick and Duane Alwin found that choices presented earlier in the list were disproportionately likely to be selected. Summarizing past research with similar findings, they offer two reasons for this primacy effect:

  1. “Items presented early may establish a cognitive framework or standard of comparison that guides interpretation of later items. Because of their role in establishing the framework, early items may be accorded special significance in subsequent judgments.
  2. “Items presented early in a list are likely to be subjected to deeper cognitive processing; by the time a respondent considers the final alternative, his or her mind is likely to be cluttered with thoughts about previous alternatives that inhibit extensive consideration of it. Research on problem-solving suggests that the deeper processing accorded to early items is likely to be dominated by generation of cognitions that justify selection of these early items. Later items are less likely to stimulate generation of such justifications (because they are less carefully considered) and may therefore be selected less frequently.”

And, yes, this has been replicated when doing online surveys.

Accordingly, when fielding web surveys, always randomize when appropriate: for multiple-choice questions (as opposed to scale questions), randomize the order of the choices whenever they have no logical or inherent order. Most modern survey software applications also let you anchor a "None of the above" option to the bottom of the list for select-all-that-apply questions.

(For dropdowns with long lists that respondents will skim, such as alphabetical lists of states, provinces, and countries, there is no need to randomize the order. Doing so would only confuse respondents.)
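The randomize-but-anchor behavior described above can be sketched in a few lines. This is an illustration of the technique, not any particular survey tool's implementation; the function name and the anchored-item convention are my own:

```python
import random

def randomize_choices(choices, anchored=("None of the above",)):
    """Shuffle a choice list, keeping any anchored items fixed at the bottom.

    Items in `anchored` (e.g. "None of the above") keep their position at the
    end of the list; everything else is shuffled uniformly at random.
    """
    movable = [c for c in choices if c not in anchored]
    fixed = [c for c in choices if c in anchored]
    random.shuffle(movable)  # uniform random order for the substantive choices
    return movable + fixed

# Example: the first three options appear in random order per respondent,
# but "None of the above" always stays last.
options = ["Price", "Quality", "Brand reputation", "None of the above"]
print(randomize_choices(options))
```

Survey platforms apply the same idea per respondent, so each person sees an independently shuffled list.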

Randomizing choice lists is one of the easiest ways at your disposal to greatly improve the quality of your survey data.

Jeffrey Henning, PRC, is president of Researchscape International, which provides “Do It For You” custom surveys at Do It Yourself prices.  He is a Director at Large on the Marketing Research Association’s Board of Directors. You can follow him on Twitter @jhenning.

  • Christina

    Thank you for this article and highlighting the importance of order bias – something that (provided the software allows it) seems to be easy to fix.

    Reading this: “by the time a respondent considers the final alternative, his or her mind is likely to be cluttered with thoughts about previous alternatives” I wonder whether there is any research on how the number of items influences answer choices? Put differently: When does a list become too long? Is there an “ideal” length?

    Thanks.

  • Jeffrey Henning

    I don’t think it’s possible to exert much control over the length of such lists; they typically represent ranges of possible choices and try to err on the side of being comprehensive. So, whatever the list size, randomize!