Evaluating the Effectiveness of Respondent Quality Control Checks


On January 30, 2014, Quirk’s magazine presented a timely webinar on the evaluation of respondent quality control measures. Keith Phillips of Survey Sampling International presented his research outlining a method for testing the effectiveness of commonly used quality control questions and metrics.

Keith highlighted three requirements for data quality: 1) a representative sample, 2) solid survey design appropriate for the intended audience, and 3) participants who are willing and able to give honest answers. A failure in any of the three can compromise data quality. He pointed out that data quality suffers when the questionnaire is poorly designed, when the interview runs longer than 20 minutes, when the topics are boring, and when respondents are untruthful. The first three factors rest in the researcher's hands; the fourth lies with the respondent. Yet the onus remains on the researcher to identify respondents who are less than truthful.

A bit about the data set: it included a representative sample of 2,100 US adults. The survey contained 12 quality-control (QC) questions and 3 non-question QC measures; the latter captured straight-lining, speeding, and nonsense answers to open-ended questions. The questionnaire also included benchmark questions drawn from large-scale national surveys for comparison purposes.

The presenter raised an interesting hypothesis: people may become disengaged for a moment, but that does not mean they are disengaged throughout the survey. A single quality check may inadvertently eliminate an otherwise useful respondent.

Keith presented several types of QC checks. The misdirect is a common QC question inserted into surveys. This trap might provide a lengthy intro and then take a left turn, misdirecting the respondent by telling them to select a specific response that contradicts the earlier instructions. Misdirects are useful for identifying poor-quality respondents, but they come at a price: they also capture a significant number of otherwise quality respondents.

Other types of QC checks presented include:

- Conflicting answers: "Do you own a dog?" followed by "How many dogs do you own?", with responses of 0 flagged.
- Straight-line grid checks: looking for the same answer across all grid questions.
- Fake names or red herrings: inserting an obviously false response option.
- Open-end checks: flagging nonsensical answers to unstructured questions.
- Speeding: comparing a respondent's completion time against that of similar respondents.
- Low-incidence items: asking whether the respondent has been to a distant location to which very few people travel.
- Instructed-response grid items: requiring the respondent to select a specific response.
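To make the checks above concrete, here is a minimal sketch (not from the webinar; field names, thresholds, and record layout are all hypothetical) of how three of them might be computed for a single respondent record:

```python
# Illustrative QC flags for one respondent record.
# All field names and thresholds are assumptions for the sketch.

def straight_lined(grid_answers):
    """Flag a grid where every item received the same answer."""
    return len(set(grid_answers)) == 1

def is_speeder(duration_sec, median_sec, threshold=0.4):
    """Flag respondents finishing far faster than similar respondents
    (here: under 40% of the comparison-group median)."""
    return duration_sec < threshold * median_sec

def conflicting_dog_answers(owns_dog, num_dogs):
    """Flag 'yes, I own a dog' followed by a count of zero."""
    return owns_dog and num_dogs == 0

record = {"grid": [3, 3, 3, 3, 3], "duration": 95,
          "owns_dog": True, "num_dogs": 0}

flags = {
    "straight_line": straight_lined(record["grid"]),
    "speeding": is_speeder(record["duration"], median_sec=300),
    "conflict": conflicting_dog_answers(record["owns_dog"],
                                        record["num_dogs"]),
}
print(flags)  # all three flags fire for this record
```

Each check yields only a boolean flag per respondent; as the presenter's findings suggest, the decision to exclude should not hang on any single one of them.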

At the end of the day, Keith singled out misdirects as the worst performers because they capture both good and poor-quality respondents. He did not call for abandoning QC checks, but suggested researchers use 3 or 4 checks and exclude only those participants who failed all of them.
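That combination rule is simple to express in code. The sketch below (an illustration, not the presenter's implementation) drops a respondent only when every check in the battery fails, so a momentary lapse on one trap question is not fatal:

```python
# Combine several QC flags; exclude only when ALL checks failed.
# The flag lists below are invented examples.

def should_exclude(flags, require_all=True):
    """flags: list of booleans, True meaning that QC check was failed."""
    return all(flags) if require_all else any(flags)

momentarily_distracted = [True, False, False, False]  # failed one misdirect
habitual_straightliner = [True, True, True, True]     # failed every check

print(should_exclude(momentarily_distracted))  # False: respondent kept
print(should_exclude(habitual_straightliner))  # True: respondent excluded
```

Passing `require_all=False` flips the rule to the stricter one-strike policy the research argues against, which makes the trade-off easy to compare on a real data set.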

Greg Timpany directs the research efforts for Global Knowledge in Cary, North Carolina, and runs Anova Market Research. You can follow him on Twitter @DataDudeGreg.

