4 Kinds of Survey Error: Sampling, Measurement, Coverage and Non-Response


There are 4 generally accepted types of survey error.  By survey error, I mean any factor that reduces the accuracy of a survey estimate.

It’s important to keep each type of survey error in mind when designing, executing and interpreting surveys.  However, I suspect some of them are more ingrained in our thinking about research, while others are more often neglected.

Imagine if we interviewed 100 researchers and asked each of them (“Family Feud”-style) to name a type of survey error.

Which type of survey error do you think would be mentioned most frequently?  Which type would be most overlooked?

Here is my predicted order of finish in our hypothetical example.

Note for the “Feud”-challenged:  Number 1 represents the most commonly named type of error in our hypothetical survey of researchers, while number 4 represents the least commonly named.

1. Sampling Error.

My guess is that sampling error would be the most commonly named type of survey error.

In a recent Research Access post, “How to Plus or Minus: Understand and Calculate the Margin of Error,” I explained the concept of sampling error and gave 3 ways of calculating it.

Sampling error is essentially the degree to which a survey statistic differs from its “true” value because the survey was conducted among only one of many possible samples.  It is a degree of uncertainty that we are willing to live with.  Even most non-researchers have a basic understanding, or at least an awareness, of sampling error, thanks to the media’s references to the “margin of error” when reporting public survey results.
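To make the arithmetic concrete, here is a minimal Python sketch of the standard margin-of-error formula for a proportion from a simple random sample.  The function name and the sample figures are mine, chosen for illustration; see the post linked above for the fuller treatment.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for an estimated proportion p from a simple
    random sample of size n; z = 1.96 gives a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000 completed interviews:
print(f"+/- {margin_of_error(0.5, 1000) * 100:.1f} points")  # +/- 3.1 points
```

At the worst case of p = 0.5, a sample of 1,000 yields roughly plus-or-minus 3 points at 95% confidence, which is the figure most often quoted alongside public polls.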

2. Measurement Error.  

I believe measurement error would be the second most frequently named type of error.  Measurement error is the degree to which a survey statistic differs from its “true” value due to imperfections in the way the statistic is collected.  The most common type of measurement error is one researchers deal with on a daily basis:  poor question wording, with faulty assumptions and imperfect scales.

3. Coverage Error.

Coverage error is another important source of variability in survey statistics; it is the degree to which statistics are off due to the fact that the sample used does not properly represent the underlying population being measured.

There was generally more concern about coverage error in the past; these days, the combination of increasing internet penetration and fast/easy/cheap online survey panels has made it possible to represent many target populations accurately.  Coverage error is still an important conversation, but it is being had more in academic and thought-leadership circles than by the average day-to-day research practitioner.
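To illustrate why the conversation persists, here is a toy Python simulation, with entirely hypothetical numbers of my own choosing, showing how an online-only sampling frame can bias an estimate when the offline minority answers differently.  Note that taking a bigger sample does not close the gap.

```python
import random

random.seed(1)

# Hypothetical population of 100,000: 85% reachable online, 15% offline.
# Assume the offline group answers "yes" at a different rate (30% vs 50%).
population = [("online", random.random() < 0.50) for _ in range(85_000)] + \
             [("offline", random.random() < 0.30) for _ in range(15_000)]

true_rate = sum(ans for _, ans in population) / len(population)

# An online panel can only draw from the online portion of the population.
online_frame = [ans for mode, ans in population if mode == "online"]
estimate = sum(random.sample(online_frame, 1_000)) / 1_000

print(f"true rate: {true_rate:.3f}  online-only estimate: {estimate:.3f}")
# The persistent gap (about 0.03 here) is coverage error, not sampling noise.
```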

4. Non-Response Error.

My guess is that non-response error would be the least frequently named type of error in our hypothetical survey.  Telephone survey houses historically made 20 or more call-backs to households that did not answer the telephone.  This practice has dwindled due to a combination of the expense of conducting so many call-backs and the dramatic growth of online surveys, where it is simply easier to replace non-responders with fresh sample.  Nor is it considered acceptable in an online context to send scores of follow-up emails; that would get the sender blacklisted posthaste.
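The practical stakes can be expressed with the textbook approximation for non-response bias: the bias in the respondent mean is roughly the non-response share times the gap between respondents and non-respondents.  This short Python sketch uses hypothetical figures of my own choosing.

```python
def nonresponse_bias(response_rate: float,
                     respondent_mean: float,
                     nonrespondent_mean: float) -> float:
    """Textbook approximation: bias in the respondent mean equals the
    non-response share times the respondent/non-respondent gap."""
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# Hypothetical: 30% respond; responders average 7.5 on a satisfaction scale,
# while non-responders would have averaged 6.0.
print(nonresponse_bias(0.30, 7.5, 6.0))  # 1.05 points of upward bias
```

The formula also shows why replacing non-responders with fresh sample does not help: it raises the effective sample size without touching the respondent/non-respondent gap that drives the bias.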

About Dana Stanley

Dana is the Editor-in-Chief of Research Access.

Comments

  1. Kerry Butt says:

    You give short shrift to coverage and non-response error. It may be true to say that “the combination of increasing internet penetration and fast/easy/cheap online survey panels has made it possible to accurately represent many target populations”, but the lack of a single modality that reaches (virtually) the entire populace (as used to be the case for landline telephone) represents a huge challenge for the MR industry. Also, you imply in your section on non-response error that it’s OK to simply replace a non-responding element. This is not the case. The 20+ callbacks that you refer to were made because the thinking at the time was that a sample element should be replaced only if absolutely necessary. MR today does not seem particularly concerned with sample management.

  2. Kerry, thanks for your comment. Regarding your first point, I think it’s true that it’s still a problem to represent some populations online — but not all. For many types of surveys, an online sample does not represent a significant problem with coverage error. As to your second point, my intent was to describe what typically happens in practice and not to imply that it is correct.

  3. Nicely written, Dana – but I was expecting a conclusion, namely, whether you yourself agree with the order that you predicted!

  4. Tieming Lin says:

    Dana, thanks for the thought-provoking article. I guess the bottom line of the #1, #2 and #4 types of survey error you described has to do with sample representativeness. I want to share my observation about the non-response issue from years of practice:

    Making the 20+ call-backs in the good old days was required by the ‘law of randomness’ in survey sampling design, but even that was questionable. Among the 20+ call-backs, it was often the case that there were still non-responses from refusal. You can’t simply replace non-responders with new sample, since you’d break the law of random selection. Were those ‘not interested’ – which should be counted in the base for estimation – or were they simply not qualified – which should not be counted? We mostly didn’t know. I guess this flaw lies in the very practice of using the survey tool itself. I think the assumption may have changed from the old days, when you had to acknowledge non-response bias, to the present-day online and social media era, when non-response means ‘not interested’ and those are counted in your sampling estimate.
