Online Sample Quality – Pure Oxymoron


I attended a CASRO conference in New Orleans back in late ’08 or early ’09. The topic was ‘Online Panel Quality.’ I’ve often thought about that conference: the speakers, the various sessions I attended. I recall the session about satisficing, which at the time was being newly introduced into the MR space (the word itself goes back decades): I thought that was an interesting expression for a routine occurrence. Mostly, however, I remember the handwringing over recruitment techniques, removing duplicates, digital fingerprinting measures, and related topics du jour. And I remember thinking to myself, for 2 days non-stop: “Are you kidding me? Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.”

Allow me to explain where I’m coming from. My academic training is in I/O Psychology. Part of that training involves deep dives into survey design. Taking a 700-level testing and measurements course for a semester is a soupçon more rigorous than hearing “write good questions.” For example, we spent weeks examining predictive validity, both as a measurement construct and in terms of how it has held up in courtrooms. More to the point, when you’re administering written IQ tests, psych evals, or (in particular) any written test used for employment selection, you are skating on thin ice, legally speaking. You open yourself up to all kinds of discrimination claims. Compare writing a selection instrument that will withstand a courtroom challenge with writing a CSAT or loyalty survey. Different animals, perhaps, but both are Q&A formats: a question is presented, and a reply is requested. However, the gulf in training for constructing MR-type surveys is visible to anyone viewing the forest in addition to the trees.

An MR leader at a huge tech company said something on a call that I remember vividly. He asked: “When is the last time you washed your rental car?” The context was online sample. And he was one of the few, very few really, whom I’ve encountered in the 12 years I’ve been in that space who openly named the problem. The problem is this: why would you ever wash your rental car? Why change the oil? Why care for it at all? You use it for a day, or a week, and you return it. Online respondents are no different. You use them for 5 minutes, or 20, and return them. If we actually cared about them, the surveys we offer them wouldn’t be so stupefyingly poorly written.

I’ve seen literally hundreds of surveys that have been presented to online panelists. I’ve been a member of numerous panels as well. Half of these surveys are flat-out laughable: filled with errors, missing a ‘none of the above’ option, requiring respondents to evaluate a hotel or restaurant they’ve never visited. Around a quarter consist of nothing but pages of matrices. Matrices are the laziest type of survey writing. Sure, we can run data reductions on them and get our eigenvalues to the decimal point. Good for us. And the remaining quarter? If you’re an online panelist, they’re simply boring. Do I really want to answer 30 questions about my laundry detergent? For a dollar? Ever think about who is really taking these surveys?

Sidebar: Do you know who writes good surveys? Marketing people using DIY survey software. Short, to-the-point surveys. Three minutes. MR practitioners hate to hear it, or even think about it, but that’s reality. I’ve seen plenty of these surveys by ‘non-experts.’ They’re not only fine, but they produce good and useful data from their quick-hit surveys.

Since you’ve made it this far, it’s time to bring up the bad news. I’ve been accumulating stories over the last 12 years. I’ll share a few. These all happened (but I’m not identifying any person or firm).

  • Having admin rights to a live commercial panel, I found a person with 37 accounts (there was a $1 ‘tell a friend’ recruitment carrot). I also found people with multiple accounts and a staggering number of points, to the point of impossibility.
  • The sales rep who claimed to be able to offer a “bipolar panel” and sold a project requiring thousands of completes from respondents with a bipolar or schizophrenia diagnosis.
  • The other sales reps I know personally (at least 5) who make $20,000 to $30,000 per month selling sample projects. Hey, Godspeed, right? Thing is, not one of them could tell you what a standard deviation is, let alone the rudimentary aspects of sampling theory. Don’t believe me? Ask them. Clearly, knowing these things is not a barrier to success in this space. Just a pet peeve of mine.
  • Basically, this entire system works via highly paid deli counter employees. “We can offer you 2 pounds of sliced turkey, a pound and a half of potato salad, and an augment of coleslaw, for this CPI.” Slinging sample by the pound, and letting the overworked and underappreciated sample managers handle the cleanup and backroom top-offs.
  • The top 10 global MR firm who finally realized their years-long giant tracker was being filled largely with river sample, which was strictly prohibited.
  • Chinese hacker farms have infiltrated several major panels. I know this for a fact (as do many others). You can digital fingerprint and whatnot all day long, but they get around it. They get around encrypted URLs. Identity corroboration. You name it: they get around it.
  • The needle in a haystack B2B project that was magically filled overnight, the day before it was due.
  • Biting my tongue when senior MR execs explained to me their research team insists on 60-minute online surveys, and they’re powerless to flush out their headgear.
  • Biting my tongue when receiving 64-cell sampling plans. The myopic obsession with filling demographic cells at the exclusion of any other attributes, such as: Who are these respondents? You’re projecting them out to non-panelists as if they’re one and the same?
  • Watching two big global panels merge and scrutinize for overlap/duplicates, stretching across 12 countries. USA had 18% overlap, the rest (mostly Europe) had 10%. Is this bad? No idea. Maybe it’s normal.
  • Most online studies are being at least partially filled with river sample (is anyone surprised by this?).
  • Infiltration of physician panels by non-physicians.
  • Visiting the big end client for the annual supplier review and watching them (literally) high-five each other as to who wrote the longest online survey. The “winner” had 84 questions. We had performed a drop-off analysis, which fell on deaf ears.
  • A team of interns inside every major panel, taking the surveys, guessing the end client, and sharing that with the sales team in a weekly update.
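Several of the bullets above (the 37-account recruit, the panel-merge overlap check) reduce to the same mechanical task: grouping accounts by the identifiers they reuse. As a minimal sketch, assuming hypothetical field names like `payout_email` and `signup_ip` (real panel schemas will differ):

```python
# Hypothetical sketch: flag likely duplicate panel accounts by grouping on
# reused payout details and signup IPs. Field names are invented for
# illustration; real panel databases will look different.
from collections import defaultdict

def flag_duplicates(accounts):
    """Return groups of account IDs that share a payout email or signup IP."""
    by_key = defaultdict(set)
    for acct in accounts:
        # Any reused payout email or signup IP links accounts together.
        by_key[("email", acct["payout_email"].lower())].add(acct["id"])
        by_key[("ip", acct["signup_ip"])].add(acct["id"])
    return [ids for ids in by_key.values() if len(ids) > 1]

accounts = [
    {"id": "A1", "payout_email": "pro@x.com",  "signup_ip": "10.0.0.5"},
    {"id": "A2", "payout_email": "PRO@x.com",  "signup_ip": "10.0.0.9"},
    {"id": "A3", "payout_email": "real@y.com", "signup_ip": "10.0.0.7"},
]
print(flag_duplicates(accounts))  # A1 and A2 share a payout email
```

Real dedup stacks layer device fingerprinting and behavioral signals on top of checks like this, but as the stories above suggest, determined fraudsters get around each layer; cheap matching only catches the careless.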

Lastly, and for me the saddest of my observations, are the new mechanics of sample purchasing. The heat and light on sample quality that peaked about four years ago have been in steady decline. In the last couple of years, sample quality is simply assumed. End-client project sponsors assume their suppliers have it covered. The MR firms assume their suppliers have it covered. And the sad part? The sample buyers at MR firms, and I’ve seen this countless times, do not receive trickle-down executive support for paying a bit more for the sample supplier who actually is making an effort and investment to boost sample quality, via validation measures for example. There are exceptions to this, or were, in the form of CPI premiums, but no widespread market acceptance to pay a buck or three more. In fact, the buying mechanics are simple: get 3–4 bids, line them up, and go with the cheapest CPI, assuming the feasibility is there. This happens daily, and has for years. And by cheaper, I’m talking 25 cents cheaper. Or 3 cents. That’s what this comes down to.

So chew on this: Why would a sample supplier pour money down the quality rabbit hole? Quality is not winning them orders. Margin is. Anyone working behind the scenes has also seen this movie, many times. Incidentally, there’s nothing wrong with buying on price; we all do this in our daily lives. The point is this: if you’re going to enforce or even expect rigorous sample quality protocols from your suppliers, then give your in-house sample buyers the latitude to reduce your project margins. I won’t hold my breath on this, but that’s what it takes.

I could go on, but more is not necessarily better. This is the monster we’ve created: $2 and $3 CPIs have a ripple effect. How can a firm possibly invest in decent security architecture at prices like this? How can we expect them to? If you’re buying $2 sample, why not go to the source and spend 50 cents?

Now that I’ve thoroughly depressed you, one may wonder: is there any good news? I remember telling my colleague 5 years ago, ‘If a firm with a bunch of legitimate web traffic, like Google, ever got in this racket, they would upend this space.’ I didn’t think that would actually happen, but there you go (that one may still be depressing to some). I also believe that ‘invite-only’ panels give the best shot at good, clean sample. When you open your front door to anyone with a web connection and tell them there’s money to be made, well, see above. More recently I’ve become a convert to smartphone-powered research. Many problems are removed. It has its own peculiarities, but from a data integrity perspective, it’s hard to beat. Lastly, and I could do a whole other riff on this one: when we design surveys with no open-end comment capture, we’re hoisting an ‘open for business’ sign to fraudulent activity. Yes, you can add the ‘please indicate the 4th option in this question’ trick, but both bots and human pros spot red herrings like that. It’s much more difficult to fake good, in-context open-ended verbiage. Yes, it takes a bit more work on the back end, and there are many solutions that can assist with this, one in particular. And the insights you can now share via this qual(ish) add-on are a nice change of pace relative to the presentation of trendlines and decimal points.
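To make the open-end point concrete, here is a minimal sketch of the kind of cheap heuristics a back end might run over open-end text before any human review. The thresholds are invented for illustration, not calibrated, and any real pipeline would go much further:

```python
# Hypothetical sketch: cheap heuristics for spotting low-effort or mashed
# open-end answers. Thresholds are illustrative, not calibrated.
import re

def looks_fraudulent(answer: str) -> bool:
    """Flag open-end text that is too short, repetitive, or keyboard mash."""
    text = answer.strip()
    if len(text) < 10:                      # "good", "nice", "asdf"
        return True
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:                           # no alphabetic content at all
        return True
    if len(set(words)) / len(words) < 0.5:  # heavy word repetition
        return True
    vowels = sum(w.count(c) for w in words for c in "aeiou")
    letters = sum(len(w) for w in words)
    if vowels / letters < 0.2:              # gibberish like "sdfkj qwrtp"
        return True
    return False

print(looks_fraudulent("asdf asdf asdf"))                        # True
print(looks_fraudulent("The checkout flow froze on step two."))  # False
```

Checks like these catch only the laziest fakes; the real value of open-ends, as argued above, is that convincing in-context verbiage is expensive to fabricate at scale.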

That’s all for now. Thank you for reading.

Scott Weinberg is the founder of Tabla Mobile, which provides mobile research consulting. He is the immediate past president of the Upper Midwest Chapter of the Marketing Research Association.

