AAPOR 2014 on Nonresponse and Data Science

Anaheim Convention Center

Last week the American Association for Public Opinion Research (AAPOR) held its 69th annual conference in sunny (and hot) Anaheim, California.

My biggest takeaway from this year’s conference is that AAPOR is a very healthy organization. AAPOR attendees were genuinely happy to be at the conference, enthusiastic about AAPOR, and excited about the conference material. Many participants consider AAPOR their intellectual and professional home base and really relished the opportunity to be around kindred spirits (often socially awkward professionals who are genuinely excited about our niche). All of the presentations I saw firsthand or heard about were solid and dense, and the presenters were excited about their work and their findings. Membership, conference attendance, journal and conference submissions and volunteer participation are all quite strong.

At this point, the field of survey research is encountering a set of challenges: nonresponse is growing, and other forms of data and analysis are increasingly in vogue. I was really excited to see that AAPOR members are meeting these challenges, and others, head on.

Survey Nonresponse

As survey nonresponse becomes more of a challenge, survey researchers are moving from traditional measures of response quality (e.g., response rates) to newer measures (e.g., nonresponse bias). Researchers are increasingly anchoring their discussions about survey quality within the Total Survey Error framework, which offers a contextual basis for understanding the problem more deeply. Instead of focusing on the across-the-board decline in response rates, researchers are allocating their resources strategically with the goal of reducing nonresponse bias. This includes:

  • Understanding response propensity: Who is unlikely to respond to the survey? Who is most likely to drop out of a panel study? What are some of the barriers to survey participation?
  • Looking for substantive measures that correlate with response propensity: Are small, rural private schools less likely to respond to a school survey? Are substance users less likely to respond to a survey about substance abuse?
  • Continuously monitoring paradata during the collection period: developing differential strategies by disposition code, assigning the most successful interviewers to the most reluctant cases, or concentrating collection strategies where they are expected to be most effective.

Nonresponse propensity modeling, a process that is more accessible than it sounds, has evolved into a practical and useful tool that can help a research shop of any size increase survey quality and lower costs.
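At heart, propensity modeling is a familiar classification exercise: predict from frame variables who is likely to respond, then target follow-up effort at the low-propensity cases. The sketch below is a minimal, hypothetical illustration of that idea (not drawn from any AAPOR talk), using a tiny logistic regression fit by gradient descent on made-up school-survey frame variables; the variable names and data are illustrative assumptions only.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a plain logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = yi - p  # gradient of the log-likelihood for one case
            for j, xj in enumerate(xi):
                w[j] += lr * err * xj
            b += lr * err
    return w, b

def propensity(w, b, xi):
    """Predicted probability that a sampled case will respond."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Hypothetical frame variables per school: [is_rural, centered log enrollment]
X = [[1, -1.2], [1, -0.8], [0, 0.5], [0, 1.1],
     [1, -0.3], [0, 0.9], [0, 0.2], [1, -1.0]]
y = [0, 0, 1, 1, 0, 1, 1, 0]  # 1 = responded in a prior wave

w, b = fit_logistic(X, y)
scores = [propensity(w, b, xi) for xi in X]

# Cases with the lowest predicted propensity get priority follow-up
priority = sorted(range(len(X)), key=lambda i: scores[i])[:3]
```

In practice one would use survey weights, richer frame data, and an off-the-shelf model, but the workflow is the same: score every sampled case, then concentrate interviewer effort (or incentives) where predicted propensity is lowest.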

Big Data and Data Science

Another big takeaway for me was the volume of discussions and presentations that spoke to the fast-emerging world of data science and Big Data. Many people spoke of the importance of our voice in the realm of data science, particularly with our professional focus on understanding and mitigating errors in the research process. A few practitioners applied error frameworks to analyses of organic data, and some talks were based on analyses of such organic data. AAPOR 2014 also sponsored a research hack to investigate the potential for Instagram as a research tool for Feed the Hungry. These discussions, presentations and activities made it clear that AAPOR will continue to have a strong voice in the changing research environment. The reports from task forces and the initiatives from both the membership and education committees reinforced AAPOR’s ability to be right on top of the many changes afoot. I’m eager to see AAPOR’s changing role take shape.

“If you had asked social scientists even 20 years ago what powers they dreamed of acquiring, they might have cited the capacity to track the behaviors, purchases, movements, interactions, and thoughts of whole cities of people, in real time.” – N.A. Christakis. 24 June 2011. New York Times, via Craig Hill (RTI)

AAPOR is a strong, well-loved organization, and it is building a bright future from a solid foundation.

Casey Tesfaye is a professional survey researcher at the American Institute of Physics. She is in her final year of the Master's in Language and Communication program at Georgetown University. She blogs regularly at Free Range Research.
Comments

  1. Great topic. I have gotten into some intense discussions on how to handle “missing data”. With survey work it's difficult enough to try to understand what you are not surveying. For “Big Data” analysis it will be exponentially more complicated, as you may often be putting together data sets that have been under very different levels of scrutiny and maintenance. One month the missing data may represent a null value; the next month the root cause could be a back-end systems upgrade that took everything offline. Another month, a team was disbanded. This will be a major issue.