The bell curve, the normal curve, the Gaussian distribution – by whatever name it goes, it enjoys a special place in our culture – and in our education system. It is typically assumed that data fall along a normal distribution. We are normally very focused on this most standard of distributions.
The reality is, however, that there are many real-life cases in which a normal distribution would be quite abnormal.
A study earlier this year in Personnel Psychology by Ernest O’Boyle Jr. and Herman Aguinis showed that across many job performance variables, the measures are distributed non-normally. Interestingly, they found that a power law distribution was much more common.
Full disclosure: While I’m sure Personnel Psychology is a fine and quite interesting journal, I do not normally read it – I was tipped off to the study by NPR.
I suspect that further investigation will replicate this debunking of normality across other domains.
What does this mean for research?
Quite simply, it means you shouldn’t follow the easy route and assume the construct your data are intended to measure is distributed normally. Give it some thought. Have a theoretical basis for your assumption before you plow ahead with selecting measurement and analysis tools.
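A quick empirical gut check can complement that theoretical basis. Here’s a minimal sketch in Python using SciPy’s Shapiro-Wilk test; the Pareto-distributed “scores” are simulated stand-ins for a heavy-tailed measure, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical heavy-tailed "performance" scores -- the power-law-like
# shape O'Boyle and Aguinis report. Swap in your own measure here.
scores = rng.pareto(a=2.0, size=500)

# Shapiro-Wilk tests the null hypothesis that the sample came from a
# normal distribution; a tiny p-value means normality is rejected.
w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.2e}")
```

A significant result doesn’t tell you what the distribution is, only that a normal one fits poorly; pair it with a histogram or Q-Q plot before committing to a method.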
For example, in the case of correlation, Pearson’s r assumes normally distributed data, while the rank-based Spearman’s rho does not.
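To make that concrete, here’s a hedged sketch (simulated data, assuming SciPy is available) that puts the two coefficients side by side on a monotonic but decidedly non-normal relationship:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# A monotonic but nonlinear, skewed relationship: Spearman's rank-based
# rho tracks the monotonic association, while Pearson's r is pulled
# around by the non-normal shape of the data.
x = rng.normal(size=500)
y = np.exp(x) + rng.normal(scale=0.1, size=500)

r, r_p = stats.pearsonr(x, y)
rho, rho_p = stats.spearmanr(x, y)
print(f"Pearson's r    = {r:.2f} (p = {r_p:.2e})")
print(f"Spearman's rho = {rho:.2f} (p = {rho_p:.2e})")
```

On data like these, rho sits near 1 while r understates the association – a gap you’d never see if you only ran the default.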
And then there’s the t-test, which may be the most widely abused statistic in market research. It is standard output in many data analysis packages, but it assumes normally distributed data.
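As an illustration only (simulated groups, assuming SciPy), the rank-based Mann-Whitney U test is one common alternative to reach for when the t-test’s normality assumption is doubtful:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two heavy-tailed "segments" with a small real difference between them.
group_a = rng.pareto(a=2.0, size=60)
group_b = rng.pareto(a=2.0, size=60) + 0.2

t_stat, t_p = stats.ttest_ind(group_a, group_b)     # assumes normality
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)  # rank-based, does not
print(f"t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
```

With heavy-tailed data like these, the two tests can easily disagree – which is exactly why the choice deserves more thought than accepting whatever the package prints by default.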
It comes down to this: don’t assume everything is tied up neatly with a bow. Get your hands dirty in your analysis, and you and your client will be better off for it.