Failure to replicate
August 31, 2015
The not-so-big news last week was the NYT article with the intriguing title, “Many Psychology Findings Not as Strong as Claimed, Study Says,” a rehash of this article in Science. In case you missed it, the bottom line is that the findings of roughly two-thirds of the 100 peer-reviewed psychology studies examined could not be replicated.
This should not surprise us. Way back in 1987, a group of British researchers compared results published on various cancer-related topics and “found 56 topics in which the results of a case control study were in conflict with the results from other studies of the same relationship.” And I expect all of us can think of a few things we do that used to be considered healthy but no longer are. And vice versa.
What might all of this mean for MR?
On more than one occasion Ray Poynter has warned us about publication bias and the fact that results from just one study do not always point to truth. The former is cited as one potential culprit in the Science article, and the latter ought to be obvious to anyone calling himself or herself a researcher. But is there something even more troubling at the heart of the problem, something MR has in spades? Questionable sampling practices.
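Publication bias, by the way, is easy to demonstrate with a quick back-of-the-envelope simulation of my own (the effect size and sample size here are assumptions for illustration, not figures from the Science article): run many small studies of a weak true effect, “publish” only the significant ones, and watch the published literature inflate the effect.

```python
# My illustration of publication bias, not anyone's actual data:
# simulate many small two-group studies of a weak true effect,
# keep only the significant positive results, and compare the
# average "published" effect size to the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, trials = 0.2, 30, 10_000   # assumed true effect; 30 per group

published = []
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(d, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:       # the file drawer swallows the rest
        pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
        published.append((treated.mean() - control.mean()) / pooled_sd)

print(f"true d = {d}, mean published d = {np.mean(published):.2f}")
```

At these numbers the “published” average runs around three times the true effect, which is exactly the kind of result a replication with a fresh sample will struggle to match.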
Psychologists are notorious for their use of relatively small convenience samples and for the belief that randomizing sample members across treatments cures all. I did not look at all 100 studies in the Science article, but the handful I did look at all collected new samples ranging in size from 60 to 220. Students and people in the street were popular choices. If that handful is indicative of all 100, I am shocked that only about 60 failed to replicate. Then again, I drew a very small convenience sample.
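To put some numbers on that, here is a minimal sketch of my own (assuming a modest true effect of Cohen’s d = 0.3 and a simple two-group t-test; neither assumption comes from the article): at those sample sizes, a genuine effect reaches significance only about 20 to 60 percent of the time, so a failed replication is often the expected outcome.

```python
# A rough power simulation, my assumptions rather than the article's:
# how often does a two-sample t-test detect a modest true effect
# (Cohen's d = 0.3) at the sample sizes mentioned above?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, trials = 0.3, 10_000          # assumed true effect; studies per size

for n in (60, 120, 220):         # total sample, split across two groups
    half = n // 2
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, half)
        treated = rng.normal(d, 1.0, half)
        _, p = stats.ttest_ind(treated, control)
        hits += p < 0.05
    print(f"n={n}: significant in {hits / trials:.0%} of simulated studies")
```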
For the most part, MR avoids small samples, except in qualitative research, where we generally are smart enough to characterize the results as “directional” at best and seldom representative. Good for us. But we are knee-deep in convenience samples these days. A little less certainty about what we claim they represent wouldn’t hurt.