Last week, Lab Land put out a Twitter poll touching on the cognitive distortions that make it difficult to do high-quality science. Lots of people (almost 50) responded! Thank you!
We had to be vague about where all this came from, because the underlying research paper had not yet been published. Ray Dingledine, in Emory’s Department of Pharmacology, asked us to run the Twitter poll first, to see what answers people would give. Now that Dingledine’s paper in eNeuro is published, we can explain.
The paper is titled “Why Is It So Hard To Do Good Science?” Basically, Dingledine argues, our cognitive biases get in the way. eNeuro summarizes the take-home message this way: “Improving experimental design and statistical analyses alone will not solve the reproducibility crisis in science.”
When designing their experiments, Dingledine says, scientists need to take account of “the law of small numbers” (the distortions random variation can introduce when sample sizes are small) along with other cognitive biases.
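The law of small numbers can be seen in a quick simulation. The sketch below (not from the paper; the sample sizes and distribution are illustrative assumptions) repeatedly draws small and large samples from the same population and compares how much the observed means bounce around:

```python
import random
import statistics

random.seed(42)

def sample_means(n, trials=1000, mu=0.0, sigma=1.0):
    """Observed mean of each of `trials` experiments with n subjects,
    all drawn from the same normal population N(mu, sigma)."""
    return [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
            for _ in range(trials)]

small = sample_means(5)    # 1000 "experiments" with n = 5
large = sample_means(50)   # 1000 "experiments" with n = 50

# The spread of observed means is far wider for the small samples,
# even though every experiment sampled the exact same population.
print(statistics.stdev(small))
print(statistics.stdev(large))
```

With n = 5, an individual experiment can easily report a sizable "effect" that is pure sampling noise, which is exactly the distortion the law of small numbers warns about.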
In the 1960s and 1970s, psychologists Daniel Kahneman and Amos Tversky demonstrated that people tend to engage in “fast thinking” — relying on preconceived notions and emotions — when making decisions in the face of new information. In his update of this research, Dingledine found that scientists of all career stages are subject to the same biases as undergraduates when interpreting data.
The findings reinforce the roles that two inherent intuitions play in scientific decision-making: our drive to create a coherent narrative from new data regardless of its quality or relevance, and our inclination to seek patterns in data whether they exist or not. Moreover, we do not always consider how plausible a result was in the first place, regardless of its P-value.
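Why prior plausibility matters can be shown with a standard back-of-the-envelope calculation (the specific numbers below are illustrative assumptions, not figures from Dingledine's paper): even a statistically significant result has a good chance of being a false positive when few of the hypotheses being tested are actually true.

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' (p < alpha) finding reflects a
    real effect, given the prior probability the hypothesis is true."""
    true_positives = prior * power          # real effects detected
    false_positives = (1 - prior) * alpha   # null effects flagged anyway
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is actually true, a significant
# result is real only about 64% of the time:
print(round(positive_predictive_value(0.10), 2))  # 0.64
```

The point is not the exact percentage but the shape of the dependence: the same P-value means very different things depending on how likely the hypothesis was before the experiment.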