A push for reproducibility in biomedical research

Editor’s note: guest post from Neuroscience graduate student Erica Landis.

Evidence is increasing that lack of reproducibility, whatever the cause, is a systemic problem in biomedical science. While institutions like the NIH and concerned journal editors are implementing more stringent requirements for rigorous and reproducible research, scientists themselves must make conscious efforts to avoid common pitfalls of scientific research. Here at Emory, several scientists are pushing to improve scientific research and combat what is being called “the reproducibility crisis.”

In 2012, C. Glenn Begley, then a scientist with the pharmaceutical company Amgen, published a commentary in Nature on his growing concern about the reproducibility of preclinical research. Begley and his colleagues had attempted to replicate 53 published studies they identified as relevant to their own research into potential pharmaceuticals. They found that only 6 of the 53 publications could be replicated, even with help from the original authors. Similar studies have consistently found that more than 50 percent of published studies could not be replicated. This sparked a period of great concern and questioning among scientists. It seemed to Begley and others that experimenter bias, carelessness, poor understanding of statistics, and the career-dependent scramble to publish all contribute to a misuse of the scientific method. These factors feed what is now called the reproducibility crisis. In April 2017, Richard Harris published Rigor Mortis, a survey of the problem in preclinical research, which has kept the conversation going and left many wondering what the best solutions to these issues could be. To combat the reproducibility crisis, Harris argues, funding agencies, journal editors and reviewers, research institutions, and scientists themselves all have a role to play.

For its part, the National Institutes of Health (NIH) developed new requirements for reproducibility and transparency in research proposals submitted to its institutes, to ensure researchers conduct their studies with rigor and care. These guidelines include authentication of key biological and chemical resources involved in experiments. “This is just the scientific method,” says Stacy Heilman, PhD, Emory Pediatrics Assistant Professor and Co-Director for Pediatric Research Operations, Grants Education & Cores.

In her role leading the K-Club, a grant writing support group, Heilman helps Emory’s early career scientists both understand the need for rigor and transparency and satisfy that need in their funding proposals. In her opinion, these requirements are simply a return to good science, from which the field has drifted. In addition to validating experimental materials, Heilman points out the importance of statistics and communication skills in solving the reproducibility crisis. Understanding the application and interpretation of statistics in research, she says, is the “crux” of effective science. Beyond proposing rigorous statistics and careful experiments in grant applications, scientists must carry those changes into their own work to effect change.

Funding is not the only area of science that must improve for rigor and transparency to be restored in preclinical research. Scientists themselves must ask each other to raise the standards applied to experiments and publications. Ray Dingledine, PhD, Professor of Pharmacology, recently gave a talk explaining how he feels scientists contribute to the reproducibility crisis within their own labs. First, he says, we place too much importance on an arbitrary standard of statistical significance, the p value. Like Heilman, Dingledine sees proper statistics as a key part of the scientific method and their misuse as a common error in published studies. In addition, academic research prioritizes high-impact publications, which encourages researchers to pursue quick-to-publish, exploratory experiments over more time- and money-intensive experiments that are more likely to be reproducible. Finally, Dingledine argues that human nature works against us: we put too much confidence in small numbers and early results, which can lead us to incorrect conclusions. These ideas complement those that Begley, Harris, and others have proposed. Dingledine leaves his audience with advice borrowed from psychologist Daniel Kahneman: think more slowly. By being more careful and thoughtful about how they design experiments and apply statistics to interpret results, scientists can improve the chance their work will be upheld by future investigators, Dingledine says.
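Dingledine’s warning about putting too much confidence in small numbers can be made concrete with a quick simulation. The sketch below is my own illustration, not code from his talk, and every parameter choice (sample size, effect size, number of simulated experiments) is a hypothetical: it repeatedly runs a small two-group experiment with a modest true effect and tests significance with a permutation test. Few runs reach p < 0.05, and the runs that do tend to overestimate the true effect, which is exactly how underpowered studies produce findings that fail to replicate.

```python
import random
import statistics

random.seed(42)

def perm_p_value(a, b, n_perm=500):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(pa) - statistics.mean(pb)) >= observed:
            extreme += 1
    return extreme / n_perm

TRUE_EFFECT = 0.5     # true group difference, in standard-deviation units
N_PER_GROUP = 5       # a typically small preclinical sample
N_EXPERIMENTS = 300   # how many labs "run the same study"

hits = 0
significant_effects = []
for _ in range(N_EXPERIMENTS):
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    if perm_p_value(control, treated) < 0.05:
        hits += 1
        significant_effects.append(statistics.mean(treated) - statistics.mean(control))

power = hits / N_EXPERIMENTS
print(f"fraction of experiments reaching p < 0.05: {power:.2f}")
if significant_effects:
    print(f"mean effect among significant results: "
          f"{statistics.mean(significant_effects):.2f} "
          f"(true effect: {TRUE_EFFECT})")
```

With only five animals or samples per group, most simulated experiments miss a real effect entirely, and the “lucky” significant ones report an inflated effect size — a small-numbers trap that slower, better-powered designs avoid.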

Posted by Quinn Eastman in Uncategorized

About the author

Quinn Eastman

Science Writer, Research Communications
qeastma@emory.edu
404-727-7829 Office
