Murky research methods that distort the truth may be widespread in psychology, a study suggests. While subtle, these questionable research practices pose a serious threat to scientific integrity.
The study is based on an anonymous survey e-mailed to 5,964 psychologists at major U.S. universities.
Of the 2,155 who responded, half admitted to selectively reporting only those experiments that delivered the results they wanted.
The findings, published in the journal Psychological Science, also revealed:
- 43 per cent admitted to excluding data after looking at the potential impact on the results.
- 22 per cent stopped collecting data earlier than planned because they got the result they were looking for.
- 35 per cent reported an unexpected finding as if it had been expected all along.
“These practices are very damaging … because they vastly increase the likelihood of finding a positive result when there isn’t anything really going on,” said George Loewenstein, one of the authors of the study and a professor of economics and psychology at Carnegie Mellon University in Pittsburgh.
If such results are the ones most likely to get published, they create a skewed body of research over time. “The literature becomes populated with false results – with results that are not true,” Prof. Loewenstein added.
He suspects that many researchers do not appreciate that what they are doing is wrong. And if some act this way, others will feel compelled to follow suit just to stay competitive in the “publish-or-perish” world of academic research. “It becomes a race to the bottom,” he added.
The consequences can be far-reaching. Published research is often used as a basis for public policy and determining the best practices for patient care. Furthermore, previous studies often shape the direction of future research.
“Scientists may try to build on the initial findings, but they can’t replicate them. So a lot of time is wasted,” Prof. Loewenstein said.
“Some psychologists will be unhappy with us publishing this [survey] because they will feel we are airing their dirty linen,” he said. “The solution, in our view, is not to sweep this under the rug, but we need to clean up our collective act.”
Leslie John, a study co-author at Harvard Business School, suggested that psychologists consider instituting a system in which journals accept articles for publication only if the study was registered before it began, with details about how it would be carried out. This safeguard would prevent last-minute changes to a study’s design or the disappearance of a failed study from the research record.