University of Salzburg researchers analyzed the results of 1,000 randomly selected published psychology articles from 2007 and found strong evidence of publication biases, according to a study in PLOS One.
Theoretically, the researchers wrote, the size of positive outcome effects and the number of people involved in a study should be independent of each other. However, their analysis “found a strong negative correlation… That is, studies using small samples report larger effects than studies using large samples.”
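This mechanism can be illustrated with a small simulation (a hypothetical sketch, not the study's own method or data): if studies measuring a true effect of zero are "published" only when they reach significance, the published record shows exactly the negative correlation the researchers describe, because small samples require large observed effects to cross the threshold.

```python
import math
import random

def simulate_published_effects(n_studies=20000, seed=0):
    """Simulate studies of a true effect of zero; 'publish' only those
    whose positive observed effect reaches z > 1.96 (p < .05, one-sided)."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_studies):
        n = rng.randint(10, 400)                 # sample size varies across studies
        effect = rng.gauss(0, 1 / math.sqrt(n))  # observed standardized effect
        z = effect * math.sqrt(n)
        if z > 1.96:                             # selective publication filter
            published.append((n, effect))
    return published

def pearson_r(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

published = simulate_published_effects()
print(f"r between sample size and published effect: {pearson_r(published):.2f}")
```

Even though every simulated study measures the same null effect, the correlation between sample size and published effect size comes out strongly negative, mirroring the pattern the researchers found in the real literature.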
They also found that about three times as many studies just barely reached statistical significance for positive outcomes as just barely missed it. “This indicates that it is the significance of findings which mainly determines whether or not a study is published,” they wrote.
“This pattern of findings allows only one conclusion: there is strong publication bias in psychological research,” the researchers wrote. They did not know the exact reasons for their findings, but they proposed various possible solutions and concluded, “Publication practice needs improvement. Otherwise misestimation of empirical effects will continue and will threaten the credibility of the entire field of psychology.”