Does the psychology literature reflect reality? Thousands of studies have been published claiming that this affects that and that affects this, and so on. But how many of these effects are actually real? A 2021 paper by Anne Scheel and colleagues looked at this question, and came to the disturbing conclusion that a large share of them might not be. They might, in other words, be null effects masquerading as true (or ‘statistically significant’) effects.
The paper used a clever method. To understand it, you have to be aware of the two main sources of bias in the psychology literature (as well as the scientific literature more broadly).
The first is selective reporting, also known as the ‘file drawer problem’. Essentially, it’s much harder to get a null result published than it is to get a positive result published. Null results are regarded as ‘boring’ and ‘uninformative’ – not the sort of thing editors want filling up the pages of their vaunted journals. (This is despite the obvious fact that it’s often very useful to know when something isn’t true.) Not only that, but null results can ruffle people’s feathers. If a distinguished academic publishes a paper claiming that such-and-such is true, and then some other academic publishes his own paper showing the opposite is true, the first one might get rather perturbed. (Academics can be extremely petty, presumably because the stakes are so low.)
The second source of bias comes under the heading of questionable research practices or QRPs. These are things like: not making your data available for other researchers to check; tweaking your analysis until you get a positive result (‘p-hacking’); running many analyses but only reporting the ones that give positive results; and forming hypotheses after analysing the data (‘HARK-ing’).
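To see why running many analyses inflates positive results, here is a minimal simulation (my own illustration, not from Scheel and colleagues’ paper). It relies on one standard fact: when there is no real effect, p-values are uniformly distributed, so each test has a 5% chance of coming out ‘significant’ at p < .05 purely by chance. Run 20 such tests and report only the winners, and a false positive becomes the norm rather than the exception:

```python
import random

random.seed(0)

def fraction_with_false_positive(n_tests, n_sims=100_000, alpha=0.05):
    """Under the null hypothesis every p-value is uniform on [0, 1].
    Count how often at least one of n_tests comparisons comes out
    'significant' purely by chance."""
    hits = 0
    for _ in range(n_sims):
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

print(f"1 test:   {fraction_with_false_positive(1):.2f}")   # roughly 0.05
print(f"20 tests: {fraction_with_false_positive(20):.2f}")  # roughly 0.64
```

With 20 shots at significance, the chance of at least one spurious ‘finding’ is 1 − 0.95²⁰ ≈ 64% – which is why reporting only the analyses that ‘worked’ is so corrosive.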
In recent years, some journals and researchers have sought to address these two sources of bias through what are called ‘registered reports’. A registered report is an academic paper with two key features: it tests hypotheses that have been pre-registered via a time-stamped protocol posted online; and it is submitted to a journal and accepted for publication before the data have been collected and analysed (i.e., entirely on the basis of the hypotheses and proposed methods). By virtue of these two features, registered reports are immune to both selective reporting and questionable research practices.
Returning to Scheel and colleagues: they compared the percentage of articles with a positive result in a sample of registered reports with the percentage in a sample of standard reports (i.e., ordinary academic papers). To be specific, they checked whether the first hypothesis tested in each article was deemed by the authors to have been supported. In other words, did the authors write something like, ‘Our first hypothesis was confirmed’?
What did they find? The results are shown below:
![](https://dailysceptic.org/wp-content/uploads/2025/01/Registered-Reports-1-1024x980.jpeg)
As you can see, the first hypothesis was supported in 96% of standard reports but only 44% of registered reports – a gap of 52 percentage points. Now, registered reports are more likely to be replications of previous studies, so they might be less likely to find support for their hypotheses for that reason alone. However, even when the authors excluded replication studies from both samples, a massive difference of 46 percentage points remained.
This suggests that up to half the effects reported in psychology might not be real – that one in every two studies could be making a false claim. While Scheel and colleagues’ study has some limitations, like any other, their findings suggest that selective reporting and questionable research practices are absolutely rampant. And in case you’re wondering: yes, they did pre-register their own hypotheses.
Cue a seminal paper from 20 years ago, authored by biostatistician John Ioannidis, who drew back the curtain on “Why Most Published Research Findings Are False…”
https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020124&type=printable
“…There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias.”
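Ioannidis formalises that dependence on power and bias with a ‘positive predictive value’ (PPV): the probability that a claimed finding is true, given the study’s power, the significance threshold α, and the pre-study odds R that a tested relationship is real. A quick sketch of the bias-free version of his formula, with purely illustrative numbers:

```python
def ppv(power, alpha, prior_odds):
    """Ioannidis (2005), no-bias case: probability that a 'positive'
    finding is true, given study power, significance level alpha, and
    the pre-study odds R that the tested relationship is real."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# A well-powered study of a plausible hypothesis: most positives are real
print(round(ppv(power=0.80, alpha=0.05, prior_odds=1.0), 2))   # 0.94
# An underpowered study fishing among mostly-null hypotheses
print(round(ppv(power=0.20, alpha=0.05, prior_odds=0.1), 2))   # 0.29
```

In the second scenario, fewer than a third of ‘significant’ findings are true – before any bias or p-hacking is even added to the model.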
Meanwhile, 60 years ago, the one and only Richard Feynman made some typically pithy observations on psychologists and psychologising…
https://youtu.be/EYPapE-3FRw?t=335
…Sixty years of hot air later, what’s changed?
Physical sciences remain a different ball game. Thermodynamics, Maxwell’s Equations and Quantum Mechanics are self-evidently valid – otherwise you wouldn’t be reading this online and I wouldn’t have been typing it on a modern-day digital device.
Psychologists can make exactly zero reliable predictions.
The whole field is based on retrofitting explanations to observations.
Which isn’t to say the entire field is bogus. It’s more like a craft than a science. Just as a musician can become skilled and effective through lots of practice, so psychologists – and the manipulators governments employ – can become effective. But it ain’t much of a science, that’s for sure.
Economics is a branch of psychology – does the economics literature reflect reality?
This suggests that up to half the effects reported in psychology might not be real
A small sequence of attempts to restate this in semantically identical ways:
1) This suggests that at most 50% of the effects might not be real.
2) This can make people believe that at most 50% of the effects might not be real.
3) This can make people believe that at most 50% effects might be real.
4) This can make people believe that at most 50% of the effects are credible.
5) This can make people believe that they are willing to believe in at most 50% of the effects.
What, precisely, is ‘making people believe something about their willingness to believe something’ supposed to be?
Is this article real?
Could someone carry out a similar study of, ahem, “climate science” papers? (Yes, I know – what a laugh.)