‘Experts’ haven’t exactly covered themselves in glory during the pandemic. Their pronouncements concerning things like lockdowns, masks and herd immunity seem to be more correlated with swings in Twitter sentiment than with any fundamental changes in scientific evidence.
We’re used to seeing graphs like this one, which show the actual course of the epidemic deviating rather substantially from what was predicted. And where there is some correspondence between data and forecasts, this is usually because the forecasts included so many different ‘scenarios’ that one of them had to be right.
Famously, Neil Ferguson said it was “almost inevitable” that cases would reach 100,000 per day after some restrictions were removed on 19th July. What cases actually did over the next 10 days was fall by nearly 50%.
(If Ferguson tells you it’s “almost inevitable” that he’ll meet you on time, it’s probably best to bring a book, or delay your own arrival by half an hour.)
Okay, so the ‘experts’ aren’t very good at predicting where cases or deaths will be a few weeks hence. But they’re surely better than the rest of us. And since some information is better than no information, we shouldn’t dismiss them entirely – right?
A study published earlier this year did find that experts (defined as “epidemiologists, statisticians, mathematical modelers, virologists, and clinicians”) were more accurate at forecasting the UK’s death toll in 2020 than were random members of the public.
In April 2020, Gabriel Recchia and colleagues asked 140 experts, as well as 2,000 members of the public, to guess how many people in the UK would die of COVID by 31st December that year. Each participant was asked to give a ‘75% confidence interval’ around their guess.
The correct answer (which can be debated, of course) fell within the 75% confidence interval for 10% of non-experts and 36% of experts. So the experts did better, but fewer than half of them were even close. And bear in mind that if either group had been well calibrated, the true value would have landed inside roughly 75% of their intervals.
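For anyone curious what that ‘coverage’ figure actually measures, here is a minimal sketch of the calculation, assuming each forecast is submitted as a (lower, upper) interval. The intervals and the death toll below are purely illustrative placeholders, not data from the Recchia study.

```python
# Minimal sketch of interval 'coverage': what fraction of forecasters'
# intervals contained the true value? (Illustrative numbers only.)

def coverage(intervals, true_value):
    """Fraction of (lower, upper) intervals that contain true_value."""
    hits = sum(lower <= true_value <= upper for lower, upper in intervals)
    return hits / len(intervals)

# Hypothetical 75% confidence intervals for UK COVID deaths in 2020
example_intervals = [
    (5_000, 20_000),
    (30_000, 60_000),
    (10_000, 40_000),
    (80_000, 150_000),
]

true_deaths = 75_000  # placeholder figure, not the official count

print(f"Coverage: {coverage(example_intervals, true_deaths):.0%}")
# A well-calibrated group giving 75% intervals should score close to 75%;
# the experts in the study scored 36%, and the public scored 10%.
```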
A more recent study reached slightly different conclusions. Earlier this year, the epiforecast group at the London School of Hygiene & Tropical Medicine hosted a forecasting competition in which they invited members of the public to predict weekly case and death numbers in the UK.
The competition ran from 24th May to 16th August. Both experts and non-experts were eligible to compete, experts being those who declared themselves as such when they signed up (so we’re presumably talking about epidemiologists and people with a background in forecasting).
What did the researchers find? In this case, the self-declared experts performed slightly worse than the non-experts, although neither group did especially well.
Why did the two studies reach different conclusions? I suspect the answer lies in the composition of each study’s non-expert group. In the first study, the non-experts were random members of the public, whereas in the second, they were laymen who chose to take part in a forecasting tournament.
The psychologist Philip Tetlock has gathered a large amount of evidence that, when it comes to quantitative forecasting, experts aren’t any better than well-informed laymen (even if they do have an edge over the man on the street).
I suspect the non-experts who took part in the Covid forecasting tournament were the kind of well-informed laymen that Tetlock identified in his research. After all, you’d have to be pretty geeky to find out about such a tournament in the first place.
Overall, the evidence suggests that no one’s particularly good at forecasting the epidemic. Where the ‘experts’ do have an advantage is in making their predictions appear scientific.