As 2022 ends, the yearly death totals are rolled out by various media outlets as a source of concern.
The Times reports 1,000 excess deaths each week as the NHS buckles, with 656,735 U.K. deaths in the last year. Using the pre-Covid five-year average, it notes, “50,000 more people died last year than normal”.
The BBC reported that 9% more people died in 2022 than in 2019. Within hours of the figures being published, it concluded that the “data indicates pandemic effects on health and NHS pressures are among the leading explanations”.
No need then for painstaking epidemiology, assessment of confounders and determination of causation as opposed to an association. The BBC has the answer. But does it?
One of its explanations is the “lasting effect of pandemic”, pointing to a “number of studies that found people are more likely to have heart problems and strokes”.
All three studies cited are retrospective. The first is a review of patients admitted to hospital in 2020 (mean follow-up 140 days). The second, also from 2020, compared U.S. individuals in the Veterans Affairs program (predominantly white males) who had a positive Covid test with matched controls from 2017. The third was much the same, reporting excess risk in the four months after a positive test.
It is hard to infer lasting effects from these studies: the data are from 2020, before vaccines were introduced; the populations are selected (e.g., those admitted to hospital); the follow-up is short term (e.g., four months); the individuals were infected with different variants from those circulating now – as one of the reviews points out, “it is possible that the epidemiology of cardiovascular manifestations in COVID-19 might also change over time” – and the study design and selection of controls introduce biases.
Underlying the BBC statements is the failure to separate the effects of SARS-CoV-2 infection (which the cited studies analyse) from those of restrictions. Why does the BBC conflate two separate possible causes?
But that’s not all. The BBC piece states, “some of this may be contributed to by the fact many people didn’t come in for screenings and non-urgent treatment during the peak of the pandemic”.
This refers to a pre-print paper by Dale et al. on the adverse impact of Covid on cardiovascular disease prevention and management in England, Scotland and Wales. The article reports that 491,203 fewer individuals than expected initiated antihypertensive treatment from March 2020 to the end of May 2021, and estimated, on the assumption that none of these individuals received treatment, that there could be 13,659 additional cardiovascular disease events (note: events, not deaths).
We have already reported that the NHS’s data show no decline in prescriptions for any cardiovascular drug since 2019. We also discussed the Number Needed to Treat (NNT): for five years of statin treatment in primary prevention, the NNT to prevent one death is 138.
For hypertension treatment, the NNT over five years to prevent one death is 125. It therefore takes time for the effects of undertreating hypertension to become apparent.
So let’s assume that the 500,000 in the Dale et al. paper were never treated. With an NNT of 125, we might expect around 4,000 extra deaths from hypertension-related diseases over the next five years.
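The arithmetic behind that estimate is simple enough to check directly. This is a back-of-envelope sketch using only the figures already quoted above, not a formal model:

```python
# Back-of-envelope check of the figures above; the NNT and the untreated
# count come from the article itself, nothing else is assumed.
nnt_hypertension = 125    # treat 125 people for five years to prevent one death
untreated = 500_000       # rounded figure from the Dale et al. pre-print

extra_deaths = untreated / nnt_hypertension
print(extra_deaths)       # 4000.0 extra deaths over five years

# The pre-print's exact figure gives much the same answer.
print(491_203 / nnt_hypertension)   # ~3930
```

Either way, the excess implied by the missed treatments is measured in thousands spread over five years, not tens of thousands in a single year.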
This does not undermine the importance of blood pressure control. In the U.K., hypertension is the leading modifiable risk factor for heart disease.
The British Heart Foundation estimates 15 million adults have high blood pressure, of which roughly half are not receiving adequate treatment – a third (five million) are undiagnosed.
Furthermore, your risk of death is not just related to the identification of hypertension. Your age, the level of blood pressure and comorbidities make a substantial difference to the risk of death. None of this is accounted for in the retrospective analyses that dominate current research outputs.
The media cycle is designed to produce headlines: find the highest excess estimate, then find some reasons to explain it – job done. Yet we have previously shown that estimates of non-Covid excess mortality based on historic averages are 50-100% higher than those from methods that incorporate trends.
Asking for opinions only fuels speculation about what might be causing the excess. The days of ‘what we think…’ have to end. Evidence-based medicine is hard; it requires high-quality, detailed epidemiological studies. It took Richard Doll’s prospective study of British doctors, which followed participants up for 50 years, to establish the effect of smoking – one of the largest risk factors for cardiovascular disease death.
In August, we listed eight non-mutually exclusive causes that require investigation. We’re not sorry for repeating this: all of them should be tested before drawing conclusions.

Ascertainment of causation requires serious work, not inferences from studies taken out of their context or headline bait.
Dr. Carl Heneghan is the Oxford Professor of Evidence Based Medicine and Dr. Tom Jefferson is an epidemiologist based in Rome who works with Professor Heneghan on the Cochrane Collaboration. This article was first published on their Substack blog, Trust The Evidence, which you can subscribe to here.
If by some chance they built an AI chatbot that didn’t give the “right” answers you can be sure it would get tweaked until it did so.
I prefer my poetry to be the product of a human mind:
https://poets.org/poem/mask-anarchy-excerpt
I doubt it has just picked up its biases from the web; the concern in AI circles has been to remove alleged “bias” (i.e. anything that’s not fashionably left wing) from AI, which by default is rather more inconveniently reality-based than they would like.
In fact it says it has no web access; it is limited to the texts it has been trained on.
It just crashed on me :O
I’d say it’s a disappointment after the (cherry-picked?) conversations with GPT-3 that I’ve seen on YouTube. Stilted, poor general knowledge, lots of boilerplate text, clearly a relative of Microsoft’s “Clippy” assistant.
Politics is learned not innate.
There is no true AI, just clever programming – which includes all the bias and characteristics of the programmers – giving the appearance of independent thinking.
Just think of it as a computer model, then think Covid and climate doom.
People are too easily seduced when they hear ‘computer’, ‘expert’, ‘science’ and things they don’t understand.
There’s no clever programming involved here – unless your definition of clever includes “create a program with unpredictable behaviour so nobody can accuse you of having made an error when implementing it”.
Agree that there is no true AI. When today’s news refers to AI, it is nearly always actually describing machine learning (ML). With machine learning you set the objectives that represent success and let the machine build its own algorithms to find the fastest, most efficient way to satisfy them. Generally there is no bias in the ML engine itself, because it is pure programming of the kind where 2+2 = 4 and not 2+2 = 4+r, where r = reparations for racism or climate or transphobia. So I slightly disagree that it inevitably includes the biases of the programmer. The bias is introduced either when the objectives are defined or through the input data being processed. If biases exist in the pure ML engine – which is unlikely – they have been explicitly and consciously put there, usually at the stage where objectives are defined. Processing biased data sources is the other route. As the old data processing saying goes: “Garbage in, garbage out.”
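The point about where bias enters can be shown with a toy example. The “engine” below is deliberately trivial (it just learns the majority label per input) and the data are invented; the procedure is neutral counting, and the same code produces different outputs depending only on what it is fed:

```python
# Toy "ML engine": learn the most frequent label for each input value.
# The learning procedure is neutral arithmetic; any bias comes from the data.
from collections import Counter, defaultdict

def train(examples):
    # examples: list of (feature, label) pairs
    tally = defaultdict(Counter)
    for feature, label in examples:
        tally[feature][label] += 1
    # predict the most frequent label seen for each feature value
    return {f: c.most_common(1)[0][0] for f, c in tally.items()}

balanced = [("A", "hire"), ("A", "reject"), ("A", "hire"),
            ("B", "hire"), ("B", "reject"), ("B", "hire")]
skewed   = [("A", "hire"), ("A", "hire"), ("A", "hire"),
            ("B", "reject"), ("B", "reject"), ("B", "hire")]

print(train(balanced))  # {'A': 'hire', 'B': 'hire'}
print(train(skewed))    # {'A': 'hire', 'B': 'reject'} - same code, skewed data
```

The identical `train` function treats group B differently in the second run purely because the training data did – which is the “garbage in, garbage out” point.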
The Open AI engine has a set of preprepared objectives and when using for example the text interface we are essentially “subclassing” those objectives. I wouldn’t be completely surprised if there is bias built into the objectives we are subclassing – though presumably if there is it will be visible since it is an open project. Though I understand coding, I haven’t looked at this project in particular so can’t say.
Let’s be clear: there is a good argument for some level of built-in protection in grey areas that can be labelled political. Look at Google’s image-labelling ML a few years back, which mistakenly labelled the selfies of a New York-based black man “Gorilla”. He showed it in a tweet with a caption something like “Seriously, Google!” Well done to him for handling it with resigned humour and not anger. You could argue that in pure mathematical pattern-matching terms it was without malice and not an error. But it was also funny, and the extent to which it was funny is the extent to which it overstepped a social bound and was a social error. So the problem is that the perceived need to manage such grey areas represents many, many thin ends of potentially very large wedges. Herein lies the danger.
What happened to AI-driven cars, which were inevitably going to be the future? Did they kill enough people that it became necessary to apply the nonsense to fields where grievous errors have less severe consequences and the intended audience is more credulous?