The U.K. Health Security Agency stopped publishing Covid modelling data this January. However, it took only a month to find alternative work for the out-of-work modellers.
The UKHSA’s Chief Data Scientist, Dr. Nick Watkins, said the country is living with Covid thanks to vaccines and therapeutics, and the data is “no longer necessary”.
The agency is now turning to models to understand the impact of influenza H5N1 – bird flu – if mammalian transmission is established. Models will estimate the prevalence of an outbreak under different surveillance approaches, reasonable worst-case scenarios and the impact of public health measures, including border measures, on containing an outbreak. Hmmm, sounds familiar.
Information from the Government’s Independent Technical report – we wrote about this in December – stresses the importance of modelling.
“Epidemiological modelling has been an important tool throughout the pandemic to interpret data to support understanding the situation, and to provide scenarios to develop awareness of the potential impacts of different options for policy choices.”
But hold on – it’s not just one model; you need multiple models to generate the truth!
“Modelling is considerably more robust when more than one model (ideally a minimum of three) is considered and a consensus is built and agreed across a broad community.”
There you have it: consensus is all you need. Evidently, it introduces quality assurance and seemingly lowers the risk of “spurious results”.
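For what it’s worth, here is a minimal sketch of what a three-model “consensus” amounts to mechanically. It is purely illustrative: the three toy models, the growth rates and the median rule are all invented for this example and bear no relation to the UKHSA’s actual machinery.

```python
# Purely illustrative: three invented toy projections of the same quantity,
# combined into one headline "consensus" figure.

import statistics

def model_a(day: int) -> float:
    return 100 * 1.05 ** day   # hypothetical fast-growth projection

def model_b(day: int) -> float:
    return 120 * 1.03 ** day   # hypothetical slow-growth projection

def model_c(day: int) -> float:
    return 80 * 1.08 ** day    # hypothetical pessimistic projection

def consensus(day: int) -> float:
    # One common convention: take the median, so a single outlier model
    # cannot drag the headline figure around.
    return statistics.median([m(day) for m in (model_a, model_b, model_c)])

for day in (0, 7, 14):
    print(day, [round(m(day)) for m in (model_a, model_b, model_c)],
          round(consensus(day)))
```

Note that whatever combination rule is used, the consensus inherits any assumption all three models share: agreement is not accuracy.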
Ah, but there are some limitations, as the Technical report informs us:
“As the Omicron variant emerged in South Africa in November 2021, it was impossible to tell whether its early apparent decreased severity would be replicated in the U.K.”
However, by December 2021, the models were overly pessimistic and far from reality. So on December 15th, we wrote ‘We’re almost certainly overreacting’ in the Telegraph.
The South African data reported fewer patients in intensive care, less severe disease and shorter hospital stays. With some medical knowledge, it wasn’t difficult to determine whether the South African data would be applicable elsewhere. Seemingly, the U.K. modellers were the only people who couldn’t understand the generalisability of the data.

The technical report also says that “data will always be lacking in the early phases of an epidemic or wave with a new variant, and this, in particular, was a significant limitation for epidemiological modelling early in the pandemic”.
Despite these limitations, Government machinery relies on models and is fixated on their use – the gleam of prophetic insight afforded by fortune-telling is irresistible to those in power.
But those in the know will recall the track record of modellers’ mistakes and their erroneous predictions: BSE in 2000 and a worst-case scenario of 136,000 vCJD deaths; foot-and-mouth disease in 2001, which saw millions of animals slaughtered based on modelling; 2005 bird flu models that led to the stockpiling of antivirals; and then, in 2009, swine flu models that predicted a worst-case scenario of 65,000 deaths – the actual death toll was fewer than 500 in the U.K., with an infection fatality ratio one-tenth of that forecast.
However, if the Health Security Agency is modelling bird flu, perhaps it should start by revisiting the 2005 models. In 2005, Neil Ferguson told the Guardian that “up to 200 million people could be killed”. The Department of Health considered that anywhere up to 700,000 deaths could occur in the U.K.

As science is cumulative, scientists should accumulate the science and keep non-science out. Therefore, in our next post, we’ll turn to those 2005 bird flu predictions to analyse what they foretold.
Dr. Carl Heneghan is the Oxford Professor of Evidence Based Medicine and Dr. Tom Jefferson is an epidemiologist based in Rome who works with Professor Heneghan on the Cochrane Collaboration. This article was first published on their Substack blog, Trust The Evidence, which you can subscribe to here.
Stop Press: Britain should be stockpiling more antivirals and PPE for bird flu as it is “essential” to start preparing for the worst-case scenario, according to Professor Peter Openshaw, who sat on two SAGE committees during the pandemic.
It’s deja vu all over again.
In order to arrive at an accurate prediction of the future, we’ll consult astrologers, haruspexes and augurs for input and then pick and choose from their predictions in whatever way we want until a consensus on what the future will be has emerged!
In modern pseudo-tech speak: “Modelling is considerably more robust when more than one model (ideally a minimum of three) is considered and a consensus is built and agreed across a broad community.”
When trying to predict the future in three different ways leads to three different results, at least two of them must be wrong. As nobody knows which two are wrong, the only sensible course of action is to discard all three as there’s no point in basing decisions on information that’s at least 2/3 wrong.
Sometimes, I yearn for the times when superstitious people trying to influence others with their gobbledygook ended up being burnt by the church.
The climate change liars carefully selected 187 models predicting ‘global warming’ versus CO2 emissions from the many which came out of their modelling apparatus.
They were all wrong by some margin when compared with observations from the real world.
What I find puzzling is that no matter how many times the ‘experts’ and their predictions are wrong, the nitwits in charge and other useful idiots still believe whatever else they come up with.
I think what you mean, JXB, is that the nitwits in charge are perfectly happy to have a set of results they can use to fill their pockets, for absolutely no benefit to the citizens.
If there are at least 187 substantially different ways to do the same, namely, simulate the weather in the future, this clearly communicates that the people who came up with all of these have absolutely no clue how what they’re trying to simulate actually works.
“They were all wrong by some margin” is nothing but a pseudo-learnt-sounding way of saying “They’re all wrong.”
In fairness, one set of model results produced a curve below all the others, although still generally higher than recorded temperatures (themselves manipulated to exaggerate warming).
Which set of results, then, has been least infected by GangGreen lunacy? Why, those from a big country with lots of coal, oil and gas to sell.
The Russian model. Go figure.
“When trying to predict the future in three different ways leads to three different results, at least two of them must be wrong.”
Why aren’t all three wrong?
It’s perfectly possible, and actually even very probable, that all three are wrong. But that’s a tangential point when trying to highlight the complete idiocy of this statement: when three different computer programs generate three different answers to the same question, two of them must necessarily be wrong. Hence, using all three to come up with a meta-result which is then used for actual decision-making has – at the very best – garbled the one accurate result.
That’s a classic example of throwing sand into people’s eyes by making something more complicated, even though this exact process guarantees that the combined output won’t make any sense.
And if all three are wrong?
“All models are wrong, but some are useful.” – George Box
Exactly so
Mathematical models are easily demolished by one inconvenient problem.
If mathematical models had any predictive power then some bright spark would have applied them to the stock market and he or she would have quickly become the richest person on planet Earth.
The fact that this has not been done tells you everything you need to know about the ‘power’ of these models.
As I already wrote last time: people are applying this to the stock market all the time and fool themselves into believing that the outputs make sense, even though providers are nowadays forced to include warnings to the contrary when advertising products like investment funds.
Have you not heard of Long-Term Capital Management with its Nobel prize winners?
Meanwhile… I like this graph of Joel Smalley’s. It looks nothing whatsoever like lockdown-fanboy Ferguson’s predictions either:
“In England & Wales during spring 2020, over a thirteen-week period, there were almost:
52,000 ‘COVID’ deaths
Mainstream Media reported each and every death daily as justification for the most illegitimate denial of British civilian liberties in history.
In England & Wales during winter 2021, after the introduction of a miracle “vaccine” and after such a massive depletion of the vulnerable/susceptible population, over a thirteen-week period, there were almost:
70,000 ‘COVID’ deaths
In winter 2023, with conspicuously few ‘COVID’ deaths, over a thirteen-week period, there were almost:
55,000 ‘winter’ excess deaths”
https://metatron.substack.com/p/all-deaths-are-equal-but-some-deaths
And the true number is about 10% of that, mostly people who were going to die anyway.
Non-deterministic models are as useful as tits on a bull.
Arguably if the variations are not significant then it’s not a huge issue. The bigger issue, IMO, is that the models are not re-checked against reality and revised/ditched when it turns out that they are rubbish at predicting what actually happens.
“Arguably if the variations are not significant then it’s not a huge issue.”
I think you’re misunderstanding the term. Non-deterministic means the output cannot be predicted from the input, i.e., that this is essentially a random process driven by its inputs in an unknown (and unknowable) way. This means one might as well resort to guessing or rolling dice to generate outputs.
But you can keep running them until the “required results” are delivered.
I know what deterministic means. My point was that if the result each time is within a small margin of the other results, the randomness is arguably not that important. I just think that it’s not the most glaring problem with modelling – the bigger issues are not sufficiently questioning the assumptions around the inputs and most damningly not revising the model based on its abject failure to predict anything.
“I know what deterministic means. My point was that if the result each time is within a small margin of the other results, the randomness is arguably not that important.”
That’s Ferguson’s point: run it multiple times and average the results; it’s meant to be stochastic, anyway! And this doesn’t make any sense, because non-deterministic means the output cannot be predicted from the input. This is literally the equivalent of rolling three dice to ‘predict’ an unknown number between 3 and 18 and then boldly claiming that this is a valid method for predicting numbers because “the average error is only about 1/3!”
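To make the dice analogy concrete, here is a minimal sketch (my own illustration, not anyone’s actual model): averaging many random runs converges on the mean of the dice process, not on the unknown number, so the averaged ‘prediction’ tells you about the model, not about the world.

```python
import random

random.seed(1)

# The unknown quantity we claim to be predicting.
target = random.randint(3, 18)

# 1,000 runs of the "model": each run is the sum of three dice.
runs = [sum(random.randint(1, 6) for _ in range(3)) for _ in range(1000)]

# The average converges on 10.5 (the mean of three dice) whatever the target.
mean_prediction = sum(runs) / len(runs)
print(f"target={target}, averaged prediction={mean_prediction:.1f}, "
      f"error={abs(mean_prediction - target):.1f}")
```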
Let’s not forget that one thing modellers can do well is buy bigger, much more expensive computers to run their fanciful programmes (based on multiplying wild-assed guesses together and applying lots of tweaks and fudges).
These huge increasingly fancy and expensive supercomputers, paid for by you, dear readers, and using more and more fossil fuels to run ’em, have been very effective in getting completely false results much faster.
The Met Office loves them.
They are perhaps the reason why the forecast for tomorrow is doubtful and that for four days ahead is almost certainly wrong. Selwyn Froggatt with his pinecone and piece of seaweed did better.
The standard of the computer is irrelevant when the code is written by rank amateurs.
https://www.telegraph.co.uk/technology/2020/05/16/coding-led-lockdown-totally-unreliable-buggy-mess-say-experts/
GIGO
I find stochastic models very useful. After all, they should contain all your deterministic models, each with its own probability of occurring.
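A toy sketch of what I mean (illustrative only; the exponential-growth model and the distribution of rates are invented): a stochastic model as a probability-weighted bag of deterministic models.

```python
import random

random.seed(0)

def deterministic_run(rate: float, days: int, start: float = 100.0) -> float:
    # One deterministic model: fixed-rate exponential growth.
    return start * (1 + rate) ** days

# The stochastic model: draw the growth rate at random (distribution invented),
# so each draw selects one deterministic model with some probability.
outcomes = sorted(deterministic_run(random.uniform(0.01, 0.10), days=30)
                  for _ in range(10_000))

# The spread of outcomes summarises all the deterministic models at once.
print(f"median={outcomes[5000]:.0f}, "
      f"5th percentile={outcomes[500]:.0f}, "
      f"95th percentile={outcomes[9500]:.0f}")
```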
“…the impact of public health measures, including border measures,..”
I don’t think birds can read.
Ha ha. Good point. Well, they’ll keep out the birds that travel by lorry, plane and ship. Any measure that reduces the opportunities for the virus to spread has got to be worth it. Isn’t that what we’re told by public health ‘experts’?
Can someone please permanently unplug Ferguson’s computer.
I suggest unplugging the guy himself. He’s probably just an AI chatbot.
That doesn’t seem to be the attraction for his married ‘friend’. They don’t call him Professor Pantsdown for nothing.
Next time the Guvmint wants a ‘projection’ from him, it should be strictly on the ‘cock on the block’ basis. Guvmint with (for once) hatchet in hand.
IMHO, the attraction for his married ‘friend’ is that he’s one of her business contacts. Practically, the difference between an incel and a professor of something having a fulfilling albeit somewhat limited private life is one of spending power.
:->
[This is a conjecture based on absolutely no real information save my general distrust in people.]
Neil Ferguson – paid to lie
Stand in the Park
Make friends & keep sane
Sundays 10.30am to 11.30am
Elms Field
near Everyman Cinema & play area
Wokingham RG40 2FE
Hi LS, just one question: why 10:30? Ours starts at 10:00. I’m always getting flak for being late!!!
If computer modelling had been used in Salem to determine who was a witch, I’m pretty sure all the women and half the men in town would have been burnt at the stake.
Plagiarising J.K. Galbraith – there are only two types of computer modellers: those who are wrong and those who don’t know they are wrong.
That’s why they should follow my “suggestion” above.
Exactly
Two words for this lot of fear-mongering clowns….
…. OFF!!!!
As a retired dairy farmer, I find it incomprehensible that this numbskull Ferguson should ever be believed, given what he cost dairy farmers in anxiety and reputation with BSE, and the deaths of hundreds of thousands of perfectly healthy cattle in the F&M debacle. This was followed by the Bird Flu and Swine Fever scares, believed by Bliar, then Brown, and five more PMs since; it beggars belief that Governments are so stupid.
One has to wonder why Governments believed Imperial College predictions over those of Oxford University and people like Prof. Sunetra Gupta, who had been the forerunners of the unit that was created at Imperial College.
As soon as I knew, in the third week of March 2020, that Ferguson was involved, I emailed my Tory MP and gave him lengthy reasons as to why this was a mistake and would be costly and wrong. The base figure Ferguson used was wrong, which means the outcomes were extrapolated to a ridiculous degree – which is what happened in all his previous predictions, and sure enough, it happened again.
This man ought to be charged with fraud, though treachery would be a better charge, which he so richly deserves.