You may recall that we undertook to review the 100 models forming the backbone of the UKHSA’s latest offering: the mapping review of available evidence. Remember, UKHSA did not extract or appraise the evidence, as it does not have the resources. This drew expressions of mirth among our readers. We agree it’s a bad joke – a very bad one – considering this ‘evidence’ is what the UKHSA states justified restrictions that led to stories such as Pippa Merrick’s, which unfortunately are not the exception. Earlier versions of the justification were a bad joke, too. What follows is no better.
Diligently, as promised, we downloaded the 100 papers defined as “models” by UKHSA (please do not ask Hugo Keith KC what is meant by that term).
Of each, we are asking the following questions:
- What is the non-pharmaceutical intervention (NPI) being assessed (e.g. is it an NPI, and is it defined and described?) and in what setting? (e.g. community, hospital, homes etc.)
- What is the source for the effect estimate? (to model its effects, you need a source of data, i.e., what does it do?)
- What is the size of the effect? (such as risk reduction of SARS-CoV-2 infection)
- What is the case definition? (how did they define a case of COVID-19?)
Straightforward, we thought.
Anything but, we are finding out.
First of all, the papers are full of jargon, as they are mainly written by mathematicians, or at least that is what their authors say they are. Secondly, most of them come to the same conclusion: lockdown harder, do as I say, or you or your auntie (or both of you) will die.
The most disconcerting answers we are getting are those to the second question: what is the source for the effect estimate?
In a classical model, you start by describing the problem: in this case, the number of cases and complications in a population, transmission patterns and perhaps the age breakdown. If the second part is about how to stop or slow down the spread, hospitalisations, deaths and so on, then to model the ‘how to’ in a credible way you need facts about what the intervention you are modelling (say, distancing) is supposed to achieve: introduced in this or that setting, it is likely to reduce the risk of infection by Z%. The numerical estimate for Z should be surrounded by a range of probabilities (confidence intervals), giving the boundaries X and Y within which the observed reduction in SARS-CoV-2 infection is likely to lie around your point estimate of Z. So you then take Z and stick it in your model to see what effect Z would have, and then you can use X and Y to play ‘what if’.
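By way of illustration only, here is a minimal sketch (with entirely invented numbers, not taken from any of the 100 papers) of what plugging a point estimate Z and its bounds X and Y into a calculation and playing ‘what if’ amounts to:

```python
# A minimal sketch, not any specific UKHSA model: take a point estimate Z for
# the risk reduction from an NPI, plus its confidence bounds X and Y, and see
# what each implies for a baseline infection risk. All numbers are illustrative.

baseline_infection_risk = 0.10   # assumed 10% risk over the period (invented)
Z = 0.25                         # point estimate: 25% relative risk reduction (invented)
X, Y = 0.10, 0.40                # assumed lower/upper confidence bounds around Z

for label, reduction in [("lower bound (X)", X), ("point estimate (Z)", Z), ("upper bound (Y)", Y)]:
    risk_with_npi = baseline_infection_risk * (1 - reduction)
    print(f"{label}: residual infection risk = {risk_with_npi:.3f}")
```

Without a credible source for Z, X and Y, the ‘what if’ exercise is arithmetic on guesses.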
The crucial word is ‘credible’ because these models (are they projections, scenarios, predictions, or scenarios upon which predictions can be projected – ask Hugo Keith KC for a simple answer) have been used to change people’s lives. Or maybe some of them were retrofits to justify something already done by the Robert Maxwell school of ethics.
Credible would mean an estimate from one or preferably more well-designed studies with a protocol and clear case definitions. As the focus is the U.K., the data should come from the U.K. or at least a similar setting.
Well, here is an example of the sources of ‘parameters’ used in one quite well-publicised model:
Of the 11 assumptions underlying the model, eight are unsourced; one comes from a systematic review without an infectious case definition, one from an economic model and one from a case-control study.
Extraordinary, you will say: this seems to be the universal method known as BOPSAT (a Bunch Of People Sitting Around a Table). Yes, it is, except that the model, in fact, was about mass community testing for SARS-CoV-2 by lateral flow devices (LFDs) with not a shred of non-pharmaceutical interventions in sight. LFDs are tests, not interventions that can slow or stop the spread of anything.
And these are some of the minor problems we face, so it takes time. Perhaps we should ask Mr. Keith for help?
Dr. Carl Heneghan is the Oxford Professor of Evidence Based Medicine and Dr. Tom Jefferson is an epidemiologist based in Rome who works with Professor Heneghan on the Cochrane Collaboration. This article was first published on their Substack, Trust The Evidence, which you can subscribe to here.
Society is being ruled (ruined) by geeks who crunch numbers without a shred of common sense as to whether or not their interpretation is valid. While science is being debased by garbage-in, garbage-out computer modelling, the politicians lap it up because they can say their policy decisions are based on ‘the science’. Whether it is climate change or viral pandemics, as long as science is controlled by vested interests it matters not a jot that credible scientists don’t get to express their opinion.
A more realistic interpretation of “modelling” is that those asked to produce the models are given very strong nods and winks, along with appropriate brown envelopes, indicating the sort of results that are required.
Scientism. The dogma of the Technocrat. Relationship with the real world? None.
I think you’ll find that a lot of the epidemiological models are merely Garbage Out. At least I’ve read a lot of papers that are.
Casinos make money because they win 51% of the time and the punters win 49% of the time. Over long periods they clean up because of this simple equation.
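To put rough numbers on that 51/49 split (the commenter’s figures, used here purely for illustration, not real casino odds), a 2% edge per bet compounded over a large turnover is a reliable profit:

```python
# A rough sketch of the arithmetic above: a 51%/49% split is a 2% edge per
# even-money bet, and the expected take scales with turnover.
# The 51/49 figures are illustrative, not actual casino odds.

p_house = 0.51
stake = 1.0
n_bets = 100_000

edge_per_bet = p_house - (1 - p_house)            # 2% of the stake, on average
expected_house_profit = n_bets * stake * edge_per_bet
print(f"expected house profit over {n_bets:,} bets: {expected_house_profit:,.0f} stakes")
```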
If computer models were even slightly accurate at predicting the future, some bright spark would have applied them to the biggest and potentially most lucrative casino of all, the Stock Market, and become the richest person on earth.
The fact that they haven’t done this tells you all you need to know about computer models.
Whilst your example is interesting to consider, you are not considering a “real or proper” computer model. In many areas of Engineering we use computer models, for example to design a bridge. We need to calculate the forces in many items of the structure under many conditions of loading. The answer must be tolerably accurate because failure would be disastrous, and very expensive.
To make the model we use several inputs: basic mechanics, material strengths, expected loads, etc. Once modelled, we build the structure and then fix sensors to all the parts which have been calculated and measure the actual values. Then, if they are not exact, we go back and adjust the model until they are. The adjustments are not arbitrary constants to make it right; they must be based on the inputs only. A few rounds of this on various projects and you have a trusted model, which is known to be accurate. Actually the most accurate models are probably for electronics design, but they are highly technical, hence the simple description above.
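A toy illustration of that calibration loop (a hypothetical, much-simplified ‘bridge’ with invented numbers and a single stiffness input standing in for the many real ones):

```python
# A toy sketch of the feedback loop described above: the model's physical input
# (here a single stiffness value) is adjusted until the prediction matches the
# sensor measurement, rather than bolting on arbitrary correction constants.
# The 'bridge' and all its numbers are invented for illustration.

def predicted_deflection(load_kN: float, stiffness: float) -> float:
    # simplistic linear model: deflection proportional to load / stiffness
    return load_kN / stiffness

measured = 0.0125          # metres, from a sensor on the built structure (invented)
load_kN = 250.0
stiffness = 18_000.0       # initial design assumption (invented)

for _ in range(50):        # compare prediction with reality, nudge the physical input
    error = predicted_deflection(load_kN, stiffness) - measured
    if abs(error) < 1e-6:
        break
    stiffness *= 1 + (error / measured) * 0.5   # adjust the input, not a fudge factor

print(f"calibrated stiffness ≈ {stiffness:.0f}")
```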
Models of the kind you describe are not built like this; they are largely guesswork, there is never any “go back and adjust the model” to give correct answers, and they are full of arbitrary constants, known as guesses.
The idiot man from Imperial College seems to think he is a modeller. He has used his models to guide the Government for many years, and they have ALL (note that) been wildly wrong in prediction. In other words, he hasn’t a clue about disease propagation, be it Covid, BSE, foot and mouth, and the rest. He also doesn’t even understand how to model something, but that is to be expected with an “expert”.
Sadly, The Real Engineer, as attractive as it may seem, an engineering approach is a specific, extremely narrow application of reductionist science which cannot work outside the tight confines of engineering.
I explained why here and here.
Reductionism always fails in the face of complexity.
Imagine what would happen to your bridge design if you could not test it thoroughly before putting it to use.
And look at what happened to the ‘Blade of Light’ footbridge in London connecting St Paul’s and the north side of the Thames to the Tate Modern on the south side. The oscillations caused by footfall and resonance were completely unexpected despite all the design and testing and ‘feedback loops’.
Engineering is a limited tool and its lessons cannot be applied universally to overcome complexity in all its manifestations.
In the context of “real computer models”, I presume you are aware of the fact that it is impossible to prove that anything but the most simplistic of software applications will work as intended?
So the software you use to prove your designs cannot itself be proven to be entirely reliable.
In software engineering, software – particularly complex software – is constantly being maintained to eliminate deviant performance – ‘bugs’ – as they are discovered, first in testing and then later during live use.
Let us hope that no bridge design will ever fail because of the computer software used to model it.
But as your examples demonstrate, even the bridge in the modern day is not put into live use without some form of monitoring and feedback to ensure it is operating within its expected parameters of operation.
I assume you know nothing of Engineering, or software. Taking software first, one only gets software that has bugs because it has not been tested properly. This is not eased by poor computer coding languages like C or C++, which have no useful type checking, leading to stupid errors. However, programmers like them because a variable can be an integer on one line and, say, text on the next. Software testing, in my experience of 40 years, takes at least 10 times as long as writing the code. Thus no one can be bothered! ‘Let the customers find the bugs’ is widely the mantra.
My bridge example was deliberately much simplified, and your understanding is therefore misled. The point of structural design is that we have been doing it since Brunel, and understand every first principle of the job. We do not monitor every design for compliance with the model because we have vast experience of the modelling and it is essentially completely correct.
Now to confines and reductionism. If one wishes to produce a predictive model, it is essential that the situation to be modelled is completely defined from a scientific point of view. I suggest that your interest may be climate models or weather models. These processes cannot be treated as reductionist candidates for a number of reasons, and the model failures are obvious. These modellers take a very complex, chaotic situation and start from the wrong place, not the underlying science. They assume that CO2 controls temperature, but we have much solid evidence that it does not. They assume that there are positive feedback mechanisms operating, with no proof or even the slightest evidence. They choose arbitrary values for these feedbacks (there is no useful data available) to produce desired outputs, and then wonder why the predictive value of the models has never been remotely correct. There is a Russian model which is better in this respect, but it has little or nothing in the way of positive feedbacks! Bad modelling is simple; proper modelling is difficult and time consuming. One works, the other will always be wrong. Simple enough?
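To illustrate the point about assumed feedbacks (with arbitrary numbers, not taken from any actual climate model): in the standard amplification formula, response = direct response / (1 - f), the output is driven almost entirely by the feedback value f you choose.

```python
# A hedged illustration of how sensitive an output is to an assumed feedback
# parameter f. The direct response and the f values below are arbitrary,
# chosen only to show that the choice of f drives the result.

direct_response = 1.0   # direct, no-feedback response (arbitrary units)

for f in [0.0, 0.3, 0.5, 0.7, 0.9]:
    amplified = direct_response / (1 - f)
    print(f"assumed feedback f = {f:.1f} -> amplified response = {amplified:.1f}")
```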
“I assume you know nothing of Engineering, or software. Taking software first, one only gets software that has bugs because it has not been tested properly.”
Sadly you are wrong. When I taught Master’s students in Software Engineering, program proving was problematic and it remains so. It is impossible to prove that any software will work as it was designed to.
It is similarly impossible to thoroughly test most software. That is why even the software people use on their computers and on their phones is constantly being updated to correct ‘bugs’.
If you knew anything about software you would know that and I would not be wasting my time now educating you.
Sadly your understanding of software, science and engineering is blinkered at best and the views you express are sadly wrong.
But who am I to tell you? And despite my doing so, I fear it will not make any difference.
Would the treatment of patients be better if computers did not exist?
Discuss.
No codes, no programmes, no models, no algorithms, no instant and universal protocols, medical bureaucrats having to hard graft, no BLAST sequencing, no models of “virus”, no genetic engineers – and no mRNA jabs.
On the other side, hard graft in understanding more about how and why the immune system works, how “disease” actually spreads and which treatments work and doctors being able to use their own initiative without being beholden to health management consultants.
The Inquiry has been captured and is beholden to the computer which results in the mantra of “things would have been better if we’d locked down sooner and harder.”
The next manufactured pandemic = interesting times ahead.
Basic modelling says that flattening the curve results in a longer epidemic with lower acquired immunity so when you lift measures you just get another wave.
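A minimal SIR sketch of that point (arbitrary, illustrative parameters, not any official model): suppress transmission for a while, lift the measure while most people are still susceptible, and a second wave comes through.

```python
# A minimal discrete SIR sketch: transmission is cut by 60% between day 30 and
# day 150 and then restored. Because fewer people acquired immunity during the
# measures, a larger wave follows once they are lifted. Parameters are invented.

N = 1_000_000
S, I, R = N - 10.0, 10.0, 0.0
beta, gamma = 0.30, 0.10                          # baseline transmission and recovery rates (per day)

peak_during, peak_after = 0.0, 0.0
for day in range(600):
    b = beta * 0.4 if 30 <= day < 150 else beta   # 'measures' in force from day 30 to 150
    new_inf = b * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if day < 150:
        peak_during = max(peak_during, I)
    else:
        peak_after = max(peak_after, I)

print(f"peak infections during measures: {peak_during:,.0f}")
print(f"peak infections after lifting:   {peak_after:,.0f}")   # the second wave
```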
‘Dear patient,
Due to the rise in COVID cases in our area. We must encourage you to wear a mask when coming to the surgery. Help us to protect all of us’
That’s a text message I received yesterday.
All this bloody nonsense is starting again, they haven’t learned a damn thing.
Would you want to see a ‘doctor’ who is that scientifically illiterate?
All the guff like masks comes from the DoH. They are clueless and should be sacked. We cannot afford them, and they have all the power over Practice Managers, who simply parrot the circulars for fear of sanctions for not doing so.
There is a fundamental scientific reason why modelling does not work.
Complexity.
Classical reductionist science fails when confronted by complexity.
Classical reductionism is exemplified by the classical “The” scientific method – testing x against y whilst [theoretically] holding all other variables invariant throughout the testing.
The “The” scientific method fails abysmally when all other variables cannot be held invariant.
It has only ever worked in specific cases in the “exact” sciences like inorganic chemistry and in some branches of physics.
The problems complexity brings are manifold.
It is impossible to test one’s way to an established theory, because the results will include some which do not conform to the theory even as they include some which do – this is typical in the biological sciences, where living organisms commonly have many variables which cannot be identified, let alone controlled and held invariant.
This means most scientific theories are invalidated [falsified] by the aberrant results occurring alongside the conforming results – and it only takes one: the one white raven or the one black swan.
Another problem is that it is impossible to predict outcomes. One can at best say what the probability of a particular outcome might be, but not predict what the outcome will be.
This is a consequence of the probabilistic nature of most scientific theories [ie. those outside the theories taken to be established in the exact sciences].
It is only possible to forecast and that is based on probability.
However, forecasting has a major problem. It is the ‘prediction horizon’. For time sensitive forecasts – which are most of the ones we are interested in – the forecast becomes in effect exponentially unreliable with time.
The typical example is a weather forecast. We can only look so far ahead before forecasts become so unreliable as to be useless.
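A toy illustration of that prediction horizon (the logistic map standing in for the weather; it is a mathematical toy, not a meteorological model): two almost identical starting states diverge until the ‘forecast’ is worthless.

```python
# A small illustration of the 'prediction horizon': in a chaotic system, even a
# tiny error in the starting conditions grows until the forecast is useless.
# The logistic map below is a stand-in, not a weather model.

r = 3.9                      # parameter in the chaotic regime
x, y = 0.500000, 0.500001    # two almost identical starting states

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference between forecasts = {abs(x - y):.6f}")
```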
And there is worse to come.
Professor Philip Tetlock’s 20 years of research proves that expert forecasts are less reliable than forecasts based on the outcomes achieved by dart-throwing chimpanzees.
So we cannot rely on experts for their opinions about what is going to happen. As Professor Daniel Kahneman demonstrated, we are lulled into a false sense of security by experts because the media wheel them out to explain why events which have already happened did happen. The explanations are so convincing that we are led to think the experts can then tell us what is going to happen, when they are proven to be the worst people to ask.
Some people, Tetlock has proved, have great skill in making forecasts, but none of them are experts in the specialist fields in which they make their forecasts.
JK Galbraith, an economist of some repute, said there are only two types of forecasters: those who were wrong and those who didn’t know they were wrong.
Another, previous example of the dangers of modelling was the financial crisis of 2007/8.
How on earth can the mental incompetents who are supposedly in charge of the country be unaware of these recent examples of the uselessness of modelling in general – and, in Ferguson’s case, of the gross errors in his previous utterings – and still choose such a useless tool?
Never mind: if it all goes wrong we’ll only screw up the economy and f**k up the education of a generation of children.
Iconoclast, please read my post on models. Whilst you are correct to talk about these “social” models, Engineering ones are different and you depend on them a lot. Your computer in front of you was designed using models of unbelievable complexity to make the chips, the PCB etc. It is not complexity that is the problem in a general way; it is that the models (particularly climate models, LOL) are not adjusted or written to produce accurate results via a feedback loop from reality. Simple!
Hi,
It might help if you review what I wrote in my immediately prior comment.
I am not addressing “social models”. I am addressing every branch of the sciences in which classical reductionism does not work and indeed, cannot work.
Forecasting weather is a science and not a “social model”. It is also not like a human manufactured complex machine. The biological sciences are not “social models”. These are sciences in which the variables cannot be controlled to carry out reductionist experiments.
Drug trials are a typical example. They are carried out on living organisms. No two are the same. [A possible exception is the artificially maintained and cultured single ‘pure’ cell line, which is never found in nature]. Every ‘test’ in the trial is on one example from a heterogeneous population – no two examples of which are identical.
In contrast, classical reductionism relies upon homogeneity.
A motor car engine is complex. An aircraft is complex. But your feedback loops are from tests carried out on identical examples. Each example is manufactured to tight quality control standards to ensure each machine conforms as closely to manufacturing specifications as modern science and engineering can ensure.
The testing carried out on such machines identifies problems and deviations from expected performance which results in modifications to the design of the machine to eliminate or control and prevent those problems and deviations.
The constructors of such machines learn what the operating tolerances need to be to ensure the machines are not driven into operating conditions which result in chaotic performance leading to operating failures. So operating tolerances are set.
The chips in a computer have for example temperature and humidity tolerances which if exceeded can result in deviant operating performance.
This is also an example of the failure of the use of scientific theories alone to predict performance of complex machines.
Testing is nearly always essential to get what you call “feedback” so that the design can be modified and the operating conditions of the complex machine established.
Only after design, build and repeated testing is the final design put into manufacturing and then machines are built which are identical to within the manufacturing tolerances.
Even after a design goes into manufacture individual components may continue to be altered with the experience of the performance of the machines which have previously been sold and used.
So your “feedback loops” in engineering are part of a process of testing near identical homogeneous ‘models’ of a particular design.
That is the reductionist approach and it only works if it is possible to ensure each machine tested is built to a known design thereby controlling all the variables. Establishing the operating conditions by testing and feedback ensures uniform performance of each model of each machine built to that design.
However, it is not an error-proof process. Production models of aircraft or any other machine can still fail during live use in real life.
So the consequences of complexity are still not eliminated even when each machine is as closely identical to every other one as is possible.
Why do the batteries of some EVs burst into flames and not others?
So your “feedback loops” in design and manufacture are a specific, limited example of artificially keeping all variables as invariant as human ingenuity is able to, whereas there are many inexact sciences in which none of that is possible.
I have not touched upon quantum mechanical effects in the foregoing explanation. If one holds to the probabilistic versions of quantum theory [which are the dominant ones at present], then probabilistic effects cannot be entirely excluded. There is always a probability of the unexpected – perhaps that is why some EV batteries burst into flame?
Another excellent skewering of the Rona approved narrative by Messrs Heneghan and Jefferson. Long may you continue to pull aside the veil of secrecy. The line about the ‘Robert Maxwell School of Ethics’ sums it up perfectly.