A reader who describes himself as a “normal person” has tried to make sense of Imperial College’s notorious March 16th paper. He doesn’t have much luck. Imperial College really needs to be a bit more transparent about the assumptions it used in its model and how it reached the figures of 510,000 dead if we “do nothing” and 250,000 dead if we had stuck with mitigation. How can voters make up their own minds about whether the Government was right to lock down the country unless “the science” is set out in a way that lay people can understand?
Have you read any of Imperial College’s papers about COVID-19? Probably not. Nor had I. But we’ve all heard about them in the news. I decided to sit down and read the one that contained the advice to lock down the UK.
I’ve written this from the perspective of a “normal person”. I’m not a professional statistician, though I know a bit about it. Nor do I write computer software. In fact I’m a professional historian, which means that above all else I ask questions. I also worked in secondary education for a decade, where I was continually subjected to predictive modelling that was always wrong and always based on a vast number of assumptions that ignored reality. I became used to dismantling what we were presented with, and what I saw many of my colleagues accepting at face value. I wasn’t satisfied with just accepting Imperial College’s modelling for COVID-19. I wanted to understand it. In particular, I wanted to know why they had predicted 510,000 deaths in the UK from COVID-19 and recommended the lockdown we are now stuck in. What I found myself doing was sinking into a quagmire of assumptions, one piled on top of another, and figures cited without any coherent explanation. Each stage pushes the predictions one more level away from reality.
This is what I found out.
In Ferguson’s team’s “Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand” (published 16th March 2020), a series of recommendations is made about how practical public health measures could reduce the spread of the disease. I’ll call this paper Ferguson20.1
Like all predictive modelling, it rests on a number of assumptions, stated or implied. Predictive modelling is – in part – based on past observations from which a projection of the future is derived. The practice is widespread in commercial and educational contexts. It is usually expounded in ways that are incomprehensible to most people, and it tends to be narrow in perception and approach.
It’s important to bear in mind two key options considered in Ferguson20: mitigation or suppression of the effects of the virus.
Ferguson20’s prediction is that “optimal mitigation policies”, such as isolation of suspect cases, quarantining of their households and social distancing of the most-at-risk would still result in “hundreds of thousands of deaths”. No figures are supplied at this stage but they appear later, spread over the “2 years of the simulation”:2 approximately 258,000 if the health system was not overwhelmed, a reduction of 49% from 510,000 with no policy interventions.3 The recommendation was therefore to resort to suppression through what we now know as the lockdown, and that that would need to last for “18 months or more”.
Ferguson20 appears to assume from start to finish that:
- The reproduction rate of the disease, R0, is constant at 2.4, which they call their “baseline assumption”. We’ve all heard this number on the news: it refers to the average number of people that one infected person goes on to infect. There’s nothing fixed about an R0 value: it’s estimated on the basis of a number of factors (and assumptions).
- That every human being is equally susceptible to being infected by the disease and transmitting it at the same rate.
- It is also projected that ultimately 81% of the UK population will be infected. R0, together with the average generation time between original and transmitted infections and the proportion of the population that remains susceptible to the virus (which gradually falls), determines the compound daily growth rate of infections.
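A minimal sketch of that last relationship, for anyone who wants to check it. It assumes the simplest textbook case of a fixed generation interval, so that R0 = e^(r·Tg) where r is the daily exponential growth rate; the 6.5-day figure is Report 9’s stated mean generation time, and the R0 values are the range the report considers. It is an illustration only, not the report’s actual individual-based calculation.

```python
import math

# Simplest-case link between R0 and the daily exponential growth rate r,
# assuming a fixed generation interval Tg: R0 = exp(r * Tg)  =>  r = ln(R0) / Tg.
# An illustration only, not Report 9's individual-based simulation.
Tg = 6.5  # mean generation time in days (Report 9's stated value)

for R0 in (2.0, 2.2, 2.4, 2.6):            # the range of R0 values considered
    r = math.log(R0) / Tg                   # compound daily growth rate
    doubling = math.log(2) / r              # days for infections to double
    print(f"R0={R0}: daily growth {math.expm1(r):.1%}, doubling every {doubling:.1f} days")
```

On these simple assumptions, an R0 of 2.4 corresponds to infections growing by roughly 14% a day in the early, fully susceptible phase, doubling about every five days.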
None of these assumptions is actually stated in so many words, but the reader is led to believe they hold, since no alternative is acknowledged beyond saying that “much remains to be understood about its transmission”. Instead, more concern is expressed about how nations and people respond to lockdown measures, including “spontaneous changes in population behaviour”. It is interesting here that blame for the failure of any lockdown is therefore shifted, in advance, onto those obliged to carry it out. This leads to the paper’s conclusion, which is essentially a disclaimer.
As we now know, this ominous warning could also have been usefully applied by the principal author of Ferguson20 to himself.
It is worth adding that the Report did acknowledge the potential social and economic costs, and warned that the most vulnerable cannot be completely protected.
The Assumptions Made
In order to get to 510,000 deaths in the UK, Ferguson20 made the following assumptions:
- The Transmission Model was based on mathematics that created a hypothetical population in which the disease circulates. This was used “to generate a synthetic population of schools” and also something similar for workplaces. This was in part based on previous influenza outbreaks and clearly assumed that everyone within this population was equally liable to infection, with one third occurring in homes, one third occurring in schools and workplaces and one third “in the community”. This has been referred to as the “SimCity model”.
- Ferguson20 assumed that all infected individuals were infectious, and that symptomatic individuals were 50% more infectious than asymptomatic individuals. They also assumed that infected individuals would subsequently be immune “in the short term”. (Later, on page 15, they acknowledge that there are “very large uncertainties around the transmission of this virus” but this caveat does not seem to have affected their calculations or assumptions.)
- Infection was assumed to result in exponential growth every 6.5 days in each country.
- It appears to have been assumed that everyone in each country formed part of an aggregate susceptibility to infection, apart from considerations of geographical separation and of household size. In other words, no account was taken of any other factor such as natural resistance, genetic predisposition, blood group, age, ethnicity, race or the existence of other medical conditions. These could not, of course, have been quantified at the time, but ignoring them does not make them any less relevant to the accuracy of the assumption. None of them is mentioned even as a technical possibility.
- Infectiousness was not distinguished by age, but age was recognized to be a “non-uniform” factor in hospitalization and fatality. Overall, they predicted 4.4% of the infected UK population would be hospitalized, of whom 30% would require critical care and of those 50% would die.
- The disease is implicitly assumed to maintain a constant progression towards near-universal infection with hospitalization and fatality at constant rates in different age bands.
- The disease is also assumed to remain of constant potency and impact.
As we can see, therefore, Ferguson20’s advice was predicated on a wholly artificial depiction of disease circulating in a population on a purely mathematical basis. It could not possibly take into account the plethora of actual factors that would determine the true course. The result was their prediction of impending catastrophe, which seems to have been founded on the assumption that the great bulk of the population would be infected, with two-thirds of those infected being symptomatic. This was in spite of the fact that we know diseases do not have a universally similar impact on the population. This has become painfully apparent with the latest revelations that black Britons are dying at a rate which is twice that of white Britons.
A key headline figure in Ferguson20 is the prediction of 510,000 deaths based on the R0 figure of 2.4. This is, in fact, only one of several predicted “do nothing” death totals for the UK, derived from different R0 figures (a range of 2.0–2.6). I was puzzled by the lack of a clear explanation for how 510,000 had been arrived at, since this was a key news-grabbing figure when the whole crisis erupted and the lockdown started. Preventing 500,000-odd deaths was a driving force behind the government’s decision to enforce a national lockdown.
It’s important to add here that the projected 510,000 deaths does not take into account the possibility – even probability – that some of that group would have died during the two-year period from other causes. Of course we now know that underlying health conditions are playing a large part in mortality, with the confusing blurring of causes of death being recorded either as “from” or “with” COVID-19. Professor Ferguson is on record as conceding more recently that this could have applied to as many as two-thirds of the victims within 2020 alone.
[Note: The “2 years of the simulation” is important. The average death rate in the UK is about 9.4 per thousand, or around 625,000 per annum. Over two years, therefore, around 1.25 million will die as a matter of course. It will not be until at least two years have passed that we will know how many deaths from or with COVID-19 will amount to an increase over the deaths that would have been expected anyway, whether that is the 510,000 deaths from Ferguson20’s projection or the actual number.]

The basis for projecting 510,000 deaths is what has already been discussed on this site. Ferguson20 used a “stochastic, spatially structured individual based simulation”, explained in a 2005 paper by Neil Ferguson and others based on an influenza outbreak in SE Asia.4
Although the term “stochastic” has been described on this site as a scientific word for “random”, it’s actually a Greek word the original meaning of which was “being skilled at aiming at something” or, better, “an educated guess”. As an amusing aside, scientists rarely seem to know the Greek origins of the words they use, which are often quite humbling.
Now, I had a read of that paper. It didn’t help me understand the predictions in the 16th March 2020 paper. For a start the method used in 2005 “did not model disease-related mortality” because they were only interested in inhibiting the spread of the SE Asian influenza outbreak rather than limiting deaths.
The answer comes via a route which Ferguson20 simply did not explain. It provides a rounded Infection Fatality Rate (“IFR”) of 0.9% (p.5; the actual IFR figure they worked from seems to have been 0.943%), which yields the 510,000 projected deaths. They also assumed that 81% of the population would be infected because they estimated an R0 of 2.4. This means that 510,000 would die out of 54.11 million (81% of the UK population of 66.8 million) if one uses an IFR of 0.943%. This population figure is for mid-2019 from the ONS, but the real figure is probably slightly higher for 2020.5
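For anyone who wants to check the arithmetic, here it is as a minimal sketch. The 0.943% IFR and 81% attack rate are the figures reconstructed above; the paper itself never sets the sum out.

```python
# Reconstructing the headline figure from the inputs discussed above.
# The 0.943% IFR and 81% attack rate are inferred, not laid out in the paper.
uk_population = 66_800_000   # ONS mid-2019 estimate
attack_rate   = 0.81         # proportion ultimately infected at R0 = 2.4
ifr           = 0.00943      # infection fatality rate apparently used

infected = uk_population * attack_rate   # ~54.1 million
deaths   = infected * ifr                # ~510,000
print(f"{infected/1e6:.2f} million infected, ~{deaths:,.0f} deaths")
```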
No UK population figure appears in Ferguson20 and this calculation is never actually demonstrated either.
The figure of 510,000 is inclusive of those dying outside ICUs but that point is not clearly made. Indeed, it is obscured by the next paragraph proceeding to discuss the overwhelming of ICU capacity.
It’s important to understand that there are two paths through Ferguson20’s figures to projected deaths. One is the number of deaths of those admitted to ICUs. This number is smaller than the headline figure of 510,000 which covers all projected deaths (for example, those that occur in the home). Ferguson20 predicted that 4.4% of infected cases would be hospitalized. This was not based on any method of knowing how many people in total would be infected. Ferguson20 could only assume that up to two-thirds of the cases would be recognizably symptomatic. This is less than the 81 percent they predict actually would be infected.
So, let’s start with the assumption that two-thirds of the UK population (c.44.53 million out of c.66.8 million) is infected. Using ICL’s own percentages, that means 1.96 million being hospitalized at some point, of whom 30% or 588,000 would require intensive care, 50% of whom (294,000) would die.6 Crucially, this only refers to those dying in ICUs. As we have seen, a further 215,000-odd were expected to die outside ICUs.
It’s worth adding that the 4.4% comes from “a subset of cases from China”. The 30% figure appears to have come from a single source (credited as a personal communication). The 50% is attributed to non-referenced “expert clinical opinion”.
However, Ferguson20 proceeded to estimate that without a lockdown 81% of the UK population (54.11 million) would in fact be infected, derived from an assumed R0 of 2.4. Applying the same calculation means that 2.38 million people would be hospitalized, of whom 714,225 would be admitted to ICUs, with 357,112 dying in ICUs.
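The same cascade, laid out as a sketch for both scenarios; the 4.4%, 30% and 50% figures are the report’s own, and the two attack rates are the ones used above.

```python
# Running Report 9's stated percentages over the two attack rates used above:
# 4.4% of infections hospitalised, 30% of those needing critical care,
# and 50% of critical-care patients dying.
uk_population = 66_800_000

def cascade(attack_rate):
    infected     = uk_population * attack_rate
    hospitalised = infected * 0.044
    icu          = hospitalised * 0.30
    icu_deaths   = icu * 0.50
    return hospitalised, icu, icu_deaths

for label, rate in [("two-thirds infected", 2 / 3), ("81% infected", 0.81)]:
    hosp, icu, deaths = cascade(rate)
    print(f"{label}: {hosp:,.0f} hospitalised, {icu:,.0f} in ICU, {deaths:,.0f} ICU deaths")
```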
Ferguson20 did not actually reproduce any of these calculations, so we don’t know what its authors thought the figures would have been. Nor did it cite the UK population size used. And it’s only at this stage that the 510,000 projected deaths first appears, without explanation.7 It’s necessary to go through the process I have outlined above to find how it was calculated.
Now, just to add to the mounting complications, it’s been suggested elsewhere that Ferguson20 got its figures wrong because they had “downscaled” Chinese hospitalization rates with an IFR of 1.23%, leading to the proposition that Ferguson20’s 510,000 should in fact have been 661,402.8
So I read the original paper – which we’ll call Verity20 – that provides the 1.23% IFR.9
And what do I find? The 1.23% is actually a Case Fatality Rate (“CFR”), which is not the same as an IFR. A Case Fatality Rate is measured against known cases of a disease. An IFR includes the CFR but tries to incorporate an allowance for asymptomatic and otherwise undetected infections. Since those are, by definition, unlikely to be fatal, it’s no great surprise that a CFR is larger than an IFR. The Verity20 paper in fact estimates an “overall IFR estimate for China of 0.66%” (p.2). If that were applied to the UK then Ferguson20’s 510,000 prediction would come down to 357,000. But I am straying.
We can therefore bypass the 661,402 and stick with Ferguson’s 510,000 rather than confusing the issue further. But it’s worth bearing in mind the difference between an IFR and a CFR.10
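The distinction is easy to illustrate with made-up round numbers (these are purely illustrative and come from neither paper):

```python
# CFR counts deaths against *detected* cases; IFR counts them against *all*
# infections, detected or not. The numbers below are illustrative only.
deaths           = 100
confirmed_cases  = 8_000     # infections that were actually detected
total_infections = 15_000    # including asymptomatic / untested people

cfr = deaths / confirmed_cases    # 1.25%
ifr = deaths / total_infections   # 0.67%
print(f"CFR = {cfr:.2%}, IFR = {ifr:.2%}")  # IFR < CFR whenever cases are undercounted
```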
Ferguson20 proceeded to estimate the effects of various interventions involving both the nature of the intervention and the extent of compliance.
The Impact of Interventions
A complex range of tables follows, itemizing predictions dependent on the extent of lockdown measures and the length of time for which they are in place.
This section is filled with waffle and caveats. They concluded that epidemic suppression is “the only viable strategy”, through population-wide social distancing and home isolation, together with school and university closure for maximum effect. They rejected mitigation as an option on the basis that it would overwhelm ICUs, leading to “in the order of 250,000 deaths” (this is the 258,000 mentioned elsewhere by them).
They concluded that a range of interventions be imposed in countries able to implement them, and that they would need to be in place for “several months” to prevent a second wave – a timescale that does not seem to have been discussed or mentioned by the government. They model the repeated imposition of a lockdown for two-thirds of the time until the end of 2021 as being necessary,11 at which point this pattern would need to be continued in the absence of vaccination or an effective drug being available at scale.
They believed school and university closure to be more effective than household quarantine. Elsewhere, they state their assumption that children “transmit as much as adults.”
Part of the argument about applying a limited-term lockdown is that Ferguson20 also assumes that until a vaccine arrives in eighteen months or more “these policies will have to be maintained” to prevent “a rebound in transmission”.
In spite of all this, the final conclusion is a truly remarkable one, representing a strange twist away from the whole focus of Ferguson20. It says that:
- “[it] is not at all certain that suppression [of the virus through these measures] will succeed long term”
- “[h]ow populations and societies will respond [to long-term lockdown measures] remains unclear”
In other words, having recommended a course of action based on a litany of assumptions – none of which is necessarily right and much of which is open to debate – Ferguson20’s authors distance themselves from any failure resulting from following their recommendations. “It wasn’t our fault, guv”, I can hear them plead in two years’ time.
Moreover, while the predictions about hospitalization, ICU use and deaths that would have resulted from inaction are not necessarily wrong, they are entirely based on assumptions and are not presented as the result of clear and transparent calculations, so there is no reason to conclude that they are right either. As a layman, and as far as I could see, the entire structure is a minefield of figures derived from a variety of sources, open to confusing and contradictory interpretation, and omitting a vast range of real-world factors that will affect the outcome. Even the software code seems to be gravely flawed.12
Quite how this could ever link to reality I have no idea. Worse, I don’t imagine any politician read Ferguson20 critically.
Am I any the wiser? What left me most concerned is that it would appear that the UK embarked down the lockdown/suppression route based on the advice of a very small group of experts who seem to have little or no confidence that their recommendations will succeed anyway. And when one looks at Sweden one rather wonders why they bothered to make them.
- 1.
- 2. Ferguson20, p.11.
- 3. Tables 3 and A1.
- 4.
- 5.
- 6. The 258,000 provided in Table A1 of Ferguson20 refers to deaths under a mitigation policy, though precisely how it was calculated is not laid out.
- 7. p.7.
- 8.
- 9.
- 10. It’s also worth putting the IFR of 0.943% and the CFR of 1.23% in context. For Ebola, the CFR is as high as 90%. For the Spanish flu of 1918 it was in excess of 2.5%. For poliomyelitis it was as much as 10% in adults but 2–5% in children.
- 11. Figure 4.
- 12.
‘And when one looks at Sweden one rather wonders why they bothered to make them.’
Make that two of us.
Maybe we can hazard a guess, the more outlandish the better?
Here’s mine:
Maybe they aimed to turn ‘Ferguson 20’ into a movie?
‘Carry On Covid!’ starring Professor Pantsdown
https://www.youtube.com/watch?v=mAg1s1ByYfM
We need more of this type of analysis to try and cut through the current hysteria. It is next to impossible to have a discussion questioning the ‘lockdown’ without becoming a target for ridicule and abuse.
So many things were wrong about the U-turn by the government on the weekend of 21st March. How could such a drastic shift in policy be instigated on the strength of one paper? I suspect that the government have been ambushed by civil servants within The Blob, a preemptive strike before any reform of the club.
I’m as sceptical as they come, and think we are probably witnessing this country’s biggest ever public policy disaster. However, I don’t think you’ve quite skewered the right aspects of this debacle.
We’ve got to ask what Imperial’s modelling in the March 16 paper was trying to do. The clue is in the title of the paper: ‘Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand’. In other words to assess how public health measures could reduce the spread of the disease. Not to forecast the death toll.
But to run the model you need to make a range of assumptions, including a measure of the contagiousness of the virus (R0) and its lethality (IFR). If the transmission model suggests that 81% of the population could be infected unmitigated and the assumed IFR is 0.9%, this gives the half million death toll. But it is not a prediction. It is both an input to the model and an output. It was characterised somewhere as a ‘reasonable worst case scenario’ to get away from the idea that it was a prediction. But it wasn’t even that. It was just the result of scrabbling around with a limited amount of data to plug a number into the model to make it run so as to look at the effect of NPIs on the spread of the virus.
So, I don’t think we should blame the model in this respect at least, even if it may have other faults, extensively discussed elsewhere, that might render it unfit for purpose. The chief modeller, Prof. Ferguson, needs to cop for some of the blame, for opening his mouth in public and going well beyond his area of expertise of mathematical biology. That’s not really his fault though. He simply shouldn’t have been allowed out. No, the real culprits in this are those downstream of Prof Ferguson. He may well be the fall-guy in all of this, but he’s just been fitted up.
The problem with this approach is that the range of assumptions that produced the background base case had no justification. But the default scenario had 510,000 deaths, with the epidemic peaking at the end of May. As you say, it was ambiguously an input and an output. In its role as an output, it scared the pants off the Government. But we know from multiple sources that an IFR of 0.9% not only leads to far too many deaths, but also to far too few previously infected individuals in the population per death, meaning that the self-attenuating behaviour of the epidemic is far too muted. The very least that should have been done is to sensitise the output to the IFR. If this had been done, they could have noted whether it would be acceptable to stick with mitigation policies on the assumption that the IFR was twice as high as flu, say, or four times (the result variously seen in parts of China, Italy and the early local breakouts in Germany). It almost beggars belief that a paper which completely neglects discussion of this parameter (and bases itself on choice of an extreme value) can have been used to demolish a team of government advisors who had been basing their recommendations, quite reasonably, on mitigation and herd immunity.
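The kind of sensitivity check being described would be trivial to run. A sketch, holding the 81% attack rate fixed and varying only the IFR; the ‘flu-like’ 0.1% is a rough stand-in used purely for comparison:

```python
# A back-of-envelope version of the IFR sensitivity check described above:
# hold the 81% attack rate fixed and vary only the IFR.
uk_population = 66_800_000
infected = uk_population * 0.81

for label, ifr in [("flu-like", 0.001), ("2x flu", 0.002),
                   ("4x flu", 0.004), ("Report 9", 0.00943)]:
    print(f"IFR {ifr:.2%} ({label}): ~{infected * ifr:,.0f} deaths")
```

Even taking four times a flu-like IFR gives a figure in the low hundreds of thousands rather than half a million, which is exactly why the choice of this one parameter deserved explicit discussion.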
And required a common sense leader of stature to stick to their guns……..
Ah, that’s where I keep coming back to Mrs T…I seriously doubt there is anyone of this calibre left in politics.
The exact sentence in the paper is ‘In total, in an unmitigated epidemic, we would predict approximately 510,000 deaths in GB and 2.2 million in the US’. So while you’re right that it was scrabbling around to make a model run, that didn’t stop them from stating that 510,000 was unequivocally their prediction and that is how it was treated by the government.
Of course it’s his fault. It’s precisely what he has done when any new public scare campaign about a pathogen has been perpetrated in the past:
2001 – he claimed that 130,000 people would develop variant Creutzfeldt-Jakob disease (vCJD). Since that time, there have been a total of 128 deaths globally.
2005 – he claimed that H5N1 (avian flu) was going to rival the 1918 flu:
Actual toll – globally – in the following 15 years: 455
2009 – he claimed that H1N1 (‘swine flu’) could
• infect a third of the world’s population;
• had an IFR of 0.4-1.4%;
• be in line with the 1957 pandemic that killed about 3.5 million people (he caveated that immediately, by claiming that health care was better now).
Actual toll – globally – since then: 18,036
And that’s before we touch on his team’s eager participation in climate hysteria and the apocalyptic forecasts that fail to materialise (and have done since the 1980s).
If one thing comes out of this covid nonsense (apart from the slightly-earlier deaths of some sickly octogenarians), it would be the recognition that people who do modelling for government-funded research are just as beholden to their funders as those who do the same thing for the private sector.
The difference is, governments who get deliberately biased forecasts are doing so in order to justify foisting someone’s current (electorally-driven) infatuations onto the entire population. Those in the private sector are just trying to make money – they have to get consumers to decide to buy their products – and have much much tighter budget constraints.
That possible positive outcome – public skepticism about government forecasts – is not going to happen though. People are just too stupid for that to happen, as perusal of any PIAAC study will show.
By the end of this hysteria, already-skeptical people will have their skepticism validated, and the other 95% or so will remain stupid and ignorant – and will participate in the orgy of government self-congratulation that will be amplified by a tax-dependent media (government ad-spend supports elevated prices for marginal ad-slots).
The point is: Ferguson is not trying to be right. That’s not his objective – if it was, he would be more honest, and would strongly caveat every piece of data he produces.
He and his team are doing something much more mundane – i.e., trying to make bank. That’s fair enough as far as it goes. The problem arises when making bank is best achieved by getting very large wodges of government cash in exchange for giving government what it wants to hear: at that point the non-charlatan demurs and finds some other way to earn a living.
Governments love their polities to be scared: if the median housewife gets told by morning TV that everyone faces extermination by greeblies, she insists that the entire household runs to Mummy Government and beg to be protected.
That goes triple if TV tells her that her kids are going to die. That is why the news for the last week or so has been desperately trying to increase the perceived risk of covid-19 to children – very very very very few of whom get any symptoms at all.
Nailed it! Every one of Ferguson’s previous predictions has been woefully, horrendously, orders of magnitude wrong. How could any sane person possibly believe that this one time he got it right?
The point I was making is that if you choose to get an astrologer to help guide your policy, you shouldn’t blame the astrologer when it doesn’t all go to plan. To go on about the astrologer’s poor record at crystal ball gazing in the past is interesting but you’ve got to point the finger at the idiots who chose to listen to an astrologer and fell for their mumbo jumbo. And then to let him spout their mumbo jumbo in public. Is all.
In the parable of the scorpion and the frog, it’s the frog who is stupid – the scorpion just does what a scorpion does.
It is disturbing that a national government places huge weight on a line of research that has been massively wrong with every previous assessment. The absence of any feedback loop coercing the researchers into improvement suggests a very poor research process.
On the contrary they are deemed successful and high status ….. well they are attracting massive levels of funding so must be great work!?!
If the research assessment people haven’t picked up on Ferguson’s team then they too should face harsh scrutiny and probably a change of leadership.
It’s clear to me that when the bones are picked over a key question will be the dynamics of how the government (in a broad-ranging sense) was persuaded by SAGE and specifically the Imperial College team that a lockdown was the proper course.* And to suddenly replace the carefully thought through (presumably) strategy that had been prepared as the contingency for a pandemic.
Furthermore, thinking back to the weekend when the lockdown suddenly appeared in what one might call a U-turn, the 510,000 prediction (for as such it was presented) was never put in context or timeframe, thereby making it very scary. It was neither, so far as I recall, described as occurring over two years. Nor was there a context of the “normal” number of deaths over that period. Let alone that there could be considerable overlap between the “normal” mortality group and the covid group, reducing the excess dramatically.
From the article, over 2 years, typically 1.25 million people die. So this pandemic in the worst case would add less than 50% to that. To my mind that’s unfortunate and “every death is a tragedy” but it is not the end of the world. And given the overlap, the excess could easily be less than 25%
Actually what I remember the media delivering (and the Government is complicit in this by not dispelling the notion) is more an implication that the 510,000 people would die between March and, say, June: some relatively short period in any case, a level of death that would indeed be totally overwhelming to the systems and totally scary. And hence perhaps the idea of a lockdown to manage the peak (“squash the sombrero”) to enable the NHS to cope. Since that fateful and wrong decision the strategy seems to have morphed into an indefinite attempt to minimise the death toll by keeping us apart and preventing spread.
The most worrying thing is that the report seems to clearly assume that the only way to prevent a large total death number (though maybe not the full 510,000) is to apply a long-term lockdown until a vaccine or something else arrives. But there is no consideration that an 18-month lockdown will destroy the country by killing the economy. That is, apart from spreading the deaths, there is little we can do but accept they will all occur if we want to emerge with a functioning, recognisable country still in place. Of course it now looks as though the disease we have is very different from the disease modelled, so the whole situation needs radical rethinking.
* And to the exclusion of all other modelling groups. Very strange.
‘It’s clear to me…….a key question……how the government…..was persuaded………..that a lockdown was the proper course……..replace the……strategy…..prepared as the contingency for a pandemic.’
Let’s not kid ourselves here. The contingency planning had been done but not resourced. The conservative party were in power, Mr Johnson was in the cabinet and Mr Hunt was health secretary at the time. So the finger of blame in any enquiry will point firmly at those currently in power and also no doubt at the very same NHS administrators who have so recently been spouting alarmist nonsense in an attempt to convey the sense that this minor common cold coronavirus epidemic is ‘unprecedented’ and could not have been foreseen.
The lockdown is a good old fashioned cover up and by no means confined to this country. The real ‘conspiracy’ theory is how leaders across Europe have huddled together around the same erroneous policy so that they cannot be pilloried by a global media storm in the way that Sweden has been. The hope is that they now see the sense in depoliticising health via a politically independent health authority, just as an independent Bank of England has de-politicised interest rate policy.
They will all also, by now, have realised what an opportunity they missed to cement their place in history in the way that Anders Tegnell most certainly has cemented his; a legend.
I have a lot of sympathy for this view. It simply goes beyond the realms of possibility that, given the evidence that is readily available regarding the effects of this virus, the policy actions pursued are anything but disastrous.
As such the only remaining possible conclusion is that we are witnessing a high level coverup.
The only question is how long can it be maintained?
My initial instinct without needing to go into the guts of the model was this:
Is the model the same in London as it is in the Highlands? Quite clearly the R number would be different for each area. So how exactly did they arrive at the assumed R value?
The model also assumes uniformity of spread as if we’re all stood side by side like chess pieces. Quite clearly we’re not and as above it will take longer to spread in the Highlands than it would in London.
Also, if the virus was only in England and it was making its way up to Scotland (as an example) then simply blocking the border would remove 5M people from the equation.
Simply put it is highly unlikely that 81% of the population would ever be allowed to get infected in any country.
It also doesn’t allow for the fact that usually for viruses of this type, around a third of people are naturally immune.
That was a remarkably accurate guess! I too couldn’t believe the IFR & % to be infected.
How can 81% of the population get infected with a uniform R0 of 2.4? Herd immunity would be reached at 1-1/R0 = 58%? Something very wrong here.
Of course 100% susceptibility is nonsense as well. I struggle to think of any virus that’s been shown to have 100% susceptibility, even HIV is only around 90%.
Herd immunity occurs when the effective reproduction number becomes 1 i.e. at 58%. But that doesn’t mean the disease is eliminated and no-one else becomes infected, it just means the disease no longer grows exponentially. This is why 81% and not 58% is infected.
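In the simplest homogeneous-mixing picture this overshoot even has a closed form: the final attack rate z solves z = 1 − exp(−R0·z). A quick sketch of that textbook version (note it gives roughly 88% for R0 = 2.4; Report 9’s 81% presumably reflects its household/school/workplace structure rather than this simple formula):

```python
import math

# Final-size relation for a simple homogeneous SIR epidemic: the fraction ever
# infected, z, satisfies z = 1 - exp(-R0 * z). Solved by fixed-point iteration.
# This is the textbook version, not Report 9's structured individual-based model.
def final_size(R0, iters=100):
    z = 0.5
    for _ in range(iters):
        z = 1 - math.exp(-R0 * z)
    return z

R0 = 2.4
print(f"Herd-immunity threshold: {1 - 1 / R0:.0%}")  # ~58%
print(f"Final attack rate:       {final_size(R0):.0%}")  # ~88%, i.e. the overshoot
```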
Yeah I get the theory of herd immunity overshoot, we’ve seen that on the cruise ships, aircraft carriers, prisons etc. and to some extent in places like Bergamo & NYC I expect. However, that’s one almighty overshoot across a whole population. Some towns in Lombardy got absolutely overrun and still only reached 61% seroprevalence last time I looked.
Also doesn’t address the point about susceptibility. 100% susceptibility is another mighty big assumption.
One thing that this pandemic is revealing is that there is, apparently, not all that much to epidemiology. It seems to be just the mechanical processing of really simple, unsophisticated assumptions, such as the idea that all people are equally susceptible. Imperial are obviously regarded as the pinnacle of the field judging by the attention that their efforts receive. So is that really it?
I don’t know how many millions of pounds have been spent on funding the profession over the years, but if the culmination is a report that even amateurs can pick huge holes in then what’s the point?
This piece is confused in a number of ways, but one of the most obvious ones is the critique of the assumptions the model makes. It can be fair to critique a model’s assumptions, but one cannot then go from that to assuming that the output of the model is alarmist or potentially alarmist. In fact, under uncertainty, it is equally possible that the model’s assumptions were too conservative. For example, instead of assuming that everyone is “equally liable to infection”, we could instead assume that children are less liable to infection and elderly people more liable, and thus produce a much higher death figure. Or we could do the opposite, as the author above seems to want to imply. You don’t get to pick the lower assumptions just because. Acting under uncertainty is actually an argument for risk aversion and therefore taking action, not against it!
I also enjoyed some of the places the author revealed their deep ignorance. For example:
“Infection was assumed to result in exponential growth every 6.5 days in each country.” As any mathematician or mathematical scientist will know, exponential growth is something that either applies or it doesn’t. It’s not something that “results… every X days”. Presumably what the author meant is something like “infection was assumed to result in doubling every 6.5 days” (though this is just a different way of restating the R0 number, not an extra assumption btw). This would then define exponential growth. The sentence as quoted is nonsensical.
“Although the term ‘stochastic’ has been described on this site as a scientific word for ‘random’, it’s actually a Greek word the original meaning of which was ‘being skilled at aiming at something’ or, better, ‘an educated guess’.” This is the etymological fallacy: there’s no reason for the modern (never mind technical) meaning of a word to have any relation at all to its original etymology. (Nor is the etymology “humbling” or “amusing” btw). In fact, stochastic is very commonly used this way across mathematics and science. I promise you, it’s not some big-word scam to confuse you! Anyway, I also saw Peter Hitchens making this exact point about “stochastic” on twitter – is the author of this piece Peter Hitchens, or has he been listening to him? Garbage in, garbage out, as you might say.
The author tells you at the start that he is a professional historian. But it is clear he *understands* the Imperial model.
You mention that the model might be too conservative. I don’t think the author denies that in itself – although he does point to Sweden as a real life example that seems to imply the model is too pessimistic. What he does is point out that the model is based on many assumptions, some of which come from empirical data (possibly dubious or derived from inappropriate circumstances) and the remainder of which are guesses.
One mindset that I have seen in my more scientifically-oriented acquaintances is that the decision to lockdown or release has to be based somehow on ‘data’ – ultimately you need to compare two numbers and select the option that’s ‘better’. They don’t see the irony in comparing two numbers that are fundamentally guesses.
I think Ferguson found that his program delivered differing results when re-run with identical parameters and starting values. This might be caused by erroneous programming leading to memory overflow. He addressed this by running the program several times for each set of parameters and starting conditions, and taking the average of the results. But the disparities would be unlikely to be a random process as he assumed. Also it’s very sloppy. It’s a bit like a case where you ask a supermarket to check your bill and every time they check it they get a different total. After five checks the manager proposes that you pay the average of the five different totals.
This was never the case. The determinism fallacy was propagated by Sue Denim with incorrect parameters. Used correctly, same seeds and same parameters always returned same results.
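For anyone unfamiliar with the point, here is a toy illustration of what seeded determinism means; this is nothing to do with the Imperial code itself, just the general principle:

```python
import random

# A toy stochastic "simulation": with the same seed and parameters it returns
# the same answer on every run; change the seed and the answer changes.
# Purely an illustration of the principle, not the Imperial code.
def toy_epidemic(seed, p_transmit=0.3, contacts=10, generations=5):
    rng = random.Random(seed)
    infected = 1
    for _ in range(generations):
        infected += sum(rng.random() < p_transmit for _ in range(infected * contacts))
    return infected

print(toy_epidemic(seed=42), toy_epidemic(seed=42))  # identical results
print(toy_epidemic(seed=43))                         # a different run
```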
A couple of things the model could incorporate are:
Severity of illness is related to level of exposure (‘inoculum’, ‘viral load’).
Type and degree of future immunity is related to the above in some way.
This can be summed up in the sentence ‘A low dose of a virus can act like a vaccine’ heard in this discussion between professional immunologists and virologists https://www.microbe.tv/twiv/twiv-602/
The problem is that these characteristics are almost impossible to quantify – even for established viruses.
I’ve tried to incorporate these ideas (as guesses obviously) in my model. Having played with it, and having read and listened to the thoughts of immunologists, the impression I get is that after a while, viruses could quietly ‘seep into’ the population. Partial immunity, perhaps developed with repeated low level exposure to the virus, might act to deaden the spread of the virus. The Imperial model assumes that those with asymptomatic infection are 50% less infectious than symptomatic (a guess, obviously). But they don’t allow for the idea that maybe the asymptomatic person is statistically more likely to induce a symptomless illness if and when they do infect another person. To me, it just feels that in the real world there’s a deadening effect at work that confounds the excesses that the standard model suggests.
‘R0’ seems to be such a meaningless concept that I am surprised they are still using it. It derives from the simplest kind of model that was used even before computers were invented. We’re still shoehorning it into our epidemiology, now. It allows only for binary susceptible, infected and immune states. By making R0 so ‘iconic’, we are actually preventing ourselves from developing the more sophisticated model.
The first thing a modeller is asked is ‘What value are you using for R0?’. For a sophisticated model this would be meaningless on several levels. Do you mean as an input? How are we going to use it as an input? What’s the point of the model, then? And do you mean defined by antibodies, or do you allow that the virus invaded another person’s system without that person having to produce antibodies to clear it? It’s such a useless idea. If you were an epidemiologist and knew that every time you ever published anything people were going to ask you about R0, it would actually constrain how you thought about, and modelled, epidemics.
I think the whole concept of ‘R0’ may be to blame for epidemiological modelling’s manifest failures, and the global tragedy we are now facing – and I don’t mean because of the virus.
I too am becoming increasingly suspicious that the Imperial team’s research is cover for a simplistic Fermi calculation. How the 510000 figure is arrived at is anyone’s guess, but the other figures seem to be strangely convenient proportions of that basis. Is the 260000 prediction for “mitigations” really the outcome of modelling, or has someone multiplied the base figure by 50% and rounded it up to something whose precision won’t arouse suspicions? Or has he alternatively simply allowed for the 49% asymptomatic rate shown on the Diamond Princess?
Interviewed on UnHerd, Professor Ferguson hypothesised that a strategy of shielding the elderly might only be 80% effective. Fair enough, it’s unlikely to be 100% effective, and hasn’t proved so in Sweden. But why 80%? Why not 90%? Or 70%? I’m not the best judge of facial expression, but as he made his assumption, I couldn’t help thinking that his expression screamed “I’ve said too much”. Reduce 510000 by 80%, and you have 102000, which compares with the Imperial team’s announced prediction of “slightly over 100000” if the UK had followed Sweden’s example.
Likewise, the original prediction for full and immediate lockdown was a death toll of 26000. Could this possibly be 510000 reduced by 95% and rounded up so as not to look overly exact?
These aspersions could be discredited in an instant, if the code and the results of the actual modelling runs were published in full. Alternatively, I am happy to provide similar predictions of my own based on multiplying one number I’ve picked out of the air by another, at a fraction of the cost to the taxpayer.
I agree. Or suppose you want to start with a shocking enough number to seize the reins of policy. “510,000 looks like a good number to go with”. Do you need to keep re-running the model until you get something like that or just go straight to the table? If you don’t have to show your workings properly it must be tempting to go straight to typing out a table.
The total number of deaths is pretty much baked in once you decide on your ‘R0’. An epidemic will peter out to a fixed proportion of the population having been infected, and if you define a fixed proportion of the infected as dying (which you inherently will even if it is dressed up with age-related look-up tables), then ‘R0’ gives your total deaths. The rest of the model just purports to show what happens in between.
This work reveals that the results of the model are almost entirely driven by a few prior assumptions, such as R and the three way split of infection routes. Everything else is window dressing.
If those initial major inputs are identified at the start, those which drive the conclusions, then that is where criticism is properly directed. No models are required.
After all, every epidemic has a case and fatality profile that follows a log normal curve – like the normal distribution but with a long tail.
If the outputs produce that general shape, what does it prove?
Nothing you did not know at the start.
A lot of additional factors are included as inputs, but all these do is:
a) obscure the fact that a few inputs dominate,
b) make it more difficult to decide if the modelling makes any sense and
c) perhaps most importantly, give the outputs a lot more (spurious) face validity than the model deserves.
In short this modelling was an obscurantist exercise to compel belief. Scientism at its worst.