We’re publishing a guest post today by former Google software engineer Mike Hearn about the shortcomings of the London School of Hygiene and Tropical Medicine’s alarmist Omicron modelling, which has spooked the Government.
Today the Telegraph reported that:
Experts from the London School of Hygiene and Tropical Medicine (LSHTM) predict that a wave of infection caused by Omicron – if no additional restrictions are introduced – could lead to hospital admissions being around twice as high as the previous peak seen in January 2021.
Dr Rosanna Barnard, from LSHTM’s Centre for the Mathematical Modelling of Infectious Diseases, who co-led the research, said the modellers’ most pessimistic scenario suggests that “we may have to endure more stringent restrictions to ensure the NHS is not overwhelmed”.
As we’ve come to expect from LSHTM and epidemiology in general, the model forming the basis for this ‘expert’ claim is unscientific and contains severe problems, making its predictions worthless. Just as predictably, the press ignores these issues and indeed gives the impression of not having read the underlying paper at all.
The ‘paper’ was uploaded an hour ago as of writing, but I put the word paper in quotes because not only is this document not peer-reviewed in any way, it’s not even a finished document: it’s a file that the authors say will be continually updated, yet which carries no version numbers. This might make it tricky to talk about, as by the time you read this it’s possible the document will have changed. Fortunately, they’re uploading files via GitHub, meaning we can follow any future revisions that are uploaded here.
Errors
The first shortcoming of the ‘paper’ becomes apparent on page 1:
Due to a lack of data, we assume Omicron has the same severity as Delta.
In reality, there is data and so far it indicates that Omicron is much milder than Delta:
Early data from the Steve Biko and Tshwane District Hospital Complex in South Africa’s capital Pretoria, which is at the centre of the outbreak, showed that on December 2nd only nine of the 42 patients on the Covid ward, all of whom were unvaccinated, were being treated for the virus and were in need of oxygen. The remainder of the patients had tested positive but were asymptomatic and being treated for other conditions.
The pattern of milder disease in Pretoria is corroborated by data for the whole of Gauteng province. Eight per cent of Covid-positive hospital patients are being treated in intensive care units, down from 23% throughout the Delta wave, and just 2% are on ventilators, down from 11%.
Financial Times, December 7th
The LSHTM document claims to be accurate as of today, but simply ignores the data available so far and replaces it with an assumption, one that lets the authors argue for more restrictions.
What kind of restrictions? The LSHTM modellers are big fans of mask wearing:
All scenarios considered assume a 7.5% reduction in transmission following the introduction of limited mask-wearing measures by the U.K. Government on November 30th 2021, which we assume lasts until April 30th 2022. This is in keeping with our previous estimates for the impact of increased mask-wearing on transmission.
I was curious how they arrived at this number given the abundant evidence that mask mandates have no impact at all (example one, example two). But no such luck – a reference at the end of the above paragraph points to this document, which doesn’t contain the word “mask” anywhere, and “7.5%” likewise cannot be found. I wondered if maybe this was a typo, but the claim that the relevant reference supports mask wearing appears several times, and the word “mask” isn’t mentioned in the references immediately before or after either. (Correction: see below.)
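It’s worth seeing why the provenance of that 7.5% figure matters. In simple epidemic models, a flat percentage “reduction in transmission” is typically applied as a multiplier on the reproduction number, and even a modest-sounding cut compounds over successive generations. The sketch below is purely illustrative: the starting R of 1.3 and the five-day generation time are my own assumptions, not values from the LSHTM document, and their actual model is far more elaborate.

```python
# Illustrative only: a flat percentage cut in transmission applied as a
# multiplier on the reproduction number, with simple exponential growth.
# The baseline R and generation time are assumptions for illustration,
# not figures taken from the LSHTM paper.

R_BASELINE = 1.3            # assumed reproduction number without the measure
MASK_REDUCTION = 0.075      # the paper's assumed 7.5% cut in transmission
GENERATION_TIME_DAYS = 5    # assumed mean generation time
HORIZON_DAYS = 30

def growth_factor(r: float, days: int, generation_time: float) -> float:
    """Multiplicative growth in daily infections over `days`, assuming
    exponential growth with a fixed reproduction number."""
    return r ** (days / generation_time)

without_masks = growth_factor(R_BASELINE, HORIZON_DAYS, GENERATION_TIME_DAYS)
with_masks = growth_factor(R_BASELINE * (1 - MASK_REDUCTION),
                           HORIZON_DAYS, GENERATION_TIME_DAYS)

print(f"Growth over {HORIZON_DAYS} days without the mask assumption: x{without_masks:.1f}")
print(f"Growth over {HORIZON_DAYS} days with the 7.5% reduction:     x{with_masks:.1f}")
```

Under these made-up numbers the assumed 7.5% cut turns a roughly five-fold rise over a month into a roughly three-fold one, which is exactly why the evidence behind that figure deserves scrutiny rather than a dead-end citation.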
There are many other assumptions of dubious validity in this paper. I don’t have time today to try to list all of them, although maybe someone else wants to have a go. A few that jumped out on a quick read-through are:
- An assumption that S gene drop-outs, i.e. cases where a PCR test doesn’t detect the spike protein gene at all, are always Omicron. That doesn’t follow logically: given the very high number of mutations, and given that PCR testing is in theory very precise, a missing S gene should be interpreted as “not Covid”. Of course, in reality – as is by now well known – PCR results are routinely presented in a have-cake-and-eat-it way, in which they’re claimed to be both highly precise and capable of detecting viruses with near-arbitrary levels of mutation, depending on what argument the user wishes to support.
- “We use the relationship between mean neutralisation titre and protective efficacy from Khoury et al. (7) to arrive at assumptions for vaccine efficacy against infection with Omicron” – The cited paper was published in May and has nothing to say on the topic of vaccine effectiveness against Omicron, which is advertised as being heavily mutated. Despite not citing any actual measured data on real-world vaccine effectiveness, the modelling team proceeds to make arguments for widespread boosting with a vaccine targeted at the original 2019 Wuhan version of SARS-CoV-2.
- They make scenarios that vary based on unmeasurable variables like “rate of introduction of Omicron”, making their predictions effectively unfalsifiable. Regardless of what happens, they can claim that they projected a scenario that anticipated it, and because such a rate is unknowable, nobody can prove otherwise. Predictions have to be falsifiable to be scientific, but these are not.
- Their conclusion says “These results suggest that the introduction of the Omicron B.1.1.529 variant in England will lead to a substantial increase in SARS-CoV-2 transmission” even though earlier in the ‘paper’ they say they assume anywhere from 5%-10% lower transmissibility than Delta to 30%-50% higher (page 7). In other words, they have no idea what the underlying difference in transmissibility is – and that’s assuming this is actually something that can be summed up in a single number to begin with (a quick illustration of just how wide that range is follows below).
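To get a feel for how wide that assumed transmissibility range really is, here is a minimal sketch assuming simple exponential growth from a common starting point. The Delta reproduction number of 1.1 and the five-day generation time are my own illustrative assumptions, not figures from the paper:

```python
# Illustrative only: the paper's assumed Omicron transmissibility relative to
# Delta spans roughly 0.9x (5-10% lower) to 1.5x (30-50% higher). The Delta R
# value and generation time are my own assumptions, chosen simply to show how
# far apart the resulting projections end up.

DELTA_R = 1.1               # assumed reproduction number for Delta
GENERATION_TIME_DAYS = 5    # assumed mean generation time
HORIZON_DAYS = 40
RELATIVE_TRANSMISSIBILITY = {
    "5-10% lower than Delta":  0.90,
    "30-50% higher than Delta": 1.50,
}

for label, multiplier in RELATIVE_TRANSMISSIBILITY.items():
    omicron_r = DELTA_R * multiplier
    generations = HORIZON_DAYS / GENERATION_TIME_DAYS
    growth = omicron_r ** generations   # multiplicative change in daily infections
    print(f"{label}: R = {omicron_r:.2f}, infections change x{growth:.1f} "
          f"over {HORIZON_DAYS} days")
```

Under these toy numbers, the lower end of the range leaves the wave essentially flat while the upper end multiplies infections more than fifty-fold over the same period. A range that wide is compatible with almost any outcome, which is the sense in which the headline conclusion about a “substantial increase” in transmission tells us very little.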
Analysis
If you’re new to adversarial reviews of epidemiology papers, some of the above points may seem nit-picky, or even made in bad faith. Take the problem of the citation error – does it really matter? Surely it’s just some sort of obscure copy/paste error or typo? Unfortunately, we cannot simply overlook such failures. (Correction: see below.)
The reality is that academic output, especially in anything that involves statistical modelling, frequently turns out not merely to be unreliable but to leave the reader with the impression that the authors must have started with a desired conclusion and then worked backwards to find sciencey-sounding points to support it. Inconvenient data is claimed not to exist, convenient data is cherry-picked, and where no convenient data can be found it’s simply conjured into existence. Claims are made and cited, but the citations don’t contain supporting evidence, or turn out to be just more assumptions. Every possible outcome is modelled and all but the most alarming are discarded. The scientific method is inconsistently used, at best, and instead scientism rules the day; meanwhile, universities applaud and defend this behaviour to the bitter end. Academia is in serious trouble: huge numbers of researchers have no standards whatsoever and there are no institutional incentives to care.
Some readers will undoubtedly wonder why we’re still bothering to do this kind of analysis given that there’s nothing really new here. On the Daily Sceptic alone we’ve covered these sorts of errors here, here, here, here, here and here – and that’s not even a comprehensive list. So why bother? I think it’s worth continuing to do this kind of work for a couple of reasons:
- Many people who didn’t doubt the science last year have developed newfound doubts this year, but won’t search through the archives to read old articles.
- The continued publication of these sorts of ‘papers’ is itself useful information. It shows that academia doesn’t seem to be capable of self-improvement and that, despite a long run of prediction failures, nobody within the institutions cares about the collective reputation of professors. The appearance of being scientific is what matters. Actually being scientific, not so much.
Correction
In the comments below, user MTF points out a mistake in the article. The citation for mask wearing goes to a website with two documents with similar names. The primary document doesn’t talk about face masks, but the second, supplementary document does mention them. It in turn cites two studies, this modelling study and this observational study in Bangladesh. These citations are not strong – the Bangladeshi study has been criticised for not actually being a study of mask mandates, and it’s unclear whether its results are even statistically significant. The modelling study states in the abstract that “We do not find evidence that mandating mask-wearing reduces transmission”, although the authors conclude that mask wearing itself (i.e. for reasons other than mandates) can reduce transmission. However, they find different numbers from the cited 7.5%, which appears to come from the Bangladeshi study. This modelling study therefore doesn’t seem to actually support the cited claim, which is that the 7.5% reduction will come from the “introduction of limited mask-wearing measures by the U.K. Government”, i.e., mask mandates.
Nonetheless, my assertion that the cited document didn’t provide any support for the 7.5% mask figure was wrong – that’s embarrassing. In the software engineering world it’s become typical to handle publicly visible errors by writing a ‘post-mortem’ explaining what went wrong and what will be done in future to avoid it. That seems like a good practice to use for these articles too.
Firstly, what went wrong:
- Expectation bias. (The following paragraph appeared in the original article in a different place.) The phenomenon of apparently random or outright deceptive citations is one I’ve written about previously. This problem is astoundingly widespread in academia. Most people will assume that a numerical claim by researchers that carries a citation must have at least some level of truth to it, but in fact, meta-scientific study has indicated that the error rate in citations is as high as 25%. A full quarter of scientific claims pointing to ‘evidence’ turn out, when checked, to be citing something that doesn’t support the point! This error rate feels roughly in line with my own experience (not in all fields, though!), and that’s why it’s always worth verifying the citations behind dubious claims.
In this case the prior evidence that mask mandates have no effect is very strong, and citations that don’t seem to support the cited claim are something I keep encountering in the health literature, so it seemed like a continuation of the pattern.
- Failure to check for multiple kinds of problem. The citation is of a document titled “Scientific Advisory Group for Emergencies. LSHTM: autumn and winter scenarios 2021 to 2022”. The document with this title indeed contains no mention of masks, but the second document with a similar title does. I checked for typos in the reference numbers, but not for ambiguity in the cited document, partly because I expected the correct document to be a dedicated study. In the event, the citation is a citation-of-a-citation rather than a direct link to the actual papers they’re relying on.
- Insufficiently adversarial review. This is the first time (that we know of) that an article I’ve written contains a factual error. It is partly a consequence of my articles not being reviewed by other Daily Sceptic writers as carefully as they once were.
For my future articles here I’ll adopt the following new procedures:
- 48-hour publication delay. Potential problems often spring to mind after the first draft of an article is written and sent to others for review. In this case, by the time I’d gone for a walk and done some shopping I was already developing doubts about this claim, and wondering if maybe the cited paper talked about masks under the term “face coverings” or in another way that I hadn’t noticed (as I didn’t read the full thing, just searched it to try to find the supporting evidence). Normally my articles are in a pending state for much longer than this one was, which went live almost immediately. An enforced waiting period gives time for conscious and subconscious reflection, and mostly what I write here isn’t time-sensitive anyway – although this publication delay will only apply to me.
- Increasing the pool of reviewers. Although my articles are guest posts and thus are reviewed, most changes are only for house style. Clearly we could benefit from more aggressive peer review. Ideally I’d like MTF to review my articles pre-publication in future as he/she consistently finds the weaknesses in them before anyone else. Of course he/she may very justifiably not wish to volunteer for this sort of thing. If anyone else would like to help with this, please get in touch.
- Verification of apparently mis-linked citations with authors. In cases where a citation looks misdirected, I’ll ask the authors for clarification or a correction in future. I stopped doing this at some point because, when I’ve asked about apparent problems with papers, the results have often been poor or non-existent, and in this case we still seem to have a citation that actually argues against the point being made re: mandates vs. non-mandated wearing. Nonetheless, a citation that talks about something without supporting the point is different from one that doesn’t mention the point at all, and the latter is more likely to be either an error in authorship or an error in detection.