Mike Hearn

How to Ensure Lockdowns Cannot Happen Again

There follows a guest post by former Google software engineer Mike Hearn.

How can we avoid a repeat of the last two years?

To ensure policy failure on such a scale never happens again, those of us who oppose them need concrete legislative proposals that could be implemented by a parliament or congress, and which address the root causes of the failed policies themselves. Very often in history we see that ideas for political reform have to be kicked around the public sphere for a while before being picked up by politicians. In that spirit I lay out some proposed changes to the law, designed to encode lessons learned from the Covid pandemic. Not all of these proposals apply to every country and they take for granted the acceptance of a viewpoint that is still contested – namely, that Covid non-pharmaceutical interventions (NPIs) were a mistake. But the ideas here will hopefully prove useful as a launching point for further discussion – and perhaps, eventually, political campaigns.

My goal here is to make proposals that are only partially within the Overton Window of currently acceptable political thought. The justification: ideas fully within the Window will be generated by politicians during any normal public inquiry anyway. Ideas fully outside it won’t be considered at all. All proposals should be somewhat uncomfortable to read for someone fully committed to mainstream politics, but not entirely so. Please note that anything related to pharmaceutical or financial interventions are out of scope for this article. Further work (perhaps by other people) may address legislative proposals around these.

Publisher Retracts 24 Scientific Papers for Being “Nonsensical” – What Happened to Peer Review?

This is a guest post by contributing editor Mike Hearn.

Last August, a cluster of fake scientific papers appeared in the journal Personal and Ubiquitous Computing. Each paper now carries a notice saying that it's been retracted "because the content of this article is nonsensical".

This cluster appears to have been created by the same group or person whom Daily Sceptic readers previously encountered in October. The papers are scientific-sounding gibberish spliced together with something about sports, hobbies or local economic development:

Therefore, the combination of LDPC and Polar codes has become the mainstream direction of the two technologies in the 5G scenario. This has further inspired and prompted a large number of researchers and scholars to start exploring and researching the two. In the development of Chinese modern art design culture, traditional art design culture is an important content…

Linlin Niu

This sudden lurch from 5G to Chinese modern art is the sort of text that cannot have been written by humans. Other clues are how the titles are obviously templated (“Application of A and B for X in Y”), how the citations are all on computing or electronics related subjects even when they appear in parts of the text related to Chinese art and packaging design, and of course the combination of extremely precise technical terms inserted into uselessly vague and ungrammatical statements about “the mainstream direction” of technology and how it’s “inspired and prompted” researchers.
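As a rough illustration of how mechanically detectable that title template is, here is a minimal sketch – the titles below are invented examples, not real papers:

```python
import re

# Flag titles that follow the rigid template "Application of A and B for X in Y"
# described above. Purely illustrative: the titles are made up.
TEMPLATE = re.compile(r"^Application of .+ and .+ for .+ in .+$", re.IGNORECASE)

titles = [
    "Application of LDPC and Polar Codes for Channel Coding in 5G Scenarios",
    "A Survey of Convolutional Networks",
]

for title in titles:
    flagged = bool(TEMPLATE.match(title))
    print(f"{flagged!s:>5}  {title}")
```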

Latest Modelling on Omicron Ignores All Evidence of Lower Severity, Among Numerous Other Problems

We’re publishing a guest post today by former Google software engineer Mike Hearn about the shortcomings of the London School of Hygiene and Tropical Medicine’s alarmist Omicron modelling which has spooked the Government.

Today the Telegraph reported that:

Experts from the London School of Hygiene and Tropical Medicine (LSHTM) predict that a wave of infection caused by Omicron – if no additional restrictions are introduced – could lead to hospital admissions being around twice as high as the previous peak seen in January 2021.

Dr Rosanna Barnard, from LSHTM’s Centre for the Mathematical Modelling of Infectious Diseases, who co-led the research, said the modellers’ most pessimistic scenario suggests that “we may have to endure more stringent restrictions to ensure the NHS is not overwhelmed”.

As we’ve come to expect from LSHTM and epidemiology in general, the model forming the basis for this ‘expert’ claim is unscientific and contains severe problems, making its predictions worthless. Equally expected, the press ignores these issues and indeed gives the impression that they haven’t actually read the underlying paper at all.

The ‘paper’ was uploaded an hour ago as of writing, but I put the word paper in quotes because not only is this document not peer reviewed in any way, it’s not even a single document. Instead, it’s a file that claims it will be continually updated, yet which has no version numbers. This might make it tricky to talk about, as by the time you read this it’s possible the document will have changed. Fortunately, they’re uploading files via GitHub, meaning we can follow any future revisions that are uploaded here.
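For anyone who wants to follow those revisions programmatically, here is a sketch using GitHub's public commits API – the owner, repository name and file path below are placeholders, not the actual LSHTM coordinates:

```python
import requests

# List the revision history of a single file in a public GitHub repository.
# OWNER/REPO/PATH are hypothetical placeholders for illustration only.
OWNER, REPO, PATH = "example-org", "example-repo", "omicron_report.pdf"
url = f"https://api.github.com/repos/{OWNER}/{REPO}/commits"

resp = requests.get(url, params={"path": PATH}, timeout=30)
resp.raise_for_status()

for commit in resp.json():
    info = commit["commit"]
    # Print the date and first line of each commit message touching the file.
    print(info["author"]["date"], info["message"].splitlines()[0])
```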

Errors

The first shortcoming of the ‘paper’ becomes apparent on page 1:

Due to a lack of data, we assume Omicron has the same severity as Delta.

In reality, there is data and so far it indicates that Omicron is much milder than Delta:

Early data from the Steve Biko and Tshwane District Hospital Complex in South Africa’s capital Pretoria, which is at the centre of the outbreak, showed that on December 2nd only nine of the 42 patients on the Covid ward, all of whom were unvaccinated, were being treated for the virus and were in need of oxygen. The remainder of the patients had tested positive but were asymptomatic and being treated for other conditions.

The pattern of milder disease in Pretoria is corroborated by data for the whole of Gauteng province. Eight per cent of Covid-positive hospital patients are being treated in intensive care units, down from 23% throughout the Delta wave, and just 2% are on ventilators, down from 11%.

Financial Times, December 7th

Why Did Switzerland Vote For Vaccine Passports?

The Swiss have voted to keep vaccine passports by a clear majority. I live in Switzerland (but cannot vote), and in this essay I’ll present some analysis of why this outcome may have occurred.

Firstly, what was the vote actually about? It was a referendum on whether to keep the COVID law, which authorised (among other things) the implementation of the vaccine passport and contact tracing systems. As such, although passports are effectively a form of coercion, this wasn’t directly a vote on mandatory vaccination. There were two sides: ‘No’, meaning scrap the law and end the passports, and ‘Yes’, meaning keep it.

That’s all in theory. In reality, of course, the vote is already being used by politicians to argue for lockdowns for the unvaccinated (about one third of the population).

So – what went wrong for the ‘No’ side? I believe there were at least three factors that fed into each other:

  1. Unlike the British Government, the Swiss government doesn’t release the core data you would need to argue against the vaccine passport policies.
  2. For the second time in a row, the ‘No’ campaign chose its messaging very poorly and ran an unconvincing campaign.
  3. Like elsewhere, the news is dominated by the Government’s own narrative-building efforts and uncritically accepted reports – even nonsensical claims. In particular, public health officials have been spreading misinformation by convincing people the unvaccinated are unsafe to be around even if you’re vaccinated (which makes no sense if you also believe the vaccines are highly effective).

I will analyze each factor below.

That said, we should recognize the possibility that how people voted had nothing to do with any campaigns or policies, but simply reflects their pre-existing vaccination decisions. As we can safely assume almost nobody voted ‘Yes’ while also choosing to be unvaccinated (as this would simply be a vote to impose expensive and awkward restrictions on themselves indefinitely), we must also assume, given the results, that almost everyone who chose to take the vaccine also chose to try to force other people to take it.
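To make that inference concrete, here is a back-of-the-envelope sketch. The figures are approximate assumptions for illustration only: a ‘Yes’ share of roughly 62%, an unvaccinated share of roughly one third, and turnout assumed equal across both groups.

```python
# Assumed, approximate figures for illustration – not official results.
yes_share = 0.62          # assumed overall 'Yes' share of valid votes
unvaccinated_share = 1/3  # assumed share of unvaccinated among voters

vaccinated_share = 1 - unvaccinated_share

# If essentially no unvaccinated voter voted 'Yes', every 'Yes' vote must have
# come from a vaccinated voter.
yes_among_vaccinated = yes_share / vaccinated_share
print(f"Implied 'Yes' share among vaccinated voters: {yes_among_vaccinated:.0%}")
# -> roughly 93% under these assumptions
```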

The psychology of this is probably core to the state of the world right now and deserves a much closer look. However, today I’ll make the simplifying assumption that campaigns and arguments do have at least some impact and analyze it through that lens.

Negative Vaccine Effectiveness Isn’t a New Phenomenon – It Turned Up in the Swine Flu Vaccine

This is a guest post by Mike Hearn, a software engineer who between 2006-2014 worked at Google in roles involving data analysis.

The Daily Sceptic has for some time been reporting on the apparent negative vaccine effectiveness visible in raw U.K. health data. Despite some age ranges now showing that the vaccinated are more than twice as likely to get Covid as the unvaccinated, this is routinely adjusted out, leading UKHSA to counter-intuitively claim that the vaccines are still highly effective even against symptomatic disease. A recent post by new contributor Amanuensis explains the Test Negative Case Control (TNCC) approach used by authorities and researchers to adjust the data, and demonstrates that, while it is a theoretically powerful way to remove some possible confounders, it rests on the initially reasonable-sounding assumption that vaccines don't make your susceptibility to infection worse:

A situation where this assumption may be violated is the presence of viral interference, where vaccinated individuals may be more likely to be infected by alternative pathogens.

Chua et al, Epidemiology, 2020

Amanuensis then compares results between the two different statistical approaches in a Qatari study to explore whether violation of this assumption is a realistic possibility, and concludes that the multivariate logistic regression found in its appendix supports the idea that viral interference can start happening a few months after initial vaccination.
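For readers who want the arithmetic, here is a minimal sketch of the TNCC calculation with made-up counts – not data from the Qatari study or any other:

```python
# Test-negative case-control (TNCC) sketch with invented counts.
# Cases are symptomatic people testing positive; controls are symptomatic
# people testing negative for the same illness.
cases_vaccinated, cases_unvaccinated = 300, 200
controls_vaccinated, controls_unvaccinated = 700, 300

# Odds ratio: odds of vaccination among cases vs. among controls.
odds_ratio = (cases_vaccinated / cases_unvaccinated) / (
    controls_vaccinated / controls_unvaccinated
)
vaccine_effectiveness = 1 - odds_ratio
print(f"OR = {odds_ratio:.2f}, implied VE = {vaccine_effectiveness:.0%}")

# The method assumes vaccination cannot increase susceptibility; if it can
# (e.g. via viral interference), the odds ratio is biased and the implied VE
# becomes misleading.
```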

What other angles can we use to explore this idea? One is to read the literature on prior epidemics.

The Mail Asks Serious Questions About Fraudulent Research

A few days ago journalist Barney Calman published a thorough and well-researched article about the problem of academic research fraud. Although the contents will seem familiar to any long-time reader of the Daily Sceptic, it's great news that much bigger audiences are now being exposed to information about the scale and nature of the problems inside scientific institutions.

In July the Daily Sceptic published an article by me entitled “Photoshopping, fraud and circular logic in research“. It discussed the problem of Asian paper-forging operations colloquially nicknamed “paper mills” and the Chinese Government policies that incentivise the forging of scientific research, and cited former BMJ editor Richard Smith’s essay on the problem of fictional clinical trials. For classical journalists to write about a topic typically requires finding an insider or specialist willing to put their own name on things – indeed, one of the major weaknesses of newspapers versus blog sites like this one is their reluctance to do original research into scientific topics. Scientists willing to put their names to allegations give journalists the permission they need to cover a story like this – and now the Mail has it:

Speaking on the Mail on Sunday’s Medical Minefield podcast, Smith – who was involved in the investigations that exposed Malcolm Pearce – said:

“It’s shocking, but common. Many of these fraudulent studies are simply invented. There were no patients. The trial never happened.”

Research coming out of countries where doctors are commonly rewarded with pay rises for publishing their work – such as Egypt, Iran, India and China – is more likely to be faked, investigations show. 

“In China, doctors can only get promoted if they score enough ‘points’, by getting published,” says John Carlisle, an NHS anaesthetist who spends his spare time hunting for fraudulent medical studies.

Calman cites many examples of serious research fraud:

  • Malcolm Pearce, who created a non-existent pregnant woman he claimed to have saved from an ectopic pregnancy and who forged a drug trial.
  • Werner Bezwoda, who falsely claimed he had cured women with breast cancer by giving them bone marrow transplants.
  • Eric Poehlman, the only one ever jailed for research fraud, who fabricated studies into weight gain and the menopause.
  • Woo Suk Hwang, who became a national hero in South Korea after claiming a breakthrough in stem cell research that never actually happened.
  • Joachim Boldt, who forged a staggering 90 studies into drugs for regulating blood pressure during surgery. “These trials had been published over many years in leading journals, but it turned out they had never happened,” says Ian Roberts, Professor of Epidemiology at the London School of Hygiene and Tropical Medicine. “Again, when they were excluded from the review, it showed the treatment was not effective. British surgical guidelines had to be changed. It made me realise, if someone can get away with fabricating 90 studies, the system isn’t working.”

The story also discusses how the scientific system has been unable to reach agreement on the effectiveness of both hydroxychloroquine and ivermectin against COVID-19, largely because high-profile trials showing efficacy keep turning out to be fraudulent.

There’s much more and the entire article is, of course, worth reading in full.

Analysis

Smith zeroes in on the core problem: the scientific system is entirely trust-based. If someone emails a Word document containing a table of results to a journal, it's simply assumed that the trial did in fact take place as written. The document itself is supposed to be reviewed, although, as we've previously discussed here, peer review is sometimes claimed to have happened when it very obviously couldn't have. But nobody checks anything deeply. Peer reviews, when they properly happen, take the intellectual honesty of the submitter for granted.

This system was probably okay at the start of the 20th century, when science was a small affair dominated by hobbyists, companies and standalone inventors. It's easy to forget that Einstein, perhaps the most celebrated scientist of all time, came to the attention of the world only after developing new physics in his spare time whilst working as a Swiss patent clerk. But after the end of World War Two, governments drastically ramped up their spending on academic research. Throughout the 20th century science didn't just grow, it grew exponentially (nb. log scale):

Source: Bornmann & Mutz, 2014

In the second half of the 20th century, the number of papers published annually was doubling about every nine years, with the end of the war being a clear inflection point.
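To put that doubling time in perspective, here is a quick arithmetic sketch, assuming smooth exponential growth over the period:

```python
# What a nine-year doubling time implies, assuming smooth exponential growth.
doubling_time_years = 9
annual_growth = 2 ** (1 / doubling_time_years)
century_growth = 2 ** (100 / doubling_time_years)
print(f"~{(annual_growth - 1):.0%} more papers each year")   # ~8%
print(f"~{century_growth:,.0f}-fold growth over a century")  # ~2,200
```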

A century ago there was very little incentive for a scientist to lie to a journal. There was no point because there wasn’t much money in it. Academic positions were rare, the communities were small, and there were few enough interesting claims being published that they’d attract attention and be discovered if they weren’t true. But in 2021 it’s all very different. Annual production of new scientists by academia alone is vast:

The benefit of using the PhD as the yardstick for number of scientists is that it has a more standard definition across countries than measures such as the number of professional researchers and engineers.

The effect Chinese policies have had on science can be clearly seen in this graph, but even before China more than doubled its PhD production the trend was strongly upwards.

Underlying this system is an implicit assumption that the number of discoveries waiting to be made within a given time window is unlimited. Giving scientists money is seen as an uncontroversial vote-winning position, so nobody in government stops to ask whether there are actually enough answerable scientific questions available to absorb the increased research budgets. If there aren't, then people become tempted either to make up answers, as in much of the COVID ‘science’ that is written about on this site, or to make up questions, hence the proliferation of unrigorous fields like the study of “white tears“.

Did Barney Calman get wind of this story by reading this site? It’d be nice to think so. If you’re out there Barney, why not drop us a line and say hello? There are plenty more investigations like that one in the archives of the Daily Sceptic, such as “Fake Science: the misinformation pandemic in scientific journals” and “436 randomly generated papers published by Springer Nature“, which examine the use of AIs to generate fake scientific papers, or “The bots that are not“, which shows that virtually all academic research into the existence of bots on Twitter is wrong. It’s of vital importance that our society becomes more aware of the flaws in the research system, as it’s the only way to break the cycle of governments and media taking so-called scientific claims for granted.

Is Tom Chivers Right to Say PCR False Positives are “So Rare” they can be Ignored?

Tom Chivers at UnHerd has published an article headlined “PCRs are not as reliable as you might think“, sub-headlined “Government policy on testing is worryingly misleading”. The core argument of the article is that, due to high rates of false negatives, a positive lateral flow test followed by a negative ‘confirmation’ PCR should be treated as a positive. I pass no comment on this. However, the article makes a claim that itself needs to be fact checked. It's been quite a long time since PCR accuracy last came up as a topic, but this article provides a good opportunity to revisit some (perhaps lesser-known) points about what can go wrong with PCR testing.

The claim that I want to quibble with is:

False positives are so rare that we can ignore them.

Claims about the false positive (FP) rate of PCR tests often turn out on close inspection to be based on circular logic or invalid assumptions of some kind. Nonetheless, there are several bits of good news here. Chivers – being a cut above most science journalists – does provide a citation for this claim. The citation is a good one: it’s the official position statement from the U.K. Office for National Statistics. The ONS doesn’t merely make an argument from authority, but directly explains why it believes this to be true using evidence – and multiple arguments for its position are presented. Finally, the arguments are of a high quality and appear convincing, at least initially. This is exactly the sort of behaviour we want from journalists and government agencies, so it’s worth explicitly praising it here, even if we may find reasons to disagree – and disagree I do.
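To see why the FP rate deserves scrutiny at all, here is a generic Bayes-style sketch showing how a test's false positive rate interacts with prevalence. Every number below is hypothetical, chosen purely for illustration:

```python
# Generic illustration of why even a small false positive rate matters when
# prevalence is low. All numbers are hypothetical, not U.K. testing data.
prevalence = 0.001           # assumed true infection rate among people tested
sensitivity = 0.95           # assumed chance an infected person tests positive
false_positive_rate = 0.005  # assumed chance an uninfected person tests positive

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive_rate

# Positive predictive value: the share of positive results that are genuine.
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.0%}")  # roughly 16% under these assumptions
```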

Please note that in what follows I don’t try to re-calculate a corrected FP rate, alternative case numbers or to argue that Covid is a “casedemic”.

Let’s begin.

436 Randomly Generated ‘Peer Reviewed’ Papers Published by Springer Nature

There follows a guest post by Daily Sceptic contributing editor Mike Hearn about the ongoing problem of apparently respectable scientific journals publishing computer-generated ‘research’ papers that are complete gibberish.

The publisher Springer Nature has released an “expression of concern” for more than four hundred papers they published in the Arabian Journal of Geosciences. All these papers supposedly passed through both peer review and editorial control, yet no expertise in geoscience is required to notice the problem:

The paper can’t decide if it’s about organic pollutants or the beauty of Latin dancing, and switches instantly from one to the other halfway through the abstract.

The publisher claims this went through about two months of review, during which time the editors proved their value by assigning it helpful keywords:

If you’re intrigued by this fusion of environmental science and fun hobbies, you’ll be overjoyed to learn that the full article will only cost you about £30 and there are many more available if that one doesn’t take your fancy, e.g.

Background

Peer-reviewed science is the type of evidence policymakers respect most. Nonetheless, a frequent topic on this site is scientific reports containing errors so basic that any layman can spot them immediately, leading to the question of whether anyone actually read the papers before publication. An example is the recent article by Imperial College London, published in Nature Scientific Reports, in which the first sentence was a factually false claim about public statistics.

Evidence is now accruing that it’s indeed possible for “peer reviewed” scientific papers to be published which have not only never been reviewed by anybody at all, but might not have even been written by anybody, and that these papers can be published by well known firms like Springer Nature and Elsevier. In August we wrote about the phenomenon of nonsensical “tortured phrases” that indicate the use of thesaurus-driven paper-rewriting programs, probably the work of professional science-forging operations called “paper mills”. Thousands of papers have been spotted using this technique; the true extent of the problem is unknown. In July, I reported on the prevalence of Photoshopped images and Chinese paper-forging efforts in the medical literature. Papers are often found that are entirely unintelligible, for example this paper, or this one whose abstract ends by saying, “Clean the information for the preparation set for finding valuable highlights to speak to the information by relying upon the objective of the undertaking.” – a random stream of words that means nothing.

Where does this kind of text come from?

The most plausible explanation is that these papers are being auto-generated using something called a context-free grammar. The goal is probably to create the appearance of interest in the authors they cite: in academia, promotions are linked to publications and citations, creating a financial incentive to engage in this sort of metric gaming. The signs are all there: inexplicable topic switches halfway through sentences or paragraphs, rampant grammatical errors, the repetitive title structure, citations of real papers and so on. Another sign is the explanation the journal supplied for how it occurred: the editor claims that his email address was hacked.
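To see how cheap this is to do, here is a toy sketch of a context-free grammar working in the same spirit. The phrase banks are invented, and real paper mills presumably operate from far larger databases, but the mechanism – expanding templates until only pre-canned phrases remain – is the same:

```python
import random

# Toy context-free grammar producing paper-like filler. Phrase banks invented.
GRAMMAR = {
    "SENTENCE": [
        ["Therefore,", "TECH_CLAIM", "has become", "VAGUE_TREND", "."],
        ["In the development of", "FILLER_TOPIC", ",", "TECH_CLAIM",
         "is an important content", "."],
    ],
    "TECH_CLAIM": [
        ["the combination of LDPC and Polar codes"],
        ["the deep learning based channel estimation"],
    ],
    "VAGUE_TREND": [
        ["the mainstream direction of the two technologies in the 5G scenario"],
        ["a research hotspot that has inspired and prompted a large number of scholars"],
    ],
    "FILLER_TOPIC": [
        ["Chinese modern art design culture"],
        ["regional sports tourism economy"],
    ],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into a string of terminal phrases."""
    if symbol not in GRAMMAR:
        return symbol  # terminal phrase, emit as-is
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(part) for part in production)

print(expand("SENTENCE").replace(" ,", ",").replace(" .", "."))
```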

In this case, something probably went wrong during the production process that caused different databases of pre-canned phrases to be mixed together incorrectly. The people generating these papers are doing it on an industrial scale, so they didn’t notice because they don’t bother reading their own output. The buyers didn’t notice – perhaps they can’t actually read English, or don’t exist. Then the journal didn’t notice because, apparently, it’s enough for just one person to get “hacked” for the journal to publish entire editions filled with nonsense. And finally none of the journal’s readers noticed either, leading to the suspicion that maybe there aren’t any.

The volunteers spotting these papers are uncovering an entire science-laundering ecosystem, hiding in plain sight.

We know randomly generated papers can get published because it’s happened hundreds of times before. Perhaps the most famous example is SCIgen, “a program that generates random Computer Science research papers, including graphs, figures, and citations” using context-free grammars. It was created in 2005 by MIT grad students as a joke, with the aim to “maximize amusement, rather than coherence“. SCIgen papers are buzzword salads that might be convincing to someone unfamiliar with computer science, albeit only if they aren’t paying attention.

Despite this origin, in 2014 over 120 SCIgen papers were withdrawn by leading publishers like the IEEE after outsiders noticed them. In 2020 two professors of computer science observed that the problem was still occurring and wrote an automatic SCIgen detector. Although it’s only about 80% reliable, it nonetheless spotted hundreds more. Their detector is now being run across a subset of new publications and finds new papers on a regular basis.

Root cause analysis

On its face, this phenomenon is extraordinary. Why can’t journals stop themselves publishing machine-generated gibberish? It’s impossible to imagine any normal newspaper or magazine publishing thousands of pages of literally random text and then blaming IT problems for it, yet this is happening repeatedly in the world of academic publishing.

The surface-level problem is that many scientific journals appear to be almost or entirely automated, including journals that have been around for decades. Once papers are submitted, the reviewing, editorial and publishing process is handled by computers. If the system stops working properly, editors can seem oblivious – they routinely discover they published nonsense only because people who don't even subscribe to their journal complained about it.

Strong evidence for this comes from the “fixes” journals present when put under pressure. As an explanation for why the 436 “expressions of concern” wouldn’t be repeated the publisher said:

The dedicated Research Integrity team at Springer Nature is constantly searching for any irregularities in the publication process, supported by a range of tools, including an in-house-developed detection tool.

The same firm also proudly trumpeted in a press release that:

Springer announces the release of SciDetect, a new software program that automatically checks for fake scientific papers. The open source software discovers text that has been generated with the SCIgen computer program and other fake-paper generators like Mathgen and Physgen. Springer uses the software in its production workflow to provide additional, fail-safe checking.

A different journal proposed an even more ridiculous solution: ban people from submitting papers from webmail accounts. The more obvious solution of paying people to read the articles before they get published is apparently unthinkable – the problem of fake auto-generated papers is so prevalent, and the scientific peer review process so useless, that they are resorting to these feeble attempts to automate the editing process.

Diving below the surface, the problem may be that journals face functional irrelevance in the era of search engines. Clearly nobody can be reading the Arabian Journal of Geosciences, including its own editors, yet according to an interesting essay by Prof Igor Pak, “publisher’s contracts with [university] libraries require them to deliver a certain number of pages each year“. What’s in those pages? The editors don’t care because the libraries pay regardless. The librarians don’t care because the universities pay. The universities don’t care because the students and granting bodies pay. The students and granting bodies don’t care because the government pays. The government doesn’t care because the citizens pay, and the citizens DO care – when they find out about this stuff – but generally can’t do anything about it because they’re forced to pay through taxes, student loan laws and a (socially engineered) culture in which people are told they must have a degree or else they won’t be able to get a professional job.

This seems to be zombifying scientific publishing. Non-top-tier journals live on as third-party proof that some work was done, which in a centrally planned economy has value for justifying funding requests to committees. But in any sort of actual market-based economy many of them would have disappeared a long time ago.

The Bots That Are Not

Since 2016 automated Twitter accounts have been blamed for Donald Trump and Brexit (many times), Brazilian politics, Venezuelan politics, skepticism of climatology, cannabis misinformation, anti-immigration sentiment, vaping, and, inevitably, distrust of COVID vaccines. News articles about bots are backed by a surprisingly large amount of academic research. Google Scholar alone indexes nearly 10,000 papers on the topic. Some of these papers received widespread coverage:

Unfortunately there’s a problem with this narrative: it is itself misinformation. Bizarrely and ironically, universities are propagating an untrue conspiracy theory while simultaneously claiming to be defending the world from the very same.

The visualization above comes from “The Rise and Fall of Social Bot Research” (also available in talk form). It was quietly uploaded to a preprint server in March by Gallwitz and Kreil, two German investigators, and has received little attention since. Yet their work completely destroys the academic field of bot research to such an extreme extent that it’s possible there are no true scientific papers on the topic at all.

The authors identify a simple problem that crops up in every study they looked at. Unable to directly detect bots because they don’t work for Twitter, academics come up with proxy signals that are asserted to imply automation but which actually don’t. For example, Oxford’s Computational Propaganda Project – responsible for the first paper in the diagram above – defined a bot as any account that tweets more than 50 times per day. That’s a lot of tweeting but easily achieved by heavy users, like the famous journalist Glenn Greenwald, the slightly less famous member of German Parliament Johannes Kahrs – who has in the past managed to rack up an astounding 300 tweets per day – or indeed Donald Trump, who exceeded this threshold on six different days during 2020. Bot papers typically don’t provide examples of the bot accounts they claimed to identify, but in this case four were presented. Of those, three were trivially identifiable as (legitimate) bots because they actually said they were bots in their account metadata, and one was an apparently human account claimed to be a bot with no evidence. On this basis the authors generated 27 news stories and 323 citations, although the paper was never peer reviewed.

In 2017 I investigated the Berkeley/Swansea paper and found that it was doing something very similar, but using an even laxer definition: any account that regularly tweeted more than five times after midnight from a smartphone was classed as a bot. Obviously, this is not a valid way to detect automation. Despite being built on nonsensical premises, invalid modelling and mis-characterisations of its own data, and once again not being peer reviewed, the paper successfully influenced the British Parliament. Damian Collins, the Tory MP who chaired the DCMS Select Committee at the time, said: “This is the most significant evidence yet of interference by Russian-backed social media accounts around the Brexit referendum. The content published and promoted by these accounts is clearly designed to increase tensions throughout the country and undermine our democratic process. I fear that this may well be just the tip of the iceberg.”
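Both heuristics boil down to a simple volume threshold. A minimal sketch – with invented account data, not the papers' actual pipelines – shows how easily a prolific human clears such a bar:

```python
from datetime import datetime

def flagged_as_bot(tweet_times: list[datetime], daily_threshold: int = 50) -> bool:
    """Return True if any single day exceeds the tweet-count threshold."""
    per_day: dict = {}
    for t in tweet_times:
        per_day[t.date()] = per_day.get(t.date(), 0) + 1
    return any(count > daily_threshold for count in per_day.values())

# A heavy human user posting 60 times in one day is 'detected' as a bot.
busy_human = [datetime(2020, 11, 3, 9, i % 60) for i in range(60)]
print(flagged_as_bot(busy_human))  # True
```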

But since 2019 the vast majority of papers about social bots have relied on a machine learning model called ‘Botometer’. Botometer is available online and claims to measure the probability of any Twitter account being a bot. Created by a pair of academics in the USA, it has been cited nearly 700 times and generates a continual stream of news stories. The model is frequently described as a “state of the art bot detection method” with “95% accuracy”.

That claim is false. The Botometer’s false positive rate is so high it is practically a random number generator. A simple demonstration of the problem was the distribution of scores given to verified members of U.S. Congress:
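That check amounts to scoring accounts known to be operated by humans and counting how many clear the ‘bot’ threshold. A minimal sketch, using invented scores rather than real Botometer output:

```python
# Empirical false positive rate on accounts known to be human.
# The scores below are invented for illustration, not real Botometer output.
known_human_scores = [0.1, 0.7, 0.9, 0.3, 0.8, 0.2, 0.95, 0.6, 0.85, 0.4]
THRESHOLD = 0.5  # assumed cut-off above which an account is called a bot

false_positives = sum(score > THRESHOLD for score in known_human_scores)
fpr = false_positives / len(known_human_scores)
print(f"False positive rate on known humans: {fpr:.0%}")  # 60% in this toy case
```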

The Latest Paper From Neil Ferguson et al. Defending the Lockdown Policy is Out of Date, Inaccurate and Misleading

Neil Ferguson’s team at Imperial College London (ICL) has released a new paper, published in Nature, claiming that if Sweden had adopted U.K. or Danish lockdown policies its Covid mortality would have halved. Although we have reviewed many epidemiological papers on this site, especially from this particular team, let us go once more unto the breach and see what we find. The primary author of this new paper is Swapnil Mishra.

The paper’s first sentence is this:

The U.K. and Sweden have among the worst per-capita Covid mortality in Europe.

No citation is provided for this claim. The paper was submitted to Nature on March 31st, 2021. If we review a map of cumulative deaths per million on the received date then this opening statement looks very odd indeed:

Sweden (with a cumulative total of 1,333 deaths/million) is by no means “among the worst in Europe” and indeed many European countries have higher totals. This is easier to see using a graph of cumulative results:
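The comparison is straightforward to reproduce from the public Our World in Data export – a sketch, with the URL and column names assumed to still be current:

```python
import pandas as pd

# Rank European countries by cumulative Covid deaths per million as of the
# paper's submission date. URL and column names assumed current.
URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"

df = pd.read_csv(
    URL,
    usecols=["location", "continent", "date", "total_deaths_per_million"],
    parse_dates=["date"],
)

europe = df[(df["continent"] == "Europe") & (df["date"] == "2021-03-31")]
ranking = (europe.set_index("location")["total_deaths_per_million"]
                 .sort_values(ascending=False))

print(ranking.head(15))  # worst-hit European countries on that date
print("Sweden rank:", ranking.index.get_loc("Sweden") + 1, "of", len(ranking))
```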

But that was in March, when the paper was submitted. We’re reviewing it in August because that’s when it was published. Over the duration of the journal’s review period this statement – already wrong at the start – became progressively more incorrect: