Mike Hearn

Negative Vaccine Effectiveness Isn’t a New Phenomenon – It Turned Up in the Swine Flu Vaccine

This is a guest post by Mike Hearn, a software engineer who worked at Google from 2006 to 2014 in roles involving data analysis.

The Daily Sceptic has for some time been reporting on the apparent negative vaccine effectiveness visible in raw U.K. health data. Despite some age ranges now showing that the vaccinated are more than twice as likely to get Covid as the unvaccinated, this is routinely adjusted out, leading UKHSA to the counterintuitive claim that the vaccines are still highly effective even against symptomatic disease. A recent post by new contributor Amanuensis explains the Test Negative Case Control (TNCC) approach used by authorities and researchers to adjust the data, and demonstrates that, while it is a theoretically powerful way to remove some possible confounders, it rests on the initially reasonable-sounding assumption that vaccines don't make your susceptibility to infection worse:

A situation where this assumption may be violated is the presence of viral interference, where vaccinated individuals may be more likely to be infected by alternative pathogens.

Chua et al, Epidemiology, 2020

Amanuensis then compares results between the two different statistical approaches in a Qatari study to explore whether violation of this assumption is a realistic possibility, and concludes that the multivariate logistic regression found in the study's appendix supports the idea that viral interference can start happening a few months after initial vaccination.

From what other angles can we explore this idea? One is to read the literature on prior epidemics.

The Mail Asks Serious Questions About Fraudulent Research

A few days ago journalist Barney Calman published a thorough and well-researched article about the problem of academic research fraud. Although the contents will seem familiar to any long-time reader of the Daily Sceptic, it's great news that much bigger audiences are now being exposed to information about the scale and nature of the problems inside scientific institutions.

In July the Daily Sceptic published an article by me entitled "Photoshopping, fraud and circular logic in research". It discussed the problem of Asian paper-forging operations colloquially nicknamed "paper mills" and the Chinese Government policies that incentivise the forging of scientific research, and cited former BMJ editor Richard Smith's essay on the problem of fictional clinical trials. For traditional journalists to write about a topic typically requires them to find an insider or specialist willing to put their own name on things – indeed, one of the major weaknesses of newspapers compared with blogs like this one is their reluctance to do original research into scientific topics. Scientists willing to put their names to allegations give journalists the permission they need to cover a story like this – and now the Mail has it:

Speaking on the Mail on Sunday’s Medical Minefield podcast, Smith – who was involved in the investigations that exposed Malcolm Pearce – said:

“It’s shocking, but common. Many of these fraudulent studies are simply invented. There were no patients. The trial never happened.”

Research coming out of countries where doctors are commonly rewarded with pay rises for publishing their work – such as Egypt, Iran, India and China – is more likely to be faked, investigations show. 

“In China, doctors can only get promoted if they score enough ‘points’, by getting published,” says John Carlisle, an NHS anaesthetist who spends his spare time hunting for fraudulent medical studies.

Calman cites many examples of serious research fraud:

  • Malcolm Pearce, who invented a non-existent pregnant woman he claimed to have saved from an ectopic pregnancy, and who also forged a drug trial.
  • Werner Bezwoda, who falsely claimed he had cured women with breast cancer by giving them bone marrow transplants.
  • Eric Poehlman, the only one ever jailed for research fraud, who fabricated studies into weight gain and the menopause.
  • Woo Suk Hwang, who became a national hero in South Korea after claiming a breakthrough in stem cell research that never actually happened.
  • Joachim Boldt, who forged a staggering 90 studies into drugs for regulating blood pressure during surgery. “These trials had been published over many years in leading journals, but it turned out they had never happened,” says Ian Roberts, Professor of Epidemiology at the London School of Hygiene and Tropical Medicine. “Again, when they were excluded from the review, it showed the treatment was not effective. British surgical guidelines had to be changed. It made me realise, if someone can get away with fabricating 90 studies, the system isn’t working.”

The story also discusses how the scientific system has been unable to reach agreement on the effectiveness of either hydroxychloroquine or ivermectin against COVID-19, largely because high-profile trials showing efficacy keep turning out to be fraudulent.

There’s much more and the entire article is, of course, worth reading in full.

Analysis

Smith zeroes in on the core problem: the scientific system is entirely trust-based. If someone emails a Word document containing a table of results to a journal, then it's just assumed that the trial did in fact take place as written. The document itself is supposed to be reviewed, although as we've previously discussed here, peer review is sometimes claimed to have happened when it very obviously couldn't have. But nobody checks anything deeply. Peer reviews, when they properly happen, take the intellectual honesty of the submitter for granted.

This system was probably okay at the start of the 20th century, when science was a small affair dominated by hobbyists, companies and standalone inventors. It's easy to forget that Einstein, perhaps the most celebrated scientist of all time, came to the attention of the world only after developing new physics in his spare time whilst working as a Swiss patent clerk. But after the end of World War Two, governments drastically ramped up their spending on academic research. Throughout the 20th century science didn't just grow, it grew exponentially (note the log scale):

Source: Bornmann & Mutz, 2014

In the second half of the 20th century, the number of papers published annually was doubling about every nine years, with the end of the war being a clear inflection point.

A century ago there was very little incentive for a scientist to lie to a journal. There was no point because there wasn’t much money in it. Academic positions were rare, the communities were small, and there were few enough interesting claims being published that they’d attract attention and be discovered if they weren’t true. But in 2021 it’s all very different. Annual production of new scientists by academia alone is vast:

The benefit of using the PhD as the yardstick for number of scientists is that it has a more standard definition across countries than measures such as the number of professional researchers and engineers.

The effect Chinese policies have had on science can be clearly seen in this graph, but even before China more than doubled its PhD production the trend was strongly upwards.

Underlying this system is an implicit assumption that the number of discoveries waiting to be made within a given time window is unlimited. Giving scientists money is seen as an uncontroversial vote-winning position, so nobody in government stops to ask whether there are actually enough answerable scientific questions available to absorb the increased research budgets. If there aren't, then people become tempted to either make up answers, as in much of the COVID 'science' that is written about on this site, or make up questions, hence the proliferation of un-rigorous fields like the study of "white tears".

Did Barney Calman get wind of this story by reading this site? It'd be nice to think so. If you're out there, Barney, why not drop us a line and say hello? There are plenty more investigations like that one in the archives of the Daily Sceptic, such as "Fake Science: the misinformation pandemic in scientific journals" and "436 randomly generated papers published by Springer Nature", which examine the use of AIs to generate fake scientific papers, or "The bots that are not", which shows that virtually all academic research into the existence of bots on Twitter is wrong. It's of vital importance that our society becomes more aware of the flaws in the research system, as it's the only way to break the cycle of governments and media taking so-called scientific claims for granted.

Is Tom Chivers Right to Say PCR False Positives are “So Rare” they can be Ignored?

Tom Chivers at UnHerd has published an article headlined “PCRs are not as reliable as you might think“, sub-headlined “Government policy on testing is worryingly misleading”. The core argument of the article is that due to high rates of false negatives, a positive lateral flow test followed by a negative ‘confirmation’ PCR should be treated as a positive. I pass no comment on this. However, the article makes a claim that itself needs to be fact checked. It’s been quite a long time since PCR accuracy last came up as a topic, but this article provides a good opportunity to revisit some (perhaps lesser known) points about what can go wrong with PCR testing.

The claim that I want to quibble with is:

False positives are so rare that we can ignore them.

Claims about the false positive (FP) rate of PCR tests often turn out on close inspection to be based on circular logic or invalid assumptions of some kind. Nonetheless, there are several bits of good news here. Chivers – being a cut above most science journalists – does provide a citation for this claim. The citation is a good one: it’s the official position statement from the U.K. Office for National Statistics. The ONS doesn’t merely make an argument from authority, but directly explains why it believes this to be true using evidence – and multiple arguments for its position are presented. Finally, the arguments are of a high quality and appear convincing, at least initially. This is exactly the sort of behaviour we want from journalists and government agencies, so it’s worth explicitly praising it here, even if we may find reasons to disagree – and disagree I do.
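
To see why the size of the false positive rate matters at all, recall how it interacts with prevalence: the rarer the disease actually is among the people being tested, the larger the share of positive results that are false. The sketch below uses purely illustrative numbers – they are not the ONS's figures and this is not an attempt to compute a corrected rate – just to show the arithmetic.

```python
# Illustrative only: how a fixed false positive rate interacts with prevalence.
# None of these numbers come from the ONS or from real PCR validation data.

def share_of_positives_that_are_false(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive test results that are false positives."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return false_positives / (true_positives + false_positives)

sensitivity = 0.90           # assumed: 90% of infected people test positive
false_positive_rate = 0.005  # assumed: 0.5% of uninfected people test positive

for prevalence in (0.05, 0.01, 0.001):
    frac = share_of_positives_that_are_false(prevalence, sensitivity, false_positive_rate)
    print(f"prevalence {prevalence:.1%}: {frac:.0%} of positives are false")
```

The same false positive rate that is negligible when one person in twenty has the virus becomes dominant when one in a thousand does, which is why claims that false positives can simply be ignored deserve scrutiny rather than a wave of the hand.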

Please note that in what follows I don’t try to re-calculate a corrected FP rate, alternative case numbers or to argue that Covid is a “casedemic”.

Let’s begin.

436 Randomly Generated ‘Peer Reviewed’ Papers Published by Springer Nature

There follows a guest post by Daily Sceptic contributing editor Mike Hearn about the ongoing problem of apparently respectable scientific journals publishing computer-generated ‘research’ papers that are complete gibberish.

The publisher Springer Nature has released an “expression of concern” for more than four hundred papers they published in the Arabian Journal of Geosciences. All these papers supposedly passed through both peer review and editorial control, yet no expertise in geoscience is required to notice the problem:

The paper can’t decide if it’s about organic pollutants or the beauty of Latin dancing, and switches instantly from one to the other halfway through the abstract.

The publisher claims this went through about two months of review, during which time the editors proved their value by assigning it helpful keywords:

If you’re intrigued by this fusion of environmental science and fun hobbies, you’ll be overjoyed to learn that the full article will only cost you about £30 and there are many more available if that one doesn’t take your fancy, e.g.

Background

Peer-reviewed science is the type of evidence policymakers respect most. Nonetheless, a frequent topic on this site is scientific reports containing errors so basic that any layman can spot them immediately, leading to the question of whether anyone actually read the papers before publication. An example is the recent article by Imperial College London, published in Nature Scientific Reports, in which the first sentence was a factually false claim about public statistics.

Evidence is now accruing that it’s indeed possible for “peer reviewed” scientific papers to be published which have not only never been reviewed by anybody at all, but might not have even been written by anybody, and that these papers can be published by well known firms like Springer Nature and Elsevier. In August we wrote about the phenomenon of nonsensical “tortured phrases” that indicate the usage of thesaurus-driven paper rewriting programs, probably the work of professional science forging operations called “paper mills”. Thousands of papers have been spotted using this technique; the true extent of the problem is unknown. In July, I reported on the prevalence of Photoshopped images and Chinese paper-forging efforts in the medical literature. Papers are often found that are entirely unintelligible, for example this paper, or this one whose abstract ends by saying, “Clean the information for the preparation set for finding valuable highlights to speak to the information by relying upon the objective of the undertaking.” – a random stream of words that means nothing.

Where does this kind of text come from?

The most plausible explanation is that these papers are being auto-generated using something called a context-free grammar. The goal is probably to create the appearance of interest in the authors they cite: in academia, promotions are linked to publications and citations, creating a financial incentive to engage in this sort of metric gaming. The signs are all there: inexplicable topic switches halfway through sentences or paragraphs, rampant grammatical errors, the repetitive title structure, citations of real papers and so on. Another sign is the explanation the journal supplied for how it occurred: the editor claims that his email address was hacked.
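
For readers unfamiliar with the term, a context-free grammar is just a set of fill-in-the-blank rules that can be expanded recursively to produce unlimited grammatical-looking text. The toy sketch below illustrates the mechanism; it is not the software used by paper mills or by SCIgen, and the vocabulary is invented.

```python
import random

# Toy context-free grammar: each symbol expands into one of several templates,
# which may themselves contain further symbols. Purely illustrative.
GRAMMAR = {
    "SENTENCE": [
        "In this paper we propose NOUN_PHRASE for NOUN_PHRASE.",
        "Experimental results confirm that NOUN_PHRASE outperforms NOUN_PHRASE.",
        "We evaluate NOUN_PHRASE by relying upon NOUN_PHRASE.",
    ],
    "NOUN_PHRASE": ["ADJECTIVE NOUN", "ADJECTIVE NOUN based on NOUN"],
    "ADJECTIVE": ["novel", "profound", "irregular", "colossal"],
    "NOUN": ["neural organization", "information", "framework", "undertaking"],
}

def expand(symbol):
    """Recursively expand a grammar symbol into words."""
    suffix = ""
    if symbol.endswith("."):
        symbol, suffix = symbol[:-1], "."
    if symbol not in GRAMMAR:
        return symbol + suffix
    template = random.choice(GRAMMAR[symbol])
    return " ".join(expand(token) for token in template.split()) + suffix

# Three "sentences" of instant abstract: grammatical in isolation, meaningless overall.
print(" ".join(expand("SENTENCE") for _ in range(3)))
```

Each sentence parses; none of them means anything, and nothing connects one to the next, which is exactly the signature of the papers described above.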

In this case, something probably went wrong during the production process that caused different databases of pre-canned phrases to be mixed together incorrectly. The people generating these papers are doing it on an industrial scale, so they didn’t notice because they don’t bother reading their own output. The buyers didn’t notice – perhaps they can’t actually read English, or don’t exist. Then the journal didn’t notice because, apparently, it’s enough for just one person to get “hacked” for the journal to publish entire editions filled with nonsense. And finally none of the journal’s readers noticed either, leading to the suspicion that maybe there aren’t any.

The volunteers spotting these papers are uncovering an entire science-laundering ecosystem, hiding in plain sight.

We know randomly generated papers can get published because it’s happened hundreds of times before. Perhaps the most famous example is SCIgen, “a program that generates random Computer Science research papers, including graphs, figures, and citations” using context-free grammars. It was created in 2005 by MIT grad students as a joke, with the aim to “maximize amusement, rather than coherence”. SCIgen papers are buzzword salads that might be convincing to someone unfamiliar with computer science, albeit only if they aren’t paying attention.

Despite this origin, in 2014 over 120 SCIgen papers were withdrawn by leading publishers like the IEEE after outsiders noticed them. In 2020 two professors of computer science observed that the problem was still occurring and wrote an automatic SCIgen detector. Although it’s only about 80% reliable, it nonetheless spotted hundreds more. Their detector is now being run across a subset of new publications and finds new papers on a regular basis.

Root cause analysis

On its face, this phenomenon is extraordinary. Why can’t journals stop themselves publishing machine-generated gibberish? It’s impossible to imagine any normal newspaper or magazine publishing thousands of pages of literally random text and then blaming IT problems for it, yet this is happening repeatedly in the world of academic publishing.

The surface-level problem is that many scientific journals appear to be almost or entirely automated, including journals that have been around for decades. Once papers are submitted, the reviewing, editorial and publishing process is handled by computers. If the system stops working properly, editors can seem oblivious – they routinely discover they have published nonsense only because people who don’t even subscribe to their journal complained about it.

Strong evidence for this comes from the “fixes” journals present when put under pressure. As an explanation for why the 436 “expressions of concern” wouldn’t be repeated, the publisher said:

The dedicated Research Integrity team at Springer Nature is constantly searching for any irregularities in the publication process, supported by a range of tools, including an in-house-developed detection tool.

The same firm also proudly trumpeted in a press release that:

Springer announces the release of SciDetect, a new software program that automatically checks for fake scientific papers. The open source software discovers text that has been generated with the SCIgen computer program and other fake-paper generators like Mathgen and Physgen. Springer uses the software in its production workflow to provide additional, fail-safe checking.

A different journal proposed an even more ridiculous solution: ban people from submitting papers from webmail accounts. The more obvious solution of paying people to read the articles before they get published is apparently unthinkable – the problem of fake auto-generated papers is so prevalent, and the scientific peer review process so useless, that they are resorting to these feeble attempts to automate the editing process.

Diving below the surface, the problem may be that journals face functional irrelevance in the era of search engines. Clearly nobody can be reading the Arabian Journal of Geosciences, including its own editors, yet according to an interesting essay by Prof Igor Pak, “publisher’s contracts with [university] libraries require them to deliver a certain number of pages each year”. What’s in those pages? The editors don’t care because the libraries pay regardless. The librarians don’t care because the universities pay. The universities don’t care because the students and granting bodies pay. The students and granting bodies don’t care because the government pays. The government doesn’t care because the citizens pay, and the citizens DO care – when they find out about this stuff – but generally can’t do anything about it because they’re forced to pay through taxes, student loan laws and a (socially engineered) culture in which people are told they must have a degree or else they won’t be able to get a professional job.

This seems to be zombifying scientific publishing. Non-top-tier journals live on as third-party proof that some work was done, which in a centrally planned economy has value for justifying funding requests to committees. But in any sort of actual market-based economy many of them would have disappeared a long time ago.

The Bots That Are Not

Since 2016 automated Twitter accounts have been blamed for Donald Trump and Brexit (many times), Brazilian politics, Venezuelan politics, skepticism of climatology, cannabis misinformation, anti-immigration sentiment, vaping, and, inevitably, distrust of COVID vaccines. News articles about bots are backed by a surprisingly large amount of academic research. Google Scholar alone indexes nearly 10,000 papers on the topic. Some of these papers received widespread coverage:

Unfortunately there’s a problem with this narrative: it is itself misinformation. Bizarrely and ironically, universities are propagating an untrue conspiracy theory while simultaneously claiming to be defending the world from the very same.

The visualization above comes from “The Rise and Fall of Social Bot Research” (also available in talk form). It was quietly uploaded to a preprint server in March by Gallwitz and Kreil, two German investigators, and has received little attention since. Yet their work completely destroys the academic field of bot research to such an extreme extent that it’s possible there are no true scientific papers on the topic at all.

The authors identify a simple problem that crops up in every study they looked at. Unable to directly detect bots because they don’t work for Twitter, academics come up with proxy signals that are asserted to imply automation but which actually don’t. For example, Oxford’s Computational Propaganda Project – responsible for the first paper in the diagram above – defined a bot as any account that tweets more than 50 times per day. That’s a lot of tweeting but easily achieved by heavy users, like the famous journalist Glenn Greenwald, the slightly less famous member of German Parliament Johannes Kahrs – who has in the past managed to rack up an astounding 300 tweets per day – or indeed Donald Trump, who exceeded this threshold on six different days during 2020. Bot papers typically don’t provide examples of the bot accounts they claimed to identify, but in this case four were presented. Of those, three were trivially identifiable as (legitimate) bots because they actually said they were bots in their account metadata, and one was an apparently human account claimed to be a bot with no evidence. On this basis the authors generated 27 news stories and 323 citations, although the paper was never peer reviewed.
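
To make the fragility of that definition concrete, here is the entire 'detection method' written out as code, applied to the kinds of accounts mentioned above. The sketch is mine and the per-day figures are only the ones cited in this paragraph; where an exact count isn't given, a placeholder just above the threshold is used.

```python
# The Computational Propaganda Project's operational definition, as described
# above: any account tweeting more than 50 times per day is classed as a bot.
BOT_THRESHOLD_TWEETS_PER_DAY = 50

def classified_as_bot(tweets_per_day):
    return tweets_per_day > BOT_THRESHOLD_TWEETS_PER_DAY

# Known human accounts that can clear the bar on busy days.
examples = {
    "Johannes Kahrs (peak day, per the text)": 300,
    "Donald Trump (busiest days of 2020)": 51,  # exact figure not given; >50 suffices
    "Typical light user": 3,
}

for name, rate in examples.items():
    label = "bot" if classified_as_bot(rate) else "human"
    print(f"{name}: {label}")
```

A rule that flags heavy human tweeters as software is measuring enthusiasm, not automation.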

In 2017 I investigated the Berkeley/Swansea paper and found that it was doing something very similar, but using an even laxer definition. Any account that regularly tweeted more than five times after midnight from a smartphone was classed as a bot. Obviously, this is not a valid way to detect automation. Despite being built on nonsensical premises, invalid modelling, mis-characterisations of its own data and once again not being peer reviewed, the authors were able to successfully influence the British Parliament. Damian Collins, the Tory MP who chaired the DCMS Select Committee at the time, said: “This is the most significant evidence yet of interference by Russian-backed social media accounts around the Brexit referendum. The content published and promoted by these accounts is clearly designed to increase tensions throughout the country and undermine our democratic process. I fear that this may well be just the tip of the iceberg.”

But since 2019 the vast majority of papers about social bots rely on a machine learning model called ‘Botometer’. The Botometer is available online and claims to measure the probability of any Twitter account being a bot. Created by a pair of academics in the USA, it has been cited nearly 700 times and generates a continual stream of news stories. The model is frequently described as a “state of the art bot detection method” with “95% accuracy”.

That claim is false. The Botometer’s false positive rate is so high it is practically a random number generator. A simple demonstration of the problem was the distribution of scores the model gave to verified members of U.S. Congress, accounts that are unquestionably run by humans.
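
The check behind that demonstration is straightforward to describe: take a set of accounts that are known to be operated by humans, score them all, and see how many the model flags anyway. A minimal sketch of the procedure follows; `bot_probability` is a hypothetical stand-in for whatever classifier is being audited, not the real Botometer client.

```python
# Sketch of an empirical false positive check: score accounts known to be human
# and count how many a classifier flags as bots anyway.
# `bot_probability` is a hypothetical stand-in, not the real Botometer API.

def estimated_false_positive_rate(known_human_accounts, bot_probability, threshold=0.5):
    """Fraction of known-human accounts that the model labels as bots."""
    flagged = sum(
        1 for account in known_human_accounts if bot_probability(account) >= threshold
    )
    return flagged / len(known_human_accounts)

# Usage (illustrative): verified members of Congress are public figures whose
# accounts are run by people or their staff, so every one of them scored above
# the threshold is a false positive by construction.
#
#   fpr = estimated_false_positive_rate(congress_handles, my_bot_scorer)
#   print(f"False positive rate on known humans: {fpr:.0%}")
```

Whatever headline accuracy a model advertises, this kind of check against a known-human population is what reveals how it behaves on the overwhelmingly human accounts it will actually be run against.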

The Latest Paper From Neil Ferguson et al. Defending the Lockdown Policy is Out of Date, Inaccurate and Misleading

Neil Ferguson’s team at Imperial College London (ICL) has released a new paper, published in Nature, claiming that if Sweden had adopted U.K. or Danish lockdown policies its Covid mortality would have halved. Although we have reviewed many epidemiological papers on this site, and especially from this particular team, let us go unto the breach once more and see what we find. The primary author on this new paper is Swapnil Mishra.

The paper’s first sentence is this:

The U.K. and Sweden have among the worst per-capita Covid mortality in Europe.

No citation is provided for this claim. The paper was submitted to Nature on March 31st, 2021. If we review a map of cumulative deaths per million on the received date then this opening statement looks very odd indeed:

Sweden (with a cumulative total of 1,333 deaths/million) is by no means “among the worst in Europe” and indeed many European countries have higher totals. This is easier to see using a graph of cumulative results:

But that was in March, when the paper was submitted. We’re reviewing it in August because that’s when it was published. Over the duration of the journal’s review period this statement – already wrong at the start – became progressively more and more incorrect:

Postcard from France

There follows a guest post by former Google software engineer Mike Hearn.

I just got back from the south of France, flying from Switzerland. My fiancée and I visited Antibes and its local theme parks. The Pass Sanitaire came into force shortly before we arrived, so I got to see how it was doing in its first days of implementation.

Polling from the end of July stated that about half of the French are opposed to the anti-pass protests, about 35% are supportive and about 15% are indifferent. How does it look on the ground? I decided to do a simple experiment to find out: always present an expired test, even though I had a valid negative one, and see what happened. Over a four-day stay I was required to show a valid pass exactly zero times; that includes at the airports in both directions. Compliance is the bare minimum viable, and often less. At small businesses enforcement was non-existent: sometimes the pass requirement was ignored entirely, other times we were asked “do you have a pass?” and our answer wasn’t checked. One restaurant had come up with a clever way to detect police stings without requiring customers to actually present a pass. As expected, enforcement was stricter at larger firms, but even there we saw the following:

  • Test certificates being checked once and then swapped for a token that doesn’t expire.
  • Expired tests being accepted.
  • People accepting paper test certificates without scanning them.
  • Scanning tests and then not looking at the screen to see the results.
  • Accepting QR codes that failed to scan.

We saw no evidence of compliance checks being done on businesses, although it was only a short stay. We saw only one person voluntarily present a pass when it wasn’t being requested, and that person was unsurprisingly quite elderly.

Mask enforcement has collapsed. In the theme parks nobody was wearing masks despite the signs and announcements telling people it was obligatory. Even at large venues the staff frequently wear masks around their chins or dispense with them altogether. Social distancing is of course a long forgotten memory by now.

It’s a good thing enforcement is lax because the testing system is in a state of disarray. The massive throughput needs of the Pass Sanitaire mean that every pharmacy is operating rapid testing tents with staff no more trained than the average counter clerk. Despite that, there are always huge ‘queues’ (often more like a crowd milling around outside a tent). At one pharmacy, personal details had to be filled out on a smartphone while you were hanging around, but you didn’t get any kind of code or evidence you’d done so. The tester did the test, then assured me I’d get my results in 20 minutes. I had to point out that this was impossible because he had no idea who I was.

Delivery of test results also seems to be quite broken. Although the test completes within 15-20 minutes the results email frequently took hours to arrive for me. Moreover, that email did not contain the certificate. Instead you retrieve your results only after getting a code via email or SMS, which must be typed in within ten minutes. SMS codes never turned up and emails were routinely being delivered after the ten minute window had expired, meaning that actually downloading your test certificate was an exercise in frustration. My guess is the sudden spike in testing combined with the desire to use new-style digitally signed QR codes is causing automatic anti-spam throttling of messages from the Government. It’s possible they didn’t anticipate this and now have no way to fix it without removing the ‘security’ on their system.

Macron has claimed that, “Never before in our history was a crisis of such magnitude fought in such a democratic way.” In the parts we visited at least, the French are ‘democratically’ rejecting his rule by simply ignoring it. The motions are being made but on close-up inspection nothing is actually happening.

Polls vs. reality

How can this experience be reconciled with the anti-protest polling?

One answer is that the beliefs of a composite/average French person don’t actually matter here. The scheme is most popular with the elderly and public sector workers, least popular with the young and business owners. But retirees and government workers aren’t the ones waiting tables or selling tickets. Nor are they the employers of the people who do. In fact, among company managers, sympathy for the protests rises to 60%; higher than private sector workers as a whole. We may also assume that plenty of people are against the Pass Sanitaire while also being against protests, which have a history of being violent and disruptive in France, and it seems safe to assume that support for it will fall further as the system actually starts to bite (the poll pre-dates enforcement). Thus the true levels of support for the pass amongst the managerial classes are certainly much lower than the 40% this poll would imply.

Finally, not for the first time, we must raise eyebrows at polling that paints a totally different picture of what people think than what is observable with our own eyes. Polling firms try hard to ensure their sample is representative, but it’s been known for many years that their samples are not genuinely reflective of the population under test. A large but very predictable problem is that polls massively over-represent volunteers. It seems likely that there’s a correlation between the sort of people who support the Pass Sanitaire and the sort of people who enthusiastically volunteer to spend time on surveys without compensation. In the past I’ve encountered a belief among (ex) professional pollsters that it’s an open secret in the business that any question with a “pro-social” answer will get wildly un-representative answers. However, I’ve never been able to find any kind of rigorous written discussion of this. If you work or have worked in polling and have some insight to offer here, please do get in touch and share it. Enquiring minds would like to learn more!
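
The mechanism is easy to see with a toy calculation. Suppose, purely for illustration, that supporters of a policy are three times as likely as opponents to take part in an unpaid survey; the raw result then badly overstates support even if the sample is demographically balanced. (All of the numbers below are invented for the example.)

```python
# Toy illustration of volunteer (non-response) bias. All numbers are invented.
true_support = 0.30              # assumed real share of the population in favour
response_rate_supporters = 0.15  # assumed: supporters readily answer surveys
response_rate_opponents = 0.05   # assumed: opponents mostly can't be bothered

respondents_for = true_support * response_rate_supporters
respondents_against = (1 - true_support) * response_rate_opponents
observed_support = respondents_for / (respondents_for + respondents_against)

print(f"True support:     {true_support:.0%}")      # 30%
print(f"Observed support: {observed_support:.0%}")  # about 56%
```

Demographic weighting can't repair this on its own, because the bias comes from willingness to volunteer, not from age or region.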

Conclusion

Converting ordinary businesses into an unpaid police force cannot work without high levels of support amongst the young and entrepreneurial. The Pass Sanitaire doesn’t have that. Implementation difficulties leading to arbitrary and random arrests will only further erode support for the scheme.

Fake Science: The Misinformation Pandemic in Scientific Journals

Another cluster of fake scientific papers has been discovered, this time primarily about electronic medical devices and software. A group of three researchers has published an exposé of papers in which ordinary terms like artificial intelligence and facial recognition are replaced with bizarre alternatives auto-generated from a thesaurus. This appears to be an attempt to hide plagiarism, AI-driven paper auto-generation and/or “paper mill” activity, in which companies generate forged research and sell it to (pseudo-)scientists who want to get promoted.

Genuine term              Auto-generated replacement
Big data                  Colossal information
Facial recognition        Facial acknowledgement
Artificial intelligence   Counterfeit consciousness
Deep neural network       Profound neural organization
Cloud computing           Haze figuring
Signal to noise           Flag commotion
Random value              Irregular esteem

Examples of machine-generated substitutions
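
Spotting this kind of output can be largely automated: the substitutions are so unnatural that a simple search for known 'tortured phrases' already flags candidate papers. The sketch below is a simplified illustration using only the phrases from the table above; the real investigations rely on a much larger, curated fingerprint list.

```python
# Simplified "tortured phrase" scan: flag text containing known thesaurus-style
# substitutions of standard technical terms (phrase list taken from the table above).
TORTURED_PHRASES = {
    "colossal information": "big data",
    "facial acknowledgement": "facial recognition",
    "counterfeit consciousness": "artificial intelligence",
    "profound neural organization": "deep neural network",
    "haze figuring": "cloud computing",
    "flag commotion": "signal to noise",
    "irregular esteem": "random value",
}

def tortured_phrase_hits(text):
    """Return any known tortured phrases found in the text."""
    lowered = text.lower()
    return [phrase for phrase in TORTURED_PHRASES if phrase in lowered]

abstract = ("We apply counterfeit consciousness and a profound neural organization "
            "to colossal information collected from medical devices.")
print(tortured_phrase_hits(abstract))  # flags three suspicious phrases
```

The same idea scales to millions of abstracts, which is how thousands of suspect papers have been spotted across publishers.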

Often these papers originate in China, where the CCP has mandated that every single medical doctor must publish research papers to get promoted (i.e. in their non-existent spare time). If you’re new to this topic, my previous article on Photoshopped images and impossible numbers in scientific papers provides some background along with an entertaining begging letter from a Chinese doctor who got busted.

Most of the bad science covered on the Daily Sceptic is of the intellectually dishonest kind: an absurd assumption here, ignored evidence over there. Sometimes professors – like those at Imperial College London – turn out to be incapable of using computers correctly and are presenting internal data corruption in their models as ‘evidence’, a problem I wrote about in my first article for this site. While these papers are extremely serious for public trust in science, especially given the huge impact they have had, there are even worse problems lurking in the depths of the literature. The biggest is probably 100% fake papers that report on non-existent experiments, often in obscure areas of Alzheimer’s research or oncology.