by Sue Denim
I’d like to provide a follow-up to my first analysis: firstly because new information has come to light, and secondly to address a few points of disagreement I noticed in a minority of responses.
The hidden history. Someone realised it was unexpectedly possible to recover parts of the deleted history from GitHub, meaning we now have an audit log of changes dating back to April 1st. This is still not exactly the original code Ferguson ran, but it’s significantly closer.
Sadly it shows that Imperial have been making some false statements.
ICL staff claimed the released and original code are “essentially the same functionally”, which is why they “do not think it would be particularly helpful to release a second codebase which is functionally the same”.
In fact the second change in the restored history is a fix for a critical error in the random number generator. Other changes fix data corruption bugs (another one), algorithmic errors, and the fact that someone on the team can’t spell “household”; whilst this was taking place, other Imperial academics continued to add new features related to contact tracing apps.
The released code at the end of this process was not merely reorganised but contained fixes for severe bugs that would corrupt the internal state of the calculations. That is very different from “essentially the same functionally”.

The stated justification for deleting the history was to make “the repository rather easier to download” because “the history squash (erase) merged a number of changes we were making with large data files”. “We do not think there is much benefit in trawling through our internal commit histories”.
The entire repository is less than 100 megabytes. Given they recommend a computer with 20 gigabytes of memory to run the simulation for the UK, the cost of downloading the data files is immaterial. Fetching the additional history only took a few seconds on my home WiFi.
Even if the files had been large, the tools make it easy to skip downloading history when you don’t want it – a shallow clone (“git clone --depth 1”) solves this exact problem.
I don’t quite know what to make of this. Originally I thought these claims were a result of the academics not understanding the tools they’re working with, but the Microsoft employees helping them are actually employees of a recently acquired company: GitHub. GitHub is the service they’re using to distribute the source code and files. To defend this I’d have to argue that GitHub employees don’t understand how to use GitHub, which is implausible.
I don’t think anyone involved here has any ill intent, but it seems via a chain of innocent yet compounding errors – likely trying to avoid exactly the kind of peer review they’re now getting – they have ended up making false claims in public about their work.
Effect of the bug fixes. I was curious what effect the hidden bug fixes had on the model output, especially after seeing the change to the pseudo-random number generator constants (which implies the prior RNG didn’t work). I ran the latest code in single-threaded mode for the baseline scenario a couple of times to establish that it was producing the same results (on my machine only), which it did. Then I ran the version from the initial import against the latest data, to control for data changes.
The resulting output tables were so radically different that they appear incomparable; e.g. the older code outputs data for negative days and a different set of columns. Comparing the rows for day 128 (7th May) gave 57,145,154 infected-but-recovered people for the initial code but only 42,436,996 for the latest code, a difference of about 34%.
I wondered if the format of the data files had changed without the program being able to detect that, so then I reran the initial import code with the initial data. This yielded 49,445,121 recoveries – yet another completely different number.
It’s clear that the changes made over the past month and a half have radically altered the predictions of the model. It will probably never be possible to replicate the numbers in Report 9.
Political attention. I was glad to see the analysis was read by members of Parliament. In particular, via David Davis MP the work was seen by Steve Baker – one of the few British MPs who has been a working software engineer. Baker’s assessment was similar to that of most programmers: “David Davis is right. As a software engineer, I am appalled. Read this now”. Hopefully at some point the right questions will be asked in Parliament. They should focus on reforming how code is used in academia in general, as the issue is structural incentives rather than a single team. The next paragraph will demonstrate that.
Do the bugs matter? Some people don’t seem to understand why these bugs are important (e.g. this computational biology student, or this cosmology lecturer at Queen Mary). A few people have claimed I don’t understand models, as if Google has no experience with them.
Imagine you want to explore the effects of some policy, like compulsory mask wearing. You change the code and rerun the model with the same seed as before. The number of projected deaths goes up rather than down. Is that because:
- The simulation is telling you something important?
- You made a coding error?
- The operating system decided to check for updates at some critical moment, changing the thread scheduling, the consequent ordering of floating point additions and thus changing the results?
You have absolutely no idea what happened.
In a correctly written model this situation can’t occur. A change in the outputs means something real and can be investigated. It’s either intentional or a bug. Once you’re satisfied you can explain the changes, you can then run the simulation more times with new seeds to estimate some uncertainty intervals.
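To make that concrete, here is a minimal sketch of a controlled simulation (a toy of my own, with invented names and numbers, not the ICL code): the output is a pure function of the seed, so two runs with the same seed must match exactly, and any difference points at a real change or a bug.

```cpp
// Toy sketch, not the ICL code: a controlled simulation is a pure function
// of its seed and parameters, so two runs with the same seed must produce
// identical output. Any difference is either intentional or a bug.
#include <cstdint>
#include <iostream>
#include <random>

// Hypothetical toy "model": all randomness flows from the single seeded PRNG,
// and the iteration order is fixed (no threads, no uninitialised reads).
std::uint64_t run_model(std::uint64_t seed) {
    std::mt19937_64 rng(seed);
    std::poisson_distribution<int> new_infections(3.0);
    std::uint64_t total = 0;
    for (int day = 0; day < 100; ++day)
        total += new_infections(rng);
    return total;
}

int main() {
    const std::uint64_t seed = 42;
    const auto a = run_model(seed);
    const auto b = run_model(seed);
    // Prints the same number twice; if it ever didn't, that would be a bug,
    // not "stochasticity".
    std::cout << a << "\n" << b << "\n";
}
```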
In an uncontrollable model like ICL’s you can’t get repeatable results, and if the expected size of a change is smaller than the arbitrary variations, you can’t conclude anything from the model. And precisely because the variations are arbitrary, you don’t actually know how large they can get, which means there’s no way to conclude anything at all.
I ran the simulation three times with the code as of commit 030c350, using the default parameters, fixed seeds and configuration. A correct program would have yielded three identical outputs. For May 7th the maximum difference between the three runs was 46,266 deaths, or around 1.5x the actual UK total so far. This level of variance may look “small” compared to the enormous overall projections (which it seems are incorrect), but imagine trying to use these values for policymaking. The Nightingale hospitals added on the order of 10,000–15,000 places, so the uncontrolled differences due to bugs are larger than the NHS’s entire crash expansion programme. How can any government use this to test policy?
An average of wrong is wrong. There appears to be a seriously concerning issue with how British universities are teaching programming to scientists. Some of them seem to think hardware-triggered variations don’t matter if you average the outputs (they apparently call this an “ensemble model”).
Averaging samples to eliminate random noise works only if the noise is actually random. The mishmash of iteratively accumulated floating point uncertainty, uninitialised reads, broken shuffles, broken random number generators and other issues in this model may yield unexpected output changes but they are not truly random deviations, so they can’t just be averaged out. Taking the average of a lot of faulty measurements doesn’t give a correct measurement. And though it would be convenient for the computer industry if it were true, you can’t fix data corruption by averaging.
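A toy demonstration of the point (my own invented numbers, nothing from the model): averaging drives zero-mean noise towards zero, but a systematic error passes straight through.

```cpp
// Toy demonstration, not the model: averaging cancels zero-mean noise but
// leaves systematic error (bias) completely untouched.
#include <iostream>
#include <random>

int main() {
    const double truth = 100.0;
    const double bias  = 7.0;   // stands in for a broken shuffle or RNG
    std::mt19937 rng(1);
    std::normal_distribution<double> noise(0.0, 5.0);

    double sum = 0.0;
    const int runs = 100000;
    for (int i = 0; i < runs; ++i)
        sum += truth + bias + noise(rng);   // every run is faulty in the same way

    // The "ensemble average" converges to truth + bias (about 107), not to
    // truth: no number of averaged runs recovers the correct value.
    std::cout << "ensemble average = " << sum / runs << "\n";
}
```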
I’d recommend all scientists writing code in C/C++ read this training material from Intel. It explains how code that works with fractional numbers (floating point) can look deterministic yet end up giving non-reproducible results. It also explains how to fix it.
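For a flavour of what that material covers, here is the classic illustration (not from the ICL code) of why summation order matters: floating point addition is not associative, so a scheduler that reorders additions can silently change the result.

```cpp
// Classic illustration: floating point addition is not associative, so
// changing the order of the same additions changes the result.
#include <cstdio>

int main() {
    const double big = 1e16, small = 1.0;

    // Adding the small values one at a time: each is lost to rounding.
    double a = ((big + small) + small) - big;   // 0

    // Adding the small values together first: they survive.
    double b = (big + (small + small)) - big;   // 2

    // A thread scheduler that reorders additions can flip between these.
    std::printf("a = %g, b = %g\n", a, b);
}
```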
Processes not people. This is important: the problem here is not really the individuals working on the model. The people in the Imperial team would quickly do a lot better if placed in the context of a well run software company. The problem is the lack of institutional controls and processes. All programmers have written buggy code they aren’t proud of: the difference between ICL and the software industry is the latter has processes to detect and prevent mistakes.
For standards to improve academics must lose the mentality that the rules don’t apply to them. In a formal petition to ICL to retract papers based on the model you can see comments “explaining” that scientists don’t need to unit test their code, that criticising them will just cause them to avoid peer review in future, and other entirely unacceptable positions. Eventually a modeller from the private sector gives them a reality check. In particular academics shouldn’t have to be convinced to open their code to scrutiny; it should be a mandatory part of grant funding.
The deeper question here is whether Imperial College administrators have any institutional awareness of how out of control this department has become, and whether they care. If not, why not? Does the title “Professor at Imperial” mean anything at all, or is the respect it currently garners just groupthink?
Insurance. Someone who works in reinsurance posted an excellent comment in which they claim:
- There are private sector epidemiological models that are more accurate than ICL’s.
- Despite that they’re still too inaccurate, so they don’t use them.
- “We always use 2 different internal models plus for major decisions an external, independent view normally from a broker. It’s unbelievable that a decision of this magnitude was based off a single model”
They conclude by saying “I really wonder why these major multinational model vendors who bring in hundreds of millions in license fees from the insurance industry alone were not consulted during the course of this pandemic.”
A few people criticised the suggestion for epidemiology to be taken over by the insurance industry. They had insults (“mad”, “insane”, “adding 1 and 1 to get 11,000”, etc.) but no arguments, so they lose that debate by default. Whilst it wouldn’t work in the UK, where health insurance hardly matters, in most of the world insurers play a key part in evaluating relative health risks.
Here’s the argument for the virus ‘not existing’…
https://drsambailey.com/a-farewell-to-virology-expert-edition/
Dr Tom Cowan on the subject…
https://www.peteyvid.com/dr-thomas-cowan-explains-the-difference-of-a-virus-653005638.php
I have always been clear in my mind that PCR testing at high cycle thresholds is a waste of time. The other problem I would point out is that, due to the pressure to ramp up the number of daily tests performed, I’m not convinced that protocols were always followed, either at testing centres or diagnostic labs.
Conducting a highly sensitive test on someone leaning out of a car window, administered by a barely trained student in a car park, with the samples then thrown into boxes in a shipping container, picked up by a courier and chucked in the back of a van, and delivered to a lab running at 200% of intended capacity, staffed by trainees working 12-hour shifts under pressure to hit targets – that doesn’t exactly strike me as a recipe for sound clinical control.
Sound clinical control is not what they’re after. Numbers to ramp up the fear to stop folk from using their critical thinking skills & just use the primitive brain, the amygdala, to make an emotional decision based on fear. The fear & primitive brain thinking is what got the bioweapon injections into so many arms. They’re trying to ramp up the fear again for the booster bioweapon injections currently.
I concur!
PCR is pretty much useless as a diagnostic tool: thanks to high thresholds, all it proves is the presence of a molecule that the immune system may or may not have dealt with. Given that the immune system is not a protective bubble, this makes PCR a pretty hopeless tool without reference to symptoms consistent with illness. This was of course all known by the con artists profiteering from convid, so the nonsense of asymptomatic respiratory disease was pushed by the fear-mongering MSM. Of course, if you test positive AND have severe symptoms you have COVID; if not, you don’t.
The author is a scientist (I guess). I’m a medic. Unfortunately, scientists don’t understand a simple fact, and that is that medicine is 50% science and 50% human psychology. That’s why medicine, unlike the other sciences, has the bizarre placebo effect and psychosomatic illnesses and symptoms.
The mistake being made by all is to assume that Covid is from the scientific side of medicine when it actually, I believe, belongs firmly within the human psychology side of medicine.
It’s not a ‘new’ virus because, despite PCR, it is not an illness that exists in reality.
Yes, the clever PCR can detect some genetic sequence, but that by no means makes it a physical virus it’s detecting.
Let someone take a PCR test, tell them they have a deadly virus in them, tell them what the symptoms are and, guess what? They start to exhibit those symptoms! That’s the psychosomatic bit.
“But what if they’ve got a physical symptom, like a high temperature?” That can be explained by misdiagnosis of an existing illness (usually a cold) which is then distorted by the mind (aided by PCR) to confirm that it is a ‘different’ illness. Funnily enough, even though the symptoms are identical, it always seems to be “way worse” than a cold or the “worst thing ever”.
The only different symptom is the loss of taste and smell, but this is actually a common symptom of a cold, and it is a subjective symptom: it depends upon what someone tells us; it’s not observable and can’t be tested for. It’s meaningless in diagnosis.
What the PCR test has achieved is to activate another far, far deadlier illness to humans, and that’s mass psychosis.
That illness killed 200,000 in the medieval witch trials and, arguably, led to the extermination of 6 million Jews in WW2.
It’s a collective belief in a delusion or a group insanity that has its basis in irrational fear.
A microcosm of what happened with Covid can be seen in the 2006 Dartmouth-Hitchcock Medical Centre whooping cough outbreak. That was a mini medical hysteria over what turned out to be an imaginary illness. It was one of the earliest instances where PCR was used to confirm the diagnosis, and that PCR testing turned out to be… 0% accurate.
It’s not a ‘new’ virus, it’s a very, very old human illness.
Covid does exist, but unfortunately, it only exists in our minds.
and then there’s Dr Andrew Kaufman showing how the codes being tested for can be sourced through BLAST to match a pantload of other bodily sources…
https://www.bitchute.com/video/MEzC8eu1W3fj/
The primers are quite short, so indeed on their own they will match any number of other sequences, that is trivially true. However PCR uses two primers, a forward and a reverse, and it will only exponentially amplify a sequence whose ends match both, which those other instances will not. There is a tendency in some quarters to quote Andrew Kaufman the way conformist medics quote the MHRA and the CDC, as an infallible authority whose word alone is sufficient to make something true, but here he just makes it look like he doesn’t understand PCR.
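To illustrate the point, here is a deliberately simplified sketch (a toy with a made-up target sequence and primers, not a real assay or tool): a sequence is only reported as amplifiable when the forward primer matches it and the reverse primer’s reverse complement matches further along.

```cpp
// Toy sketch of the two-primer requirement described above (illustration
// only, not a real PCR model): exponential amplification needs BOTH primers,
// the forward primer on one strand and the reverse primer on the
// complementary strand at the other end of the amplicon. A lone match to
// one short primer is not enough.
#include <algorithm>
#include <iostream>
#include <string>

// Reverse complement of a DNA sequence (A<->T, C<->G).
std::string reverse_complement(std::string s) {
    std::reverse(s.begin(), s.end());
    for (char& c : s)
        c = (c == 'A') ? 'T' : (c == 'T') ? 'A' : (c == 'C') ? 'G' : 'C';
    return s;
}

// True only if the forward primer matches the target and the reverse
// primer's reverse complement matches further downstream.
bool amplifiable(const std::string& target,
                 const std::string& fwd, const std::string& rev) {
    const std::size_t f = target.find(fwd);
    if (f == std::string::npos) return false;
    return target.find(reverse_complement(rev), f + fwd.size())
           != std::string::npos;
}

int main() {
    const std::string target = "GGTAACTGGTATGATTTCGCCAACGT";  // made-up sequence
    std::cout << amplifiable(target, "GGTAACT", "ACGTTGG") << "\n";  // 1: both match
    std::cout << amplifiable(target, "GGTAACT", "AAAAAAA") << "\n";  // 0: one primer alone
}
```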
Not personally saying Kaufman is an infallible authority… just pointing out alternative viewpoints. Also, it’s not necessarily ‘the ends’ of sequences that are being matched by primer pairs either. Who does understand PCR in regard to its present dubious application? There’s serious challenge to its use as a diagnostic tool for ‘virology’, from Kary Mullis himself all the way to…
https://www.researchgate.net/publication/346483715_External_peer_review_of_the_RTPCR_test_to_detect_SARS-CoV-2_reveals_10_major_scientific_flaws_at_the_molecular_and_methodological_level_consequences_for_false_positive_results
Toxicology could be a better approach to all these mysteries, in terms of the possibility that exosomes originating from a universal toxin are being detected and mistaken for viral material.
https://exosome-rna.com/is-covid-19-virus-an-exosome/
If the RNA content of an EV (exosome) induced by a known toxin has been sequenced, you don’t actually need an ‘external’ viral source – everybody is simultaneously producing them, as we have been since forever – and the toxin might be EMFs, whether natural or man-made. Electrosensitivity is an individual and highly variable human quality that would preclude illness, or simply the detox that exosomes are purported to be the harbingers of anyway. Protein synthesis is a subtle and sensitive chain of electron transfer between large molecules, based on the evolution/creation of the biological system within the terrestrial EMF.
PCR can be a great diagnostic tool when used on symptomatic people, particularly for differential diagnosis. For example it can tell you which strain of TB you have so that doctors won’t waste time with antibiotics that the strain is resistant to, with a result in about an hour rather than the several days which a lab culture would take. But even then, it would be normal to diagnose TB with a chest X-ray first. Using PCR as an initial screen, particularly on asymptomatic people, is certainly much more dodgy because the technique is so sensitive.
So the test is reliable when used in an appropriate way, is conducted correctly at all stages and is interpreted intelligently.
It is a shame about the last couple of years.
It could be argued that the only time a PCR test could be justified would be if the person had symptoms and the sample was run with a cut-off point (I think 24 Cts is the sweet spot for detecting infection) so that it wasn’t picking up meaningless fragments of RNA that the body had cleared. Also, culturing the samples in a lab would provide further confirmation, but there are bound to be cost and time constraints with doing that en masse.
But the counter argument for that approach, now that we are in Omicron times, is that a person could be just as symptomatic, if not more severely so, with one of the other 199 viruses floating around at this time of year, so what exactly is to be gained from singling out this one specific, now very inconsequential and less dangerous to everyone, virus? It is no longer a threat no matter how much disproportionate attention is paid to it. People need to move on.
It’s interesting when you start inserting PCR test primers from (for example) here…
https://pubs.acs.org/doi/10.1021/acsinfecdis.0c00464
into here…
https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastSearch&BLAST_SPEC=OGP__9606__9558&LINK_LOC=blasthome
“In 2020, public health bureaucrats decided it would be more fun to exterminate a virus, regardless of what this project meant for anybody’s health.”
Which basically means we are back to cock-up theory. Every national public health body in the Western world abandoned years of learning for a bit of jolly experimental covid elimination.
I have never bought this theory and without concrete evidence I never will.
In other words, this has nothing to do with the WEF and the Davos Deviants, although, suspiciously, three African presidents died (murdered, more like) for failing to get with the programme.
The rest of the article has some valid points, but as Mogs has posted earlier, it is time to move on from the start of their Reset and focus on the next set of horrors waiting to be unleashed: CBDC, Social Credit, and downright poverty, as in cold and hunger and no health service.
It’s not just the deliberate manipulation via high CTs (or lower ones, but only for the vaxxed, as was once ridiculously put in place in the USA); it is the total lack of standardisation with regard to CTs, target gene snippets, solutions used, confirmation cycle yes/no, etc., which always was and still is the main problem with their ab-use.
For individuals, that means that there was and still is a huge lottery-like element when submitting to one.
With the submitting being an illegal assault on one’s bodily autonomy in any case.
An interesting article but I think the evidence for this being about virus elimination is shaky. It might have been about that for some, at some stage, and it’s a good story, but I don’t buy it. It’s just too far fetched. Too many people knew it was insanity. We will probably never know all of everyone’s reasons and they may not know all of them themselves but to ascribe anything constructive or noble to what was done is far too generous.
A blast from the past:
“At Dartmouth, when the first suspect pertussis cases emerged and the P.C.R. test showed pertussis, doctors believed it. The results seemed completely consistent with the patients’ symptoms.”
https://www.bleadon.org.uk/media/other/24400/FaithinQuickTestLeadstoEpidemicThatWasnt-TheNewYorkTimes.pdf
What about finally getting some basics right?
Assuming a certain property P is randomly and statically distributed among the members of a population X, the averaged outcomes of testing a series of randomly selected subgroups of X for P will eventually converge to the true relative frequency of property P among members of population X. As Sars-CoV2 infections (or those of any other communicable pathogen) are neither randomly nor statically distributed among the members of the examined population, a series of tests for this property conducted on random subsets of the population will yield a meaningless series of random numbers (meaningless and random in this respect).
Simple, contrived example showing this: let’s assume all people in London are carrying Sars-CoV2 RNA and no people in Manchester do. Randomly selecting a group of people from both London and Manchester for PCR testing will thus not communicate anything about the rate of infection in the population of Manchester; it will instead reflect the random rate of Londoners in the selected group.
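A quick simulation of that contrived example (with rough population figures of my own choosing) shows the effect: the positivity rate of the sample converges to the share of Londoners drawn, and tells you nothing about Manchester.

```cpp
// Toy simulation of the contrived example above: every Londoner tests
// "positive", every Mancunian "negative". The sample positivity rate just
// recovers the share of Londoners in the sample.
#include <iostream>
#include <random>

int main() {
    const int london = 9000000, manchester = 550000;   // rough populations
    std::mt19937 rng(7);
    std::uniform_int_distribution<int> pick(0, london + manchester - 1);

    const int sample = 100000;
    int positives = 0;
    for (int i = 0; i < sample; ++i)
        if (pick(rng) < london) ++positives;           // drew a Londoner => positive

    // Converges to london / (london + manchester), about 0.94 here, and says
    // nothing whatsoever about infection rates in Manchester.
    std::cout << "sample positivity = " << double(positives) / sample << "\n";
}
```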
Here’s a gentleman with a positive PCR test. The strain of coronavirus it detected is one we usually know as the “common cold”. However, as far as the NHS and the government were concerned, he had Covid-19.
An interesting article, but with not enough weight given to the deliberately over-inflated cycle count that was used by governments everywhere. We know that a high cycle count will result in finding fragments of the thing you’re looking for, but what we don’t know is the origin of that thing you’ve found, or its relevance with respect to a pandemic. I could have some dead virus stuck up my nose somewhere that’s never been anywhere else other than my nose, i.e. it’s a meaningless finding. We have millions of these meaningless findings, pretend that they are meaningful, and, hey presto, we have a ‘pandemic’ and a way to control society. It’s the high cycle count that allowed governments control, and is one of many major indicators that it wasn’t all some unfortunate, unplanned, global mistake.
The claim in the article is that the PCR tests were good at detecting the SARS 1 or SARS 2 virus or traces of it.
Two questions that I would hope the author might answer:
Perhaps dumb questions from a neophyte, but if this is all about educating the public, then perhaps the author or someone who knows could indulge me.
I think the authors of the plandemic wanted something like a virus economy, for which a moving statistic (the prevalence of an alleged pathogen) was needed. That’s what the disgusting criminal Drosten gave us: a genetic sequence that exists in the human population, circulating constantly, that could be tracked by the authorities as though it were important. Eugyppius is right that the scam wouldn’t work if it were entirely fictional; the labs operating the PCR tests, the health services and government officials are not in on the scam. Rather, the foundational notions of the Covid-19 exercise are the lie: that the disease is dangerous enough to be worth tracking or taking unusual measures to mitigate. We see the same thing happening with ‘Climate Change’, where the same forces wish to create a climate economy in which changes in temperature can somehow be tracked to determine the level of threat and ultimately curtail our freedoms. They’re just successful capitalists whose fortunes have been decided on the graphs of stock exchanges; it’s all they know.
“Critics of PCR have three fundamental complaints…”
Actually mine isn’t any of those… it’s a fourth: that a test that detected various iffy ‘results’ then led to being told there was no treatment (so what’s the point?), and mainly led to people in Government making shed loads of money, both for themselves and their cronies…
The author is engaging in intellectual sophistry. The author fails to understand we’re dealing with a worldwide crime scene. The crime is a genocidal assault against humanity, predicated on egregious data frauds. Two areas of science are central to these data frauds: ‘covid’ science and ‘climate’ science. Data falsification degrades the quality of many attempts at rational conversation about so-called ‘covid’. This accords with Lord Denning’s 1956 judgement that fraud vitiates everything it touches. The author’s motives in wanting to rehabilitate the use of a diagnostic test which is central to a genocidal agenda are at best confused.
I agree… there is no ‘good’ here, no matter how you twist it.