Biostatistics Professor Jon Deeks has written a critical piece on the shortfalls of the current mass testing regime for children in schools. Of particular concern is the occurrence of false positives. UnHerd has the story.
The fitness for purpose [of the testing regime] is a combination of the sensitivity of the test (what percentage of infected cases it correctly identifies) and the specificity (what percentage of uninfected people it correctly says are negative). But it also depends on how, where and when a test is used.
Different studies of the UK Innova lateral flow test (the test being used in schools) have reported variously that its sensitivity is 78%, 58%, 40% and even 3%. The higher 78% and 58% figures come from using the test among people with symptoms; the lower 40% and 3% figures come from using it for mass testing among people without symptoms, as is being done in schools. (And none of these studies has assessed how well the test detects infection in children.) So although the test can pick up people who have the infection, it will miss quite a few, and there is a risk that disinhibition after a negative test could actually exacerbate case numbers if children incorrectly think they are safe and the rules no longer apply.
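To put those sensitivity figures in concrete terms, here is a minimal Python sketch of how many infected pupils a single round of testing would miss at each reported figure. The cohort size of 10,000 is a hypothetical chosen for round numbers; the sensitivities are simply those quoted above.

```python
# Hypothetical illustration: infected pupils missed at each reported sensitivity.
# The 10,000 cohort is an assumption for round numbers, not a figure from the article.
infected_pupils = 10_000

for sensitivity in (0.78, 0.58, 0.40, 0.03):
    detected = round(infected_pupils * sensitivity)
    missed = infected_pupils - detected
    print(f"Sensitivity {sensitivity:.0%}: {detected:,} detected, "
          f"{missed:,} missed (false negatives)")
```

At the 40% sensitivity reported for mass testing of people without symptoms, six in every ten infected pupils would come away with a false negative.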
But more concerning are the false positives, which relate to the specificity. The original Government studies found only around 3 in 1,000 people were getting false positives, and this dropped to 1 in 1,000 in the Liverpool study. Doesn’t sound like a lot, right?
But consider the problem from the perspective of a pupil who has just got a positive test result. The reasonable question for them (and their parents) to ask is “what are the chances that this is a false positive?” Given that a positive test result means the pupil, their family and their school bubble will have to isolate for 10 days, a high false positive probability is a real problem.
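The quantity the pupil is asking about is the complement of the test’s positive predictive value, and Bayes’ theorem shows it depends on the prevalence of infection as well as on the specificity. A minimal sketch of the calculation, where the 50% sensitivity is an assumption (roughly what the scenarios below imply) and the 99.9% specificity corresponds to the 1-in-1,000 false positive rate quoted above:

```python
def p_false_positive(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(not infected | positive test), i.e. FP / (TP + FP), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# Assumed values: 1% prevalence, 50% sensitivity, 99.9% specificity.
print(f"{p_false_positive(0.01, 0.5, 0.999):.0%}")  # ~17%
```

At 1% prevalence a positive result is wrong about 17% of the time, consistent with the “over 80%” chance of genuine infection in Scenario A below; as prevalence falls, the same test’s positive results become steadily less reliable.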
Deeks, who is a Professor of Biostatistics in the Institute of Applied Health Research at the University of Birmingham, goes on to outline three scenarios showing how the reliability of a positive result varies with the prevalence of the disease.
Where 1 in 100 pupils have the infection (Scenario A), by testing a million we would find 5,000 cases but get 990 false positives. This ratio of true to false positives is quite favourable – 5 out of every 6 with positive results actually would have COVID-19 infection – so the probability that the pupil genuinely has the infection is over 80%.
However, the picture becomes less favourable as the infection becomes rarer: if only 1 in 1,000 pupils were infected (Scenario B) we would detect 500 cases but get 999 false positives. The ratio of true to false positives is now unfavourable – one true result for every two false results.
If only 1 in 10,000 had the infection there would be one true result for every 20 false results (Scenario C). Why would anybody consent to a test where the chances that a positive result is wrong are so much higher than the chances that it is right? This isn’t the fault of the test – it’s the application in a low prevalence setting. Using any test – even one with an incredibly high specificity – will lead to more false than true positive results when the disease becomes rare.
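The arithmetic behind the three scenarios can be checked directly in count form. A short sketch, with the 50% sensitivity and the 1-in-1,000 false positive rate inferred from the figures quoted above rather than stated there:

```python
# Reproducing Scenarios A, B and C in count form. Sensitivity and false
# positive rate are inferred from the quoted figures, not stated explicitly.
TESTED = 1_000_000
SENSITIVITY = 0.5        # implied by 5,000 cases found among 10,000 infected
FALSE_POS_RATE = 0.001   # i.e. a specificity of 99.9%

for label, prevalence in (("A", 1 / 100), ("B", 1 / 1_000), ("C", 1 / 10_000)):
    infected = TESTED * prevalence
    true_pos = infected * SENSITIVITY
    false_pos = (TESTED - infected) * FALSE_POS_RATE
    print(f"Scenario {label}: {true_pos:,.0f} true positives, "
          f"{false_pos:,.0f} false positives "
          f"({false_pos / true_pos:.1f} false for every true positive)")
```

As the output shows, nothing about the test changes between the scenarios: the same kit goes from five true positives per false positive to twenty false positives per true positive purely because prevalence falls.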
Worth reading in full.
Stop Press: The Government’s refusal to let pupils use follow-up tests to confirm positive Covid results is “ruining” their return to school, parents say.
Stop Press 2: Professor Deeks made the same point in an interview for Radio 5 Live.