I now understand the issue of false positives. So if 200,000 a day are tested - then roughly 2,000 a day could be false.
How do false negatives work? If 200,000 are tested again - how many test false - but are actually positive?
Do they balance each other out - or is one effect bigger than the other?
Cheers
You just need to test all the positives a second time. That reduces the false positives to a more reasonable number.
Given the nature of the tests (around 95% accuracy), for every 1,000 people you test with an actual 1% infection rate you will get about 10 true positives and 50 false positives. If you test all 60 positives a second time you get about 9 true positives and 3 false positives. So more than 20% of your positives are still actually false, but that is better than over 80%. A third test would get you down to less than 10% false positives, but you would also lose at least one true positive. So swings and roundabouts.
Which is why testing on first physical symptoms is the only test strategy that makes the slightest bit of scientific sense. Although SARS-CoV-2 symptoms are very non-specific, basically a cold, at least you would be starting with a sample base that has a reasonable probability of having around a 10% infection rate. That would get the first-pass false positive rate down to about 30%, and the second round down to a few percent.
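The repeat-testing arithmetic above can be sketched in a few lines of Python. It is a minimal sketch, assuming sensitivity and specificity both equal the quoted accuracy and that repeat tests are independent:

```python
def surviving_positives(n, prevalence, accuracy, rounds):
    """Expected true and false positives left after `rounds` passes of a
    test whose sensitivity and specificity both equal `accuracy`
    (a simplifying assumption), retesting only the positives each time."""
    infected = n * prevalence
    healthy = n - infected
    true_pos = infected * accuracy ** rounds        # kept with probability `accuracy` each round
    false_pos = healthy * (1 - accuracy) ** rounds  # kept with probability 1 - accuracy each round
    return true_pos, false_pos

# 1,000 people, 1% infection rate, 95% accurate test
tp, fp = surviving_positives(1000, 0.01, 0.95, 1)
print(round(tp, 1), round(fp, 1))   # ~9.5 true vs ~49.5 false positives

tp, fp = surviving_positives(1000, 0.01, 0.95, 2)
print(round(tp, 1), round(fp, 1))   # ~9.0 true vs ~2.5 false after a retest
```

Raising the prevalence to 10% (i.e. testing only the symptomatic) drops the first-pass false-positive share from over 80% to about 30%, as described above.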
I was talking this mathematics over with one of my kids, who is just starting a PhD in pure mathematics, and they came up with a great example. If you take a diagnostic test (98% accuracy) for a very rare disease and the test returns a positive result, the probability of you actually having the rare disease is at most only one or two percent. It is one of the more arcane areas of statistical probability theory that runs completely counter to "common sense". Which is why good doctors run lots of tests before completing their differential diagnosis. And even then, unless there is overwhelming proof from a very disease-specific symptom (which does not happen very often), good doctors always keep an open mind, because medicine is always about probabilities - both in diagnosis and in treatment.
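The rare-disease example works out via Bayes' theorem. A quick sketch - the 1-in-5,000 prevalence here is just an illustrative assumption chosen to land near the one-percent figure:

```python
def ppv(prevalence, sensitivity, specificity):
    """P(actually diseased | positive test), by Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A 98%-accurate test for a disease affecting 1 person in 5,000:
# even after a positive result, the chance of disease is only about 1%.
print(round(ppv(1 / 5000, 0.98, 0.98), 3))  # 0.01
```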
Robert Hales
Suppose you have a population of 100000, of whom 100 truly have an infection.
You have a test with advertised sensitivity of 99% and specificity of 99.5%. These are classical performance criteria for any test, and they look pretty good. Yes?
Sensitivity is the ability to find positives among true positives... so the test finds 99 of the 100 infected and misses only 1, which we call a 'false negative'. Specificity is the ability to find true negatives. Here there are 99,900 true negatives (100,000 minus 100) and the test finds 99.5% of them = 99,401. It mistakes the other 499 true negatives as positive. These are our 'false positives'.
There are then two more key parameters, which depend on both the sensitivity and specificity BUT ALSO ON THE DISEASE PREVALENCE IN THE POPULATION - the positive predictive value (PPV) and the negative predictive value (NPV).
The PPV is the proportion of positives found that are true positives... In this example we had 99 true positives found and 499 false positives, so the PPV is 99/(499+99) = 16.6%
The NPV is how sure you can be that a negative is a real negative. Here it is 99401/(99401+1) = almost 100%
So a negative result is trustable but a positive isn't. But that's because the test is being used on a population where the disease prevalence is very low (1 in 1,000). It'd be a perfectly good test in a setting where you had lots of real positives... (try the sums with 10,000 positive and 90,000 negative)
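The whole worked example can be checked with a short sketch (it gives 16.5% rather than 16.6% because it keeps the fractional counts instead of rounding to whole people):

```python
def predictive_values(population, infected, sensitivity, specificity):
    """PPV and NPV for a test applied to a whole population."""
    healthy = population - infected
    true_pos = infected * sensitivity    # infected correctly flagged
    false_neg = infected - true_pos      # infected missed
    true_neg = healthy * specificity     # healthy correctly cleared
    false_pos = healthy - true_neg       # healthy wrongly flagged
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(100_000, 100, sensitivity=0.99, specificity=0.995)
print(f"PPV = {ppv:.1%}, NPV = {npv:.3%}")  # PPV = 16.5%, NPV = 99.999%
```

Re-running it with 10,000 infected - the high-prevalence setting suggested above - pushes the PPV up to about 96%.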
This is a well-known curse when tests are applied to populations with a low disease prevalence. Some high street chemists do (or did) Chlamydia tests using a PCR/NAATs method that also looks for gonorrhoea. But they don't release the gonorrhoea results, because the infection is rare in the folks who go to the chemist for a Chlamydia test, meaning that false positives outweigh true positives (low PPV). Telling people they've got gonorrhoea when they haven't upsets them and their partners! The same tests are perfectly fit for purpose in an STD clinic, where the population has a much higher gonorrhoea prevalence.
There is a danger of going around in circles here.
The critical document and starting place is new guidance issued to reporting PCR laboratories at the time that cases started to rise.
Eg
It is fairly simple: laboratories were being instructed to now report borderline, limit-of-detection (LOD), results as positives.
In other words, results - or "low results" - that were not certain or too close to call a few weeks ago must now be called positives.
Without double-checking or repeat analysis on Pillar 2.
I think this is important because, to me, it looks like something other than testing rates might be changing in this Covid case data.
They appear to have changed the level at which a negative becomes a positive.
[I think the guidance was created on the 7th and updated on the 11th.]
Dave. If you are right here, it would confirm the suspicions I had at the beginning of this thread, where there seems to have been a jump in cases/tests from 0.5% to 1.4%. Another quote from the link above is
All laboratories should determine the threshold for a positive result at the limit of detection based on the in-use assay.
As I said, this new 'plateau' at about 1.4% seems to be continuing:
Now look at the daily results, without 7-day averaging:
The change appears to have been on 6th September.
I really, really don't want to believe that there has been deliberate manipulation of the data here. Please, please somebody come up with an explanation!








