So the total number of positives from testing declines only about 15%, while the number of true positives declines 90% and the percentage of false positives rises to 98%.
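A quick sketch with one set of numbers that reproduces those percentages (the inputs are my own illustrative assumptions, not figures from the post): testing volume and false positive rate held constant, prevalence dropping tenfold.

```python
# Illustrative assumption: false positives stay fixed (same number of tests,
# same false positive rate) while true positives fall 90%.
false_pos = 5.0        # false positives, in arbitrary units, before and after
true_pos_before = 1.0
true_pos_after = 0.1   # true positives decline 90%

total_before = false_pos + true_pos_before   # 6.0
total_after = false_pos + true_pos_after     # 5.1

total_decline = 1 - total_after / total_before          # 0.15 -> 15%
fp_share_after = false_pos / total_after                # ~0.98 -> 98%

print(f"total positives decline: {total_decline:.0%}")
print(f"false positive share after: {fp_share_after:.0%}")
```

With these inputs the totals barely move (6.0 to 5.1, a 15% decline) even though the true positives have collapsed, which is the point being made.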
Yes, you make a very good point. The 'truth', or signal, can easily get lost in the noise.
The next 'noise' will be from Boris - an increase in lockdown measures. In order to obfuscate the emerging truths here, he needs to implement something a.s.a.p. in the next two weeks because there will be no evidence of any deaths to prove these cases false. He will be able to claim the measures did it.
We really need to head him off on this one.
More information on false positives here:
https://www.medrxiv.org/content/10.1101/2020.04.26.20080911v1.full.pdf
On page 10 (of 11) is a table of ranges of false positive values for various diagnostic laboratories.
Apart from one Ebola test at 0.3%, the absolute minimum seems to be about 0.6%, with averages around the 3% mark.
Two problems with that paper. One, it's a meta-study; enough said. Two, only three of the studies are for coronaviruses, which are very different beasts from all the other infectious agents in the other studies. Most of the population is exposed to human coronavirus infections on a daily basis, and at least 1% of the population (> 10% for children) have an active HCoV infection at any given time. Two of the studies are for MERS, which had a very low case count. The one for SARS-CoV-2 has a sample N of 174. So barely an interesting anecdote, statistically speaking.
When reading bio-science papers like this one, the first thing I look at is the sample size and just how random the sample potentially is. With truly random samples (samples that reflect the general population), an N < 350 is noise, N > 500 is getting interesting, and N > 1000 is normally real science. There are techniques to extract useful information from small and non-random samples (found in most bio-science papers), but all they do is give you some confidence interval, which can be almost as large as the range of the data. That can give you results which are interesting and informative, but they are very far from definitive, and not data on which any substantive policy or treatment decisions should be made, except in very unusual situations, like with rare diseases.
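To illustrate how wide those confidence intervals get at small N: a rough sketch using the normal approximation for a proportion, with a hypothetical 3% false positive rate (the rate and the approximation are my assumptions for illustration, not from the paper).

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """95% confidence interval half-width for a proportion,
    using the normal (Wald) approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical measured false positive rate of 3% at different sample sizes
for n in (174, 350, 1000):
    hw = ci_halfwidth(0.03, n)
    print(f"N={n}: 3% +/- {hw * 100:.1f} percentage points")
```

At N = 174 the interval is roughly 0.5% to 5.5%, which spans nearly the entire range of false positive rates reported across the labs, consistent with the point that small-N estimates are far from definitive.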
The problem of false positives is one of the basic mathematics of large data sets (N > 10000), not of the actual physical testing procedures.
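That base-rate arithmetic can be sketched in a few lines. The numbers here are assumptions chosen for illustration (low prevalence, a false positive rate in the range discussed above, and perfect sensitivity), not sourced figures.

```python
# Assumed inputs for illustration: 10,000 tests, true prevalence 0.1%,
# false positive rate 0.8%, sensitivity 100%.
n_tests = 10_000
prevalence = 0.001
fp_rate = 0.008

true_positives = n_tests * prevalence                    # 10
false_positives = n_tests * (1 - prevalence) * fp_rate   # ~80

fp_share = false_positives / (true_positives + false_positives)
print(f"{false_positives:.0f} false vs {true_positives:.0f} true positives "
      f"({fp_share:.0%} of positives are false)")
```

Even with a false positive rate under 1%, false positives outnumber true ones roughly eight to one once prevalence is low, regardless of how well the physical test is performed.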
If this has already been covered, please point me to the post. My query concerns the two graphs embedded in the false positive article. The upper one I understand to be a calculation of community cases during the early testing, and the lower the actual cases reported then and now.
The axes in these differ by an order of magnitude, which is fine, as the original number of cases in the community was expected to be that much higher. However, if I overlay the bottom graph on top of the upper one and compress the scales to match, the modified graph shows higher cases on the far right than are discussed in the paper.
Apologies for the poor quality, but I've changed the colour and altered the opacity of the compressed lower graph so you can see it overlaid. The huge scale difference is a little misleading in my opinion, as it immediately shows a huge flattening of the RHS of the curve.
The actual RHS of the curve, accounting for false positives, will be much lower than shown, but it would have to be rescaled to the lower axis to show this, as others seem to have done since on recent posts.
If this has already been covered, please point me to the post. My query concerns the two graphs embedded in the false positive article. The upper one I understand to be a calculation of community cases during the early testing, and the lower the actual cases reported then and now.
Are you referring to the charts that I embedded here or something else? I don't follow what you have done.