27 March 2021  /  Updated 17 July 2021
A question of false positives

Page 9 / 15

MikeAustin
Posts: 1193
Topic starter
(@mikeaustin)
Joined: 1 year ago

As it is popular to do CVs: I am an investigative analytical chemist with 35 years'
experience; a bit different from ploddy routine analysis.

Phew, my head is spinning!
Bottom line, Dave, have you any idea of relative timescales to perform these tests? That is, by abbreviating the time allocated to testing - say to 1/3rd - what would be the implications on level of false positives?

Reply
jmc
Posts: 597
(@jmc)
Joined: 1 year ago

Well, the article you quote most definitely does not say that *all* positives are tested again, only that some "problematic" ones are. If you have a math degree, as stated, then you should have absolutely no problem understanding the actual mathematics of false positives. It's pretty basic undergrad stuff, explained above in one posting and in more detail in the paper I posted.

In order to get the false positives to an acceptable level with *actual* field-sample test accuracy levels, not calibration test levels, second-pass testing of all positives is the only scientifically acceptable approach. Do the math. All other approaches are purely media-driven hysteria politics. Nothing more.
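The arithmetic behind this is Bayes' rule. A minimal sketch in Python, with illustrative numbers that are assumptions, not figures from this thread (0.5% prevalence, 90% sensitivity, 99% specificity), and assuming the two passes of a retest err independently:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(actually infected | test positive), by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Single test at low prevalence: most positives are false.
single = ppv(prevalence=0.005, sensitivity=0.90, specificity=0.99)

# Second-pass testing of all positives (assuming independent errors):
# combined sensitivity = 0.90**2, combined false-positive rate = 0.01**2.
double = ppv(prevalence=0.005, sensitivity=0.90**2, specificity=1 - 0.01**2)

print(f"single test PPV: {single:.1%}")   # roughly 31%
print(f"second-pass PPV: {double:.1%}")   # roughly 98%
```

The independence assumption is optimistic: if a contaminated sample or a systematic lab error caused the first positive, a retest on the same sample can repeat it, so real-world gains are smaller.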

As for the quality of the government statisticians etc.: you are familiar with the term GIGO? When you use data inputs that are actually epidemic-tracking statistics, not actual medical mortality statistics, then it's very much GIGO. The two data sets are very different in origin, constitution and purpose.

Then we have train wrecks like the Imperial College models on which the first lockdown was based. These models turned out to be not only a rat's nest of badly written code, but key variables for the models, like R0 and the asymptomatic infection rate, turned out to be completely made-up numbers bearing no plausible or justifiable relationship to real-world numbers. Or even to the early provisional numbers from reliable sources like HKU Med and the CDCs of South Korea and Taiwan.

I think the correct term for these models is made-up numbers. If it's any consolation, the models from the Bill Gates financed "research" group associated with the University of Washington were an even more pathetic mathematical joke: simplistic interpolations based on partial, non-normalized data sets.

And we are supposed to trust these "experts", according to your argument from authority.

Here is something you might want to reflect on. There were four human coronaviruses (HCoVs) in general circulation last year: 229E, OC43, NL63, HKU1. Now there are five, because it turns out SARS-CoV-2 has an IFR and CFR about the same as the other four HCoVs, and very different from the IFRs and CFRs of SARS-CoV-1 and MERS, which both had IFR = CFR. Turns out the asymptomatic infection rate is very, very important in modelling epidemics.

So you have a math degree. How about doing the one-year mortality risk numbers for, say, 229E / OC43 HCoV infections, given that 1% of the population has an active HCoV infection at any given time, that when tested around 10% of all hospital patients admitted with a serious respiratory infection tested positive for an HCoV infection, and that the raw CFR for the HCoV-positive patients was 15% for OC43 and up to 25% for 229E. The numbers for SARS-CoV-2 are an infection rate of about 0.2% (maybe), an IFR of 0.2% and a CFR of around 3%. The range of IFRs / CFRs for various countries is almost perfectly correlated with the rate of secondary hospital-borne infections and the surge capacity of the national medical systems.

For an adult with a low CURB-65 score, don't be too surprised if the one-year mortality risk is about the same order as dying in a road accident.

I think it's you who might be living in the bubble. Some of us have not only done our relevant literature research but refuse to kow-tow to anyone who claims to argue from authority, an authority that has so far proved itself to be singularly incompetent. What becomes quickly apparent when one reads the relevant scientific literature is that none of the decisions being made seem to be supported by the actual published scientific literature. Even the most basic stuff: with no effective therapeutic treatments and no effective vaccine, physical impediment strategies like lockdowns will not have the slightest impact on the final outcome of the spread of an infectious disease. They just slightly delay the final result: a population equilibrium infection rate of 35% to 40%. That's all.
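For what it's worth, the classic SIR herd-immunity threshold, 1 - 1/R0, produces figures in that 35-40% range for R0 around 1.5-1.7. A minimal sketch; the R0 values below are illustrative assumptions, not numbers taken from the post:

```python
def herd_immunity_threshold(r0):
    """Classic SIR result: epidemic growth stops once a fraction
    1 - 1/R0 of the population is immune."""
    return 1 - 1 / r0

# Illustrative R0 values (assumptions):
for r0 in (1.5, 1.6, 1.7):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.1%}")
```

Note this is the threshold at which spread turns over, not the final attack rate, which in an unmitigated SIR epidemic overshoots the threshold; the two are sometimes conflated in these arguments.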

That's the actual science. And math. And has been for over 150 years.

Maybe it's you who should do the math.

Reply
dave b
Posts: 24
(@dave-b)
Joined: 1 year ago

I think the data we need to focus on is cases/tests, which is often obscure.

And we need to think about false positives.

Could increased testing and stressed, overworked analysts result in mistakes leading to more false positives and thus more cases/tests?

Yes, but it could equally go the other way; and I think it normally does, in fact.

Sending excess work to ‘other laboratories’ that generate the kind of results you might like to see is another matter.

I think there is probably fraud going on here, but people need to understand how it can happen.

The analyst and the analytical machines he operates generate what is often called raw data, which is often uploaded in real time to databases in some kind of Excel-spreadsheet-type crap.

We don’t do that, but I know labs we use do; in fact it is standard.

And then it is a matter for the database manager, or the manager of the database manager, to adjust stuff so that more or fewer samples are positive.

So the analytical technician might not know whether his analysis has been logged as a positive or negative.

He or she is not on a need-to-know basis for that.

I suspect to some degree or other the recent UK case data has been measuring the empirical false positive rate for the labs that the UK uses.

Therefore, if that premise is correct, when you change the false positive rate upward you will start to measure that in the case data, and you should get a ‘step’ change in the cases/tests data that levels off after the change in protocol feeds through.
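That step can be sketched with simple arithmetic: reported positivity is true positives plus false positives, so raising the false positive rate at constant prevalence produces exactly this kind of jump. All numbers below are illustrative assumptions, not measured values:

```python
def observed_positivity(prevalence, sensitivity, fpr):
    """Fraction of tests reported positive: true positives plus false positives."""
    return prevalence * sensitivity + (1 - prevalence) * fpr

# Illustrative assumption: true prevalence and sensitivity held fixed,
# only the false positive rate changes with the new protocol.
before = observed_positivity(prevalence=0.002, sensitivity=0.90, fpr=0.004)
after = observed_positivity(prevalence=0.002, sensitivity=0.90, fpr=0.008)

print(f"positivity before: {before:.2%}")  # 0.58%
print(f"positivity after:  {after:.2%}")   # 0.98%
```

With true prevalence this low, the false positive term dominates, which is why a protocol change alone can move the reported cases/tests curve.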

It is worth bearing in mind that when it comes to changes in protocols for analytical reporting etc., there are always draft documents circulated around some time before they are published, and labs get to where they have to be before publication.

I think, however, there might be something else natural going on as well.

From my microbiology-epidemiology subsid I was told that pathogenic bugs tend to evolve into less pathogenic and more ‘infectious’ forms, as less pathogenic bugs are more successful.

Natural vaccination in effect.

Reply
dave b
Posts: 24
(@dave-b)
Joined: 1 year ago

I don’t know too much about doing this PCR analysis.

I have used it, by sending stuff to outside labs, since 2008 to detect one kind of stuff in another (a bit like the horse meat thing), and we have used it to identify, or not identify, problematic microbes [non-medical].

So you learn how to talk the talk if nothing else.

A couple of days ago I started reading the method documentation that is online from the manufacturers of the kits.

Interestingly, it has repeatability and reproducibility data [false positive rates], but it seems to be based on puff-ball samples of 30/30.
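A 30/30 concordance result pins the false positive rate down far less than it sounds. The exact binomial upper confidence bound for zero failures in n trials (the approximate version is the well-known "rule of three", 3/n) shows why:

```python
def upper_bound_fpr(n, confidence=0.95):
    """Exact binomial upper confidence bound on the false positive rate
    when 0 false positives are observed in n trials."""
    return 1 - (1 - confidence) ** (1 / n)

# A perfect 30/30 result still leaves a wide range of plausible rates:
print(f"95% upper bound with n=30: {upper_bound_fpr(30):.1%}")  # about 9.5%
```

So a validation run of 30 samples with no failures is consistent with a true false positive rate of anything up to roughly 10%, which is why such small panels say little about field performance.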

And the manufacturers of these kits are lying bastards.

I have heard that the test can take 30 minutes or less once it gets to the front of the queue, but getting there can take 5 days.

This biotech stuff really is not my game; I use it a little bit, sort of, with enzymic test kits, which is similar.

Reply
MikeAustin
Posts: 1193
Topic starter
(@mikeaustin)
Joined: 1 year ago

I think the data we need to focus on is cases/tests, which is often obscure.
And we need to think about false positives.
Could increased testing and stressed, overworked analysts result in mistakes leading to more false positives and thus more cases/tests?

Bingo.

The analyst and the analytical machines he operates generate what is often called raw data, which is often uploaded in real time to databases in some kind of Excel-spreadsheet-type crap.
We don’t do that, but I know labs we use do; in fact it is standard.
And then it is a matter for the database manager, or the manager of the database manager, to adjust stuff so that more or fewer samples are positive.
So the analytical technician might not know whether his analysis has been logged as a positive or negative.

Oh dear! I was rather hoping it might be a simple case of expediency that corrupts the results. This would be either sinister or dexterous, depending on viewpoint.
I suspect to some degree or other the recent UK case data has been measuring the empirical false positive rate for the labs that the UK uses.
Therefore, if that premise is correct, when you change the false positive rate upward you will start to measure that in the case data, and you should get a ‘step’ change in the cases/tests data that levels off after the change in protocol feeds through.

My suspicion of the possibility, your knowledge of it.
I think, however, there might be something else natural going on as well.
From my microbiology-epidemiology subsid I was told that pathogenic bugs tend to evolve into less pathogenic and more ‘infectious’ forms, as less pathogenic bugs are more successful.
Natural vaccination in effect.

Which would be nice...

Reply