27 March 2021  /  Updated 17 July 2021

Racaniello disagrees with NERVTAG on new variant

Page 4 / 4

benj
Posts: 78
Topic starter
(@wade)
Joined: 1 year ago

The 70% figure did not come from Ferguson at all.

PHE technical document:-
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/947048/Technical_Briefing_VOC_SH_NJL2_SH2.pdf

They just released actual data so now people can actually go through it and inspect.

From the document:

"The UK has a high throughput national testing system for community cases based in a small number of large laboratories. Three of these laboratories use a three target assay (N, ORF1ab, S) from Thermo Fisher (TaqPath). Currently more than 97% of pillar 2 PCR tests which test negative on the S-gene target and positive on other targets are due to the VOC."
which I read as

we test for N, ORF1ab and S
we get positive for N and ORF1ab
we get negative for S

and they explain that, as S has changed due to the Variant Of Concern, it wasn't coming up positive at the same time as N and ORF1ab were,

and so they use the number of S-negative, N- and ORF1ab-positive tests as the way to count the VOC.
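The classification described above can be sketched as a few lines of Python (a minimal sketch; the function name and the sample data are mine, not from the PHE document):

```python
# Hypothetical sketch of the S-gene dropout ("S-gene target failure") logic:
# a TaqPath result that is positive on the N and ORF1ab targets but negative
# on the S target is counted as a likely VOC case.

def is_s_dropout(n_pos: bool, orf1ab_pos: bool, s_pos: bool) -> bool:
    """True when N and ORF1ab targets amplify but the S target does not."""
    return n_pos and orf1ab_pos and not s_pos

# Made-up example results for illustration only
results = [
    {"N": True, "ORF1ab": True, "S": True},    # ordinary positive
    {"N": True, "ORF1ab": True, "S": False},   # S-dropout: counted as likely VOC
    {"N": False, "ORF1ab": False, "S": False}, # negative
]

voc_proxy_count = sum(is_s_dropout(r["N"], r["ORF1ab"], r["S"]) for r in results)
print(voc_proxy_count)  # 1
```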

benj
Posts: 78
 benj
Topic starter
(@wade)
Joined: 1 year ago

The 70% figure did not come from Ferguson at all.

PHE technical document:-
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/947048/Technical_Briefing_VOC_SH_NJL2_SH2.pdf

They just released actual data so now people can actually go through it and inspect.

Thanks for this. I cannot say that I follow it, but I would like to understand what is meant by this statement:
We correct these weekly growth factors by raising them to the power of 6.57 to ensure they can be interpreted as reproduction numbers (given the mean generation time of SARS-CoV-2).

I'm not completely sure, but I think that raising to the power of 6.57 is to do with the growth rate of infection by SARS-CoV-2 without the NPIs (Non-Pharmaceutical Interventions, i.e. washing hands, wearing masks, keeping a distance). Is that right, Splatt? Which seems odd, as the number should be around 2 with all the NPIs; see https://science.sciencemag.org/content/369/6507/1106.full (Serial interval of SARS-CoV-2 was shortened over time by nonpharmaceutical interventions).
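For what it's worth, the conversion the quoted sentence seems to describe can be sketched as follows. This is a minimal sketch under my own assumptions, not taken from the PHE document: growth is simple exponential, the 6.57 is the mean generation time in days, and the growth factors being corrected are weekly, so R ≈ g^(6.57/7).

```python
# Sketch of a growth-factor-to-R conversion (assumptions mine, see lead-in):
# with a weekly multiplicative growth factor g and a mean generation time of
# about 6.57 days, R is approximately g raised to the power 6.57/7.

def weekly_growth_to_R(g: float, generation_time_days: float = 6.57) -> float:
    """Convert a weekly multiplicative growth factor g to a reproduction number."""
    return g ** (generation_time_days / 7.0)

# Example: cases growing 50% per week gives R of roughly 1.46
print(round(weekly_growth_to_R(1.5), 2))
```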

The 70% seems to be based on comparing NHS STPs (Sustainability and Transformation Partnerships) that don't have the new variant with those that do, when calculating the R value, as the document says: "As an example, under the fixed effect model, an area with an Rt of 0.8 without the new variant would have an Rt of 1.32 [95%CI:1.19-1.50] if only the VOC was present."

So you calculate the percentage increase by ((1.32 - 0.8)/0.8) * 100 which is 65%.
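That arithmetic checks out directly:

```python
# Verifying the percentage-increase calculation above: the fixed-effect
# model's Rt of 1.32 with the VOC versus 0.8 without it.

rt_without_voc = 0.8
rt_with_voc = 1.32

percent_increase = (rt_with_voc - rt_without_voc) / rt_without_voc * 100
print(round(percent_increase))  # 65
```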

jmc
Posts: 597
 jmc
(@jmc)
Joined: 1 year ago

My first reaction on reading the paper, based on what happened with the initial CDC assay debacle back in February, is that I would not use S negative / others positive as a proxy unless you were 110% certain it was not a test assay failure, test process failure, or artifact that was being observed. A view which is just reinforced by this quote:
We therefore use the frequency of S-gene target negatives among PCR positives as a proxy for frequency of the VOC. This proxy has a limited time window, and is generally a poor proxy the further back in time considered due to other older virus variants which also test negative on spike.

Plus the document starts with the Kent new variant outbreak, and the first thing I noticed is that two of the three months mentioned have no statistically meaningful data. Just one month has data from which anything reliable can be inferred. Plus those confidence intervals used by the estimation models are very wide (as you would expect with any Bayesian model), far too wide for use as the basis for decisions that are causing many billions of pounds of losses per week and massive disruption to millions of lives.

Mathematically this paper is on the level of a promising undergraduate paper. That's all. If it was presented in a business situation as the basis for a financial decision I would send it back and ask for a lot more detail on at least three or four points before it could be used as a reliable basis for any important decision. For a start, the Rt logic seems muddied. The higher model Rt for a new variant in this situation is usually a matter of variant displacement rather than much higher instantaneous infection totals: the ratio of new variant to old variant changes, but there is not much change in total frequency in the population. Especially given that any random diffusion spread is never a smooth curve.

So an assay has recently given an unexpectedly high false negative rate, and a new variant has been observed which has a mutation in the binding area of the failed test assay. The causation chain inferred in this paper looks promising but is very far from proved, because assay failures due to manufacturing, handling, and related procedure problems are not uncommon. Until that element is shown to be completely robust and not a point of failure, the conclusions of this paper are unproven.

Splatt
Posts: 1609
(@splatt)
Joined: 1 year ago

I think, from that and the other documents, that they're using S-gene knockout as a fast, pseudo way of estimating new variant growth.
It won't be 100%, but it's a good analogue for all strains with that particular mutation, because you can't sequence everything all the time.

I'm not sure they're basing any real quantitative calculations off that lucky change.

FWIW, quite a few more sequences of B.1.1.7 on GISAID today from Hong Kong, Switzerland, Israel and elsewhere. These go back to at least November.

Denmark has updated too.

The key is that none are showing it taking over from other strains, which would hint that it does NOT have a competitive advantage (or at least not a large one). That said, newer data hasn't been uploaded yet. The UK's far more rapid sequencing, and sheer quantity of sequences, is showing here. Potentially we're so far ahead of the curve that we've accidentally created a situation where it looks like there's an issue and it started here.

They need to do more work on this: there's a potential increase and a potential mechanism, but no data showing it to be the case. Worth keeping an eye on, but not worth large-scale panic.
In a way the NERVTAG group is in an awkward situation: they know they need more data, they have a potential issue, but they are being pushed to actually make a call on it far earlier than they'd like. Their documents do a decent job of being decision neutral.
