by Dr Clare Craig FRCPath
The story of how false positive test results can create the illusion of actual infections in the population will be a familiar one by now for many readers of this site. Assuming the false positive percentage remains reasonably constant, increasing numbers of tests inevitably lead to a higher absolute number of false positive test results over time. Uncritical policy decisions made without discounting for this effect have no evidence-based validity.
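To make the arithmetic concrete, here is a minimal sketch using a purely hypothetical false positive rate – the rate itself is an assumption chosen only to show the scaling, not a measured figure:

```python
# Illustrative only: a hypothetical constant false positive rate applied to
# different daily test volumes. The rate is assumed, not measured.
false_positive_rate = 0.008  # 0.8% - hypothetical

for daily_tests in (50_000, 150_000, 300_000):
    expected_false_positives = daily_tests * false_positive_rate
    print(f"{daily_tests:>7} tests/day -> ~{expected_false_positives:.0f} false positives/day")
```

The same constant rate produces roughly 400 false positives a day at 50,000 tests but around 2,400 a day at 300,000 tests.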
A genuine problem as the number of tests increases, however, is that the underlying number of actual cases – and their location – may ironically be obscured by a steadily growing background of randomly distributed false positive results caused by the higher volume of testing.
By contrast, a rapid rise in the percentage of tests returning positive results should normally indicate that the real number of Covid cases is increasing, and false positive results should become less and less important, because the false positive rate stays roughly constant while the number of true positives grows.
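Again with invented numbers, the sketch below shows why: holding the false positive rate constant, false positives shrink as a share of all positives as the true rate rises.

```python
# Hypothetical numbers only: a constant false positive rate alongside a
# rising true positive rate.
daily_tests = 200_000
false_positive_rate = 0.008  # assumed constant

for true_positive_rate in (0.002, 0.01, 0.05):
    percent_positive = false_positive_rate + true_positive_rate
    false_share = false_positive_rate / percent_positive
    print(f"true rate {true_positive_rate:.1%}: "
          f"percent positive {percent_positive:.1%}, "
          f"false positives are {false_share:.0%} of all positives")
```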
Unfortunately, the acceleration we have seen in recent case numbers is increasingly out of sync with crucial data from other sources. The ONS random population sampling, the Zoe App and NHS triage data all show a slowing, and even a plateau, in the number of actual cases over the last fortnight.
Either the data in all these alternative sources is wrong, or there is something wrong with the latest official testing data.
To rely on the official testing data as being more accurate than the alternative data sources listed above, it is critical that the increasing percentage of positive test results is a genuine and neutral observation and not skewed for any reason.
There are two ways to express the percentage of positive tests in a coherent and consistent manner. Either a figure could be published giving the number of positive tests out of the total number of tests done – but these figures have not been published since August 20th – or the percentage could be calculated from the number of newly diagnosed patients out of the total number of people tested. The difference between the two arises because many people are tested repeatedly.
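As a sketch, the two candidate calculations look like this (these are the two definitions described above, not the Government's published methodology):

```python
def percent_positive_by_test(positive_tests: int, total_tests: int) -> float:
    """Positivity rate where every test counts once."""
    return 100 * positive_tests / total_tests

def percent_positive_by_person(newly_diagnosed_people: int, people_tested: int) -> float:
    """Positivity rate where every person counts once, however often they are tested."""
    return 100 * newly_diagnosed_people / people_tested
```

Either measure is internally consistent; problems arise when the numerator of one is paired with the denominator of the other.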
Neither measure is being published cleanly. Instead, the Government press briefing of September 30th, published alongside the data, indicates that the official published figures need to be treated with some caution:
The number of people tested in a given week will exclude some people who have been tested in a previous week, so may not be an accurate denominator to use. For example, someone testing negative for the first time in week 1 will be counted in the ‘people tested’ figure for that week. If that same person tests negative again in week 4, they will not be counted in the ‘people tested’ figure for week 4.
What this means is that, for people tested more than once, a positive test result will count towards the numerator, but a negative test result will not count towards the denominator. Someone who tested negative in May could be contact traced again now; if they test negative, their result will not be included in the official figures, but if they test positive, it will be. The reported percentage of positive tests is therefore falsely elevated.
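A toy example, with entirely invented numbers, shows how this exclusion inflates the reported rate:

```python
# Invented numbers: 100,000 people tested for the first time this week,
# of whom 1,500 are positive, plus 50,000 repeat testers (already counted
# in an earlier week), of whom 500 test positive this week.
first_time_people = 100_000
first_time_positives = 1_500
repeat_people = 50_000
repeat_positives = 500

# Reported rate: repeat testers' positives enter the numerator, but the
# repeat testers themselves are excluded from the 'people tested' denominator.
reported = 100 * (first_time_positives + repeat_positives) / first_time_people

# Consistent rate: everyone tested this week appears in the denominator.
consistent = 100 * (first_time_positives + repeat_positives) / (first_time_people + repeat_people)

print(f"reported {reported:.2f}% vs consistent {consistent:.2f}%")
# reported 2.00% vs consistent 1.33%
```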
The people being tested repeatedly include more than 400,000 nursing home residents, nursing home staff, hospital staff and many patients. Since we do not know how many tests are excluded from the denominator, we cannot know to what extent current policy has skewed the results.
Collecting and publishing accurate data is significantly more difficult than it might seem. In the spring, we were told the number of positive tests rather than the number of positive individuals. At the beginning of July a decision was made to change the reporting rules so that data from then onwards was based on the number of positive individuals.
However, a decision must be taken as to the appropriate time frame for adjusting the historical statistics for individuals who were tested more than once. If patients who have been tested more than once have all but one result disregarded for statistical purposes, then the only way to understand the percentage of positive results accurately is to track the positive rate by test, not by individual.
When a current alleged infection is identified through random population screening, including tests increasingly done outside the Government’s official testing programme, these cases are added to the total number of positive test results, which forms the numerator for percentage purposes. The Government’s COVID-19 testing data methodology note says:
It is a legal requirement that all positive cases for presence of the virus are reported to Public Health England, irrespective of pillar. As such, when pillar 4 research studies (for antigen testing) identify positive cases, Public Health England are notified and this data flows into the Surveillance system. This means that currently all positive cases identified by pillar 4 surveillance studies (for antigen testing) are captured under pillar 1 or 2.
It is not clear whether these positive cases are included in the relevant percentage calculations, but if they are, then those who tested negative in the same studies also need to be included in the denominator. The published data does not show whether this is happening.
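If such positives are being included without their corresponding negatives, the effect would look something like this (all numbers invented for illustration):

```python
# Invented numbers: a surveillance study tests 20,000 people and finds
# 100 positives. If those positives flow into the headline count but the
# 19,900 negatives never enter the denominator, the rate is inflated.
routine_people_tested = 200_000
routine_positives = 4_000

study_people_tested = 20_000
study_positives = 100

without_study_negatives = 100 * (routine_positives + study_positives) / routine_people_tested
with_study_negatives = 100 * (routine_positives + study_positives) / (routine_people_tested + study_people_tested)

print(f"{without_study_negatives:.2f}% vs {with_study_negatives:.2f}%")
# 2.05% vs 1.86%
```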
It is important that there is full, unedited and clear publication not only of all the relevant figures, including their sources, but also of all the methodologies used to calculate the alleged positive test result percentages, so that experts can double-check the figures and ensure that they are accurate. This is particularly important when the Government’s published figures are increasingly inconsistent with figures from other data sources.