We are all familiar by now with the various efforts that are underway to govern what we are allowed to say, and share, online. But this is a much bigger problem than is commonly perceived – it goes far beyond the ‘mere’ suppression of misinformation or disinformation, as misguided and dangerous as that effort is. What it represents is in fact a problematisation of information as such. Increasingly, those who govern us simply take the view that it would be better if we had the minimal necessary supply of curated information, carefully vetted by experts, in order that we should arrive at the decisions they deem to be appropriate.
The result is not just suppression of speech through a ‘censorship industrial complex’ but a suppression of knowledge – a deliberate narrowing of our horizons – in the interests of ensuring that a particular moral ‘Truth’ is revealed. This makes it more apt to analyse our era through the lens of political theology than any other conceptual framework. As I have argued before, we are governed, more and more, by a form of politics that has more in common with the medieval pastorate than 19th or 20th century liberal democracy – a sort of atheist theocracy rooted in a claim to have sole possession of the knowledge not of what is true or false so much as what is right and wrong.
The COVID-19 period (remember that?) should be seen as a crucial turning point in this regard. The war against so-called ‘mis/dis/malinformation’ had been waged for some time before 2020, of course, but it took COVID-19 and the response to it for there to emerge an idea that there needed to be a globally coordinated strategy to manage information systemically. This was what an international lawyer might recognise as something akin to a “Grotian moment” – a short phase of rapid change in which a new body of rules or norms develops and gains sudden and widespread acceptance. Before 2020, the chattering classes across the West talked about ‘fake news’ as though it were a problem. After it, governments and international organisations began to collude in earnest to find methods to establish control over how much, and what, information populations get to consume.
At the heart of this shift was the World Health Organisation (WHO) itself, which relatively early on after COVID-19 began to spread came to the conclusion that a significant minority of people, on being told to jump, were not asking ‘how high?’ with sufficient alacrity. In April 2020 it issued a “call for action” in respect of the “COVID-19 infodemic”, and a few months later convened an international event (via Zoom, natch), the First WHO Infodemiology Conference, as a follow-up. This foregrounded the notion that “infodemic management” ought to be a matter of priority for governments, public health authorities, scientists, academics and so on, and led to the publication of a subsequent ‘Public Health Research Agenda for Managing Infodemics’, which itself might be said to have founded “infodemiology” as a matter of academic study. This has now mutated into an entire pseudo-discipline of its own, complete with training courses, proposed alterations to curricula for existing educational programmes, teaching manuals, course ‘modules’ and so forth.
The portmanteau ‘infodemic’ is asinine, but effectively communicates the basic idea: while an epidemic is a sudden increase in the prevalence of a disease, an ‘infodemic’ is a sudden increase in information about an epidemic. And, as the generic definition goes (it is the same in almost all documents regarding the subject produced by the WHO), “it spreads between humans in a similar manner to an epidemic” and “makes it hard for people to find trustworthy sources and reliable guidance when they need it”. This is purportedly a problem, because “during epidemics and crises” it is important to “disseminate accurate information quickly” so that people can “take steps to protect themselves, their families and communities against the infection”. There can therefore be only one conclusion: infodemics need to be managed. And a field of study – the aforementioned “infodemiology” – must be devoted to discovering the best ways to manage them.
It is important to be clear about the way in which ‘information’ is conceptualised within this burgeoning field. It is one thing to advocate suppression of information which is false – which is to say ‘mis-’ or ‘disinformation’. There is a whole host of problems associated with doing so, both as a matter of principle and in purely consequentialist terms, and we are all broadly familiar with them: the people who get to decide what is ‘true’ and what is ‘false’ are easily politicised; they are often themselves very badly informed and not particularly thoughtful; they are not usually subject to democratic or judicial oversight; they suffer from groupthink; they fall easily into suppressing opinions they disagree with on the basis of ‘falsity’; I could go on. But the essence of the enterprise of suppressing mis/disinformation does at least rely on an – admittedly childish and naïve – application of a universal human moral standard: it is good to know the truth rather than falsehoods.
Infodemic management, however, represents something of a step change in that it deliberately sets its face against the petty matter of truth versus falsity in the factual sense. What it is really concerned with is (again, to draw from the generic definition used throughout the WHO literature) “overabundance of information” (emphasis added). The problem is not people spreading ‘information’ which is false. The problem is people spreading information as such. There is just too much of it.
And this is based on a curious anthropological position with respect to the way in which human beings develop knowledge. On the one hand, as the ‘Public Health Research Agenda for Managing Infodemics’ puts it on pages two and three, “within the scientific community” there exist “systematic tools for capturing, assessing and synthesising large amounts of… evidence”, and this means that in that context “too much information is a far better situation than a lack [of it]”.
But this is only true for scientists, or rather those in the “scientific community”. For “most parts of the population”, the document goes on, being confronted with “too much” information “does not necessarily have positive aspects”. Unlike for scientists (sorry, those in the “scientific community”), when mere mortals get too much information it “makes it difficult [for them] to filter through it for what would be useful and relevant”. And “too much information can also engender a feeling of disorientation, which may induce people to lose heart, lose the perception that they have any control over what happens to them, or paralyse them from action”. And the result is “an inherent problem of too much good information”.
So, to be clear, infodemic management – while obviously it includes an anti-mis/disinformation component – is really about something else entirely: the control of information per se. The aim is not just to make sure that when people look for information what they find is accurate. The aim is to make sure that people get the approved amount of information, targeted in such a way that it is “useful and relevant to them”. This is a much grander project – not so much one of censorship or suppression of what is false, or the establishment of what is factually true, but one of the production and dissemination of a singular pre-determined moral Truth, in pre-packaged, bespoke and individuated doses. And it suggests a regime of social control of truly all-pervading scope – one which purports to grasp not only what is true or false per se, but also literally what is “useful and relevant” for each and every person to know at any given moment, such that they always act in the proper knowledge of what is right or wrong.
The result of this is more theological than it is scientific or political – a vision of a scientific priestly class, with special spiritual insight into matters of right and wrong that transcend factual accuracy, dispensing its wisdom to the flock, and supported in doing so by a temporal power, the state, which operationalises its advice and supports it with lavish funding and other privileges. Since the members of the “scientific community” alone have the “systematic tools” to get to what is really True, it is they who are placed in an exalted position which, the implication goes, it would be too dangerous for the hoi polloi to occupy. It is only The Scientist who may approach the altar, and who takes on the task of grappling with the mighty “overabundance” of information with which he is confronted there. And it is The Scientist who then, in his benevolence, returns to the congregation with what is “useful” and “relevant” to them so as to guide them in their decision-making with ‘constant modulation’ and ‘individuated kindness’ in light of what he has glimpsed behind the veil. It is, to repeat, a kind of theocratic atheism, within which ‘science’ becomes something far greater than a mere quest for truth in the old-fashioned sense – a quest, rather, for moral Truth, in which it becomes a means to discern right from wrong.
This is, in a way, only what we should expect; the rapid secularisation of the past 50 years has not produced anything quite so crass or obvious as the French Revolution’s Cult of Reason, but it seems to have vindicated those who throughout history have suggested that when human beings stop worshipping God they tend not to transform into Vulcans but rather just start worshipping other things instead (and, chiefly, their own purported intelligence).
And it also seems to confirm the views of those who have warned us that we risk falling prey to the veneration of technology itself as a spiritual phenomenon. As anybody with a brain could foresee, ‘infodemiologists’ have become very interested in AI, because the deployment of AI fits the project of infodemic management like a glove – it is the perfect “systematic tool” not only for analysing evidence but also for disseminating “useful” and “relevant” information in a bespoke way, and in real time. And so it is that early in 2021 the WHO was already developing an AI tool to carry out so-called “social listening” – so as to “help identify rising narratives that are catching people’s attention in online conversations” in order to “[help] health authorities better understand what information people are seeking so they can meet that need”.
This, naturally, comes with an awkward and glib acronym (EARS – the Early AI-supported Response with Social listening tool), and its promise, although it is not spelled out in quite so many words, is clear – that, soon enough, technology will enable public health authorities to predict in real time what information people will seek, and provide it to them, in a “useful” and “relevant” way, without them having to look for it at all. This is almost the perfect vehicle for producing an understanding of moral Truth in society, because it pre-empts deliberation and fact-based truth-seeking on the part of the population, conceived not as autonomous agents but as receptacles of approved wisdom. AI will thus provide insights permitting the authorities in question both to discern moral Truth and to ensure that it is transmitted even in advance of the population knowing that they need it. And it will therefore become a sort of automated Truth production device – a technological oracle capable of generating not just material but moral insight.
This kind of approach, it seems safe to say, will rapidly transcend the context of public health, because of course the same basic discursive relationship between governing authority and individual appears across the piece in the way in which our public life is nowadays constituted. Everywhere, we see the same dynamic playing out, in which a particular moral Truth is revealed by purported experts and then produced within the hearts of the population accordingly irrespective of what other forms of public deliberation might take place. The use cases for AI engaging in “social listening” so as to identify patterns of communication in advance, and provide “useful” and “relevant” information accordingly, will rapidly proliferate, and so will the potential for the state to magnify the degree to which it produces the knowledge of Truth in those it governs. “Infodemic management” is therefore likely the start of a process rather than a culmination – a foretaste of what will follow as the ‘censorship industrial complex’ evolves more fully into what, in a previous post, I referred to as “a political-theological edifice wherein the state takes on the responsibility for our souls, albeit typically cast in the language of ‘progress’.”
The irony of this is obvious: there is pretty much nothing you could imagine that would be better placed, to refer back to the ‘Public Health Research Agenda for Managing Infodemics’, to “engender a feeling of disorientation, which may induce people to lose heart, lose the perception that they have any control over what happens to them, or paralyse them from action” than this. But one wonders whether this is what the people in charge of this sort of thing are in the end really concerned about or whether indeed it – unconsciously – suits their interests. Regular readers of my Substack will of course be familiar with the notion that the modern state is given to perpetuate itself precisely through enervating the population so as to cultivate their loyalty, and that it does this to bolster its own position and justify its own authority. This is something to reflect on in considering the increasing interest modern governance displays towards the circulation of information and the management of its limits, ostensibly in the interests of public health but frequently extending into the realms of nothing less than the establishment of moral Truth.
Dr. David McGrogan is an Associate Professor of Law at Northumbria Law School. You can subscribe to his Substack – News From Uncibal – here.