Last night I submitted a response on behalf of the Daily Sceptic to the request from Meta’s Oversight Board for comment on the company’s COVID-19 misinformation policy. I tried to keep it fairly short and punchy.
The Daily Sceptic’s Response
I’m not going to respond to the questions directly. The way they’ve been drafted, it’s as if you’re taking it for granted that some suppression of health misinformation is desirable during a pandemic – because of the risk it might cause “imminent physical harm” – and what you’re looking for is feedback on how censorious you ought to be and at what point in the course of a pandemic like the one we’ve just been through you should ease back on the rules a little. My view is that suppressing misinformation is never justified.
The first and most basic point is that it’s far from obvious what’s information and what’s misinformation. Who decides? The government? Public health officials? Bill Gates? None of them is infallible. This was eloquently expressed by the former Supreme Court judge Lord Sumption in a recent article in the Spectator about the shortcomings of the Online Safety Bill:
All statements of fact or opinion are provisional. They reflect the current state of knowledge and experience. But knowledge and experience are not closed or immutable categories. They are inherently liable to change. Once upon a time, the scientific consensus was that the sun moved around the Earth and that blood did not circulate around the body. These propositions were refuted only because orthodoxy was challenged by people once thought to be dangerous heretics. Knowledge advances by confronting contrary arguments, not by hiding them away. Any system for regulating the expression of opinion or the transmission of information will end up by privileging the anodyne, the uncontroversial, the conventional and the officially approved.
To illustrate this point, take Meta’s own record when it comes to suppressing misinformation. In the past two-and-a-half years, you have removed, shadow-banned or attached health warnings to any content on your social media platforms challenging the response of governments, senior officials and public health authorities to the pandemic, whether it’s questioning the wisdom of the lockdown policy, expressing scepticism about the efficacy and safety of the Covid vaccines, or opposing mask mandates. Yet these are all subjects of legitimate scientific and political debate. You cannot claim this censorship was justified on the grounds that undermining public confidence in those policies would make people less likely to comply with them and that, in turn, might cause harm: whether or not those measures prevented more harm than they caused was precisely the issue under discussion. And the more time passes, the clearer it becomes that most if not all of these measures did in fact do more harm than good. It now seems overwhelmingly likely that by suppressing public debate about these policies, and thereby extending their duration, Meta itself caused harm.
Which brings me to my second point. Because there is rarely a hard line separating information from misinformation, the decision about where to draw that line will inevitably be influenced by the political views of the content moderators (or the algorithm designers). Labelling something “mostly false” or “misleading” is really just a way for them to signal their disapproval of the heretical point of view the ‘misinformation’ appears to support.
How else to explain the clear left-of-centre bias in decisions about what content to suppress? We know from survey data that content that challenges left-of-centre views is more likely to be flagged as ‘misinformation’ or ‘disinformation’ and removed by social media companies than content that challenges right-of-centre views.
According to a Cato Institute poll published on December 31st 2021, 35% of people identifying as ‘strong conservatives’ said they’d had a social media post reported or removed, compared to 20% identifying as ‘strong liberals’.
Strong conservatives were also more likely to have had their accounts suspended (19%) than strong liberals (12%).
This clear political bias is one of the reasons suppressing so-called conspiracy theories is counterproductive. One obvious case in point is Facebook’s suppression, in the first phase of the pandemic, of the lab leak hypothesis, which the Institute for Strategic Dialogue described as a ‘conspiracy theory’ in April 2020. This censorship policy was so counterproductive that today even the head of the WHO is reported to believe this ‘conspiracy theory’.
Okay, that particular conspiracy theory is very probably true. But what about when a hypothesis is clearly false, such as the claim that Joe Biden stole the 2020 Presidential election? That’s still not a reason to censor it. That particular conspiracy theory, energetically promoted by Trump himself, played a part in the violent protests by Trump supporters that took place in Washington on January 6th 2021, and for that reason anyone sharing it on Facebook will see their posts instantly removed and risks being permanently banned from the platform.
But if the intention of suppressing this conspiracy theory was to stop its spread, it hasn’t worked.
According to an Axios-Momentive poll from earlier this year, more than 40% of Americans don’t believe Joe Biden won the Presidential election legitimately, a slight increase on the number who expressed the same belief in a 2020 poll carried out before January 6th.
To be fair, I don’t think the content moderators (or the algorithm designers) are deliberately acting in a partisan way to promote their favoured political candidates and causes – at least, not most of the time. Rather, they believe removing misinformation is good for the health of democracy: it will promote civic virtues like well-informed public debate, increase democratic participation and make ordinary people more responsible citizens. But the problem is that this idea is itself rooted in left-wing ideology, a point made by Barton Swaim in a comment piece for the Wall Street Journal earlier this year attacking the new Disinformation Governance Board:
The animating doctrine of early-20th-century Progressivism, with its faith in the perfectibility of man, held that social ills could be corrected by means of education. People do bad things, in this view, because they don’t know any better; they harm themselves and others because they have bad information. That view is almost totally false, as a moment’s reflection on the many monstrous acts perpetrated by highly educated and well-informed criminals and tyrants should indicate. But it is an attractive doctrine for a certain kind of credentialed and self-assured rationalist. It places power, including the power to define what counts as ‘good’ information, in the hands of people like himself.
So what should Meta’s policy be on health-related and other forms of misinformation? Simple: leave it alone. As the Supreme Court Justice Louis Brandeis said almost 100 years ago about attempts to suppress false information:
If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.
This is known as the counterspeech doctrine, and the quote should be carved into the desk of every Facebook content moderator, algorithm designer and external fact-checker. Meta should behave as if it were owned by the U.S. Government and not engage in any act of censorship that the Supreme Court would rule contrary to the First Amendment.
As President Obama said, echoing Brandeis, before he decided that misinformation was the scourge of liberal democracy: “The strongest weapon against hateful speech is not repression; it is more speech.”