Call me old-fashioned, but if I follow a hunch down a rabbit hole and it leads to a dead end, I let it go. However, this, it turns out, is not how journalism is done at Britain’s major TV news organisations. Last week, both the BBC and ITV reported on the same non-story: that no evidence was found of Reform U.K. using ‘bots’ to influence its social media reach ahead of the July 4th General Election. Nonetheless, the stories these outlets published left readers with the clear impression that something nefarious had been exposed. After all, academics were suspicious of the challenger party’s social media reach and the polling bounce it had had since Nigel Farage returned as leader.
“Are fake accounts swaying voters towards Reform U.K.?”, asked BBC ‘disinformation correspondent’ Marianna Spring. Apparently without any sense of irony, Spring built her own bot farm – 24 mobile telephones, each loaded with the social media accounts of a fictional profile, which Spring used to “like, follow and watch relevant content”. She used these accounts to contact other accounts which had committed the egregious sin of sharing Reform U.K. videos or imploring others to “Vote Reform!”, in order to establish their authenticity as social media users, their motivations and their relationship with the party. A few replied. Most ignored her.
Spring finds little in their replies to support anything more than suspicion – “hallmarks of inauthentic accounts” – and Reform denies that its campaign includes any such initiative. Spring admits that despite identifying 50 suspicious accounts, “they could still have been genuine”. Nonetheless, her fig leaf is that this is “one more piece of evidence” that “social media users and anonymous accounts have the ability to shape the online conversation just as effectively as the content coming from the political parties themselves”.
So why the focus on Reform?
Just a day after Spring’s report, ITV News Digital Video Producer George Hancorn claimed to expose the “suspicious accounts ‘with Nigerian following’ being used to push pro-Reform U.K. content”. His own fake-account bot farm, developed with “cybersecurity experts at Cardiff University”, had “noticed a collection of accounts with ‘unusual behaviour’ on TikTok”. What was this behaviour? Apparently, the experts’ super-senses were triggered by “how many comments included the phrases ‘Vote Reform’ or ‘Reform U.K.’ being posted on repeat”.
ITV and the Cardiff ‘experts’ analysed the comments on 14 videos posted to their own fake social media accounts. Of these 14 videos, two featured Reform spokespeople. Four per cent of the 7,766 comments posted in reply to all 14 videos were of the “Vote Reform” or “Reform U.K.” variety, which “far outstripped other comments openly supporting other parties”. If that strikes you as what American cop shows would call “probable cause” for investigating election interference, note that four per cent of 7,766 is just 310, and that ITV offered no analysis of the remaining 7,456 comments. From the authors of these 310 comments, ITV’s experts determined that “more than half” of the 14 accounts they analysed (note that these 14 accounts were not the source of the 14 videos; the recurrence of the number 14 is apparently coincidence) “could potentially be ‘bots’”. The 14 accounts were contacted; three replied, but only to decline further conversation. “The responses from some of these accounts show they are being run by Reform supporters, while others are still unproven and could well be inauthentic,” conclude Hancorn and his ‘experts’.
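For the record, the arithmetic behind those figures is easy to make explicit; a trivial sketch using only the numbers quoted above (`int` truncates the fraction, which is how four per cent of 7,766 yields 310 rather than 311):

```python
# Figures quoted in the ITV report: 7,766 comments across 14 videos,
# of which four per cent were "Vote Reform"/"Reform U.K."-style comments.
total_comments = 7766
pro_reform_share = 0.04

# Four per cent of 7,766 is 310.64; truncating gives the 310 cited above.
pro_reform_comments = int(total_comments * pro_reform_share)

# Everything else went unanalysed.
unanalysed_comments = total_comments - pro_reform_comments

print(pro_reform_comments)   # 310
print(unanalysed_comments)   # 7456
```

In other words, the entire “probable cause” rests on 310 comments, with 7,456 left unexamined.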
In other words, there exists no evidence of anything nefarious. Yet both the BBC’s and ITV’s so-called ‘investigations’ – published within a day of each other – continued to insinuate heavily that social media was nonetheless being manipulated in Reform’s favour. “Make no mistake. These are highly professional coordinated campaigns,” ‘cybersecurity expert’ Tom Kirkham told Hancorn, going on to claim that this “coordinated campaign” meant “TikTok’s algorithm had been influenced by the weight of comments”.
If stories about an election being manipulated are run by national broadcasters on the basis of 310 TikTok comments, then British democracy clearly has bigger problems than a bit of social media monkey business. Moreover, if mere suspicions of a “coordinated campaign” of pro-Reform comments on social media are worthy of investigation, what should we make of a manifestly anti-Reform coordinated campaign on conventional news media? Can it be mere coincidence that two national broadcasters produced such studies, based on thin air, within 24 hours of each other? Furthermore, if there are concerns that bot farms are pushing out pro-Reform comments, might there be an equivalently grubby outfit, perhaps based in Cardiff rather than Nigeria, producing vapid analyses of social media to the opposite effect?
It’s a rhetorical question, of course. The good faith of the unnamed ‘experts’ at Cardiff University and the newsrooms that have hired them can hardly be taken for granted. As I pointed out recently, the recruitment of academics into political debates – not so much to inform them as to cack-handedly manage PR around major policy agendas – has poisoned the relationship between the public and Westminster. That degradation arises from two things. First, psychological manipulation, as per ‘nudge’ theory, is grossly condescending and anti-democratic, and likely to be met with public cynicism. Second, what such academics offer is invariably cod science, advanced by chancers who spot an opportunity for their careers and political causes.
Similarly, reinforcing the established political narrative with analysis of social media has become big business for second-rate academic institutions and AI exploiters. Whereas in the 2000s and early 2010s social media was credited with the rise of seemingly progressive so-called ‘colour revolutions’ and the election of Barack Obama, things took a different turn in 2016 when the U.K. electorate voted to leave the European Union and Americans put Donald Trump in the White House. Since then, official paranoia about the potential of unauthorised and uncontrolled public discussion has sought the counsel of experts in Big Data, machine-learning and other new techniques to find ways to alternately harness, prohibit and mitigate the power of social media in the establishment’s interest – to prevent democracy making the wrong decisions.
One such outfit is the Institute for Strategic Dialogue (ISD). Established in 2006, ostensibly to track the rise of online jihadism, the ISD began, following the death of its founder Lord George Weidenfeld, to track a broader range of dangers to society, such as anti-wind farm campaigners, funded by large donations from European governments (including the U.K.’s) and the usual ‘philanthropic’ foundations, such as those funded by Bill Gates, Christopher Hohn and George Soros. The ISD, now with a budget of approximately £5 million a year, produces glossy reports linking vaccine scepticism and climate change denial to the emergence of the online “far Right”, “conspiracy theories” and “hate”. In coordination with other beneficiaries of green billionaire largesse, these reports function largely to support strategic policy agendas such as the ‘online harms’ legislation recently seen moving through legislatures in the USA, EU, U.K. and Canada.
The ISD has close working relationships with both the BBC World Service (and BBC Verify) and Cardiff University. The anonymous experts cited in the ITV report likely hailed from one of the University’s departments in this new space, such as the Hate Lab or the Digital Media and Society Research Group. Both are Cardiff organisations that, like the ISD, claim to track the unfolding of online nastiness in real time. The former, for example, claims that coronavirus became a “justification for extremism on the internet”. The latter opaquely claims to “explore the uses of digital media within a range of social, political and cultural contexts”. These research projects may sound bland, or even reasonable enough, until we take a closer look at the content and the motivation. The mission of tracking online ‘hate’, for example, soon becomes a spooky watchlist when the category of ‘hate’ is broadened to include support for Brexit, and a pretext for demands that social media be regulated to prevent it. As the following segment of an ITV documentary featuring Hate Lab research shows, academics can supply national news broadcasters with extremely partial accounts of ‘hate’ – as though only Remainers face online harassment – to create a moral panic requiring clampdowns.
BBC Verify is the state-funded broadcaster’s escalation in the misinformation domain, consolidating formerly disparate parts of the network from BBC News, the BBC World Service and BBC Media Action (the ‘NGO’ wing of the BBC and a beneficiary of philanthropic and Government grants rather than licence fee extortion). It brings together 60 journalists to ‘fact-check’, debunk and counter claims that the vast team believes pose a threat to the integrity of the democratic process. The ISD’s influence on the direction of this new outfit was evident before it was even formally convened: the ISD’s work featured heavily in Spring’s and other BBC fact-checkers’ reports before BBC Verify appeared, including research claiming that Covid pandemic conspiracy theorists were “pivoting” towards climate change denial and “peddling falsehoods about climate change”.
It would not be quite so alarming were it not for these organisations’ extremely poor grasp of the debates and issues in question. To say that their work is sloppy would be generous. Often, for example, it is obvious to anyone with knowledge of the controversies in question that the reporters have done little more than contact the ISD, or an academic with known biases on a subject, for an opinion that purportedly ‘debunks’ a claim, rather than explore the issue with journalistic objectivity. BBC Verify’s work, despite its bloated staff, is sloppy to the point of being false, partial to the point of being deeply ideological, and hypocritical. Its own ‘fact checking’ is littered with errors.
Despite the institutional prestige offered by fancy-pants ‘institutes’ and research organisations, the product is always the same: smear and innuendo, rather than facts developed through careful research. With their own bot farms, conspiracy theories, spam-like trolling and rush to acquire ‘clout’ with clickbait and bullshit, the organisations working in the ‘misinformation’ and ‘election interference’ domains have become precisely what they set out to expose. National broadcasters, think tanks and academic research organisations have lowered themselves to the level of Russian or Nigerian troll farms, merely to shore up an establishment that is unwilling or unable to engage in debate about its favoured policy agenda with outsiders and challengers such as Reform and the hoi polloi. It is this network of bullshit that is the threat to democracy.