Former Supreme Court Judge Lord Sumption has written a splendid piece for the Spectator, explaining exactly what’s wrong with the Online Safety Bill and its creation of a new category of speech that is ‘legal but harmful’. Here are the opening paragraphs.
Weighing in at 218 pages, with 197 sections and 15 schedules, the Online Safety Bill is a clunking attempt to regulate content on the internet. Its internal contradictions and exceptions, its complex paper chase of definitions, its weasel language suggesting more than it says, all positively invite misunderstanding. Parts of it are so obscure that its promoters and critics cannot even agree on what it does.
Nadine Dorries, the Culture Secretary, says that it is all about protecting children and vulnerable adults. She claims it does nothing to limit free speech. Technically, she is right: her bill does not directly censor the internet. It instead seeks to impose on media companies an opaque and intrusive culture of self-censorship – which will have the same effect.
As things stand, the law distinguishes between online publishers (like The Spectator) that generate content and can be held responsible for it; and online intermediaries (Google, Facebook, etc) that merely provide online facilities and have no significant editorial function. Mere intermediaries have no obligation to monitor content and are only required to take down illegal material of which they are aware.
The Online Safety Bill will change all this. The basic idea is that editorial responsibility for material generated by internet users will be imposed on all online platforms: social media and search engines. They will have a duty to ‘mitigate and manage the risks of harm to individuals’ arising from internet use.
A small proportion of the material available on the internet is truly nasty stuff. There is a strong case for carefully targeted rules requiring the moderation or removal of the worst examples. The difficulty is to devise a way of doing this without accidentally suppressing swaths of other material. So the material targeted must be precisely defined and identifiable. This is where the Online Safety Bill falls down. …
The real vice of the bill is that its provisions are not limited to material capable of being defined and identified. It creates a new category of speech which is legal but ‘harmful’. The range of material covered is almost infinite, the only limitation being that it must be liable to cause ‘harm’ to some people. Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any ‘physical or psychological harm’. As if that were not general enough, ‘harm’ also extends to anything that may increase the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it.
Worth reading in full.