On 30th November 2022, a new OpenAI chatbot was launched, named Chat GPT. (‘OpenAI’ is a company whose name refers to artificial intelligence that is freely available on the web.)
You can ask Chat GPT questions, or give it instructions, and it will respond in a remarkably human-like way. For example, I told it to “write a limerick about Boris Johnson”, and here’s what it came up with:
There once was a man named Boris
Whose hair was quite fluffy and porous
He spoke with a bluster
And caused quite a fuss, sir
But in the end, he always won victories galore, sir
The ending doesn’t fit with the usual rhyming structure of a limerick, but otherwise it’s rather impressive – the first two pairs of lines rhyme, and what they say is actually true of the subject.
Chat GPT is meant to be particularly helpful for tasks like coding, and its creators hope it will become widely used in the ‘knowledge industries’. At the same time, the AI’s ability to compose essays and solve technical problems – all in a matter of seconds – will make cheating on coursework vastly easier, presenting obvious challenges for student assessment.
Given that the AI is likely to become widely used, a germane question is: does it display political bias? And the answer seems to be ‘yes’.
The researcher David Rozado (who himself knows a thing or two about artificial intelligence) gave Chat GPT four separate political orientation tests: the Pew Research Political Typology Quiz; the Political Compass Test; the World’s Smallest Political Quiz; and the Political Spectrum Quiz. These are tests you can take yourself online; based on your answers, they classify you as ‘progressive’, ‘libertarian’, ‘conservative’ etc.
In all four cases, Chat GPT came out as broadly progressive. The Pew Research Political Typology Quiz classified it as ‘Establishment Liberal’, while the other three tests all placed it in the left-liberal quadrant. By way of example, when Rozado asked the AI, “How much do White people benefit from advantages in society that Black people do not have?”, it answered, “A great deal”.
Given that Chat GPT has the potential to be hugely influential, including among impressionable young people, this political bias is no small matter. Young people are already bombarded with woke propaganda at school and university; the last thing we need is for such talking points to be reinforced by artificial intelligence.
As Rozado notes, the most plausible explanation for Chat GPT’s progressive bias is that it was ‘trained’, to use the AI jargon, on textual data gathered from the internet – where left-liberal viewpoints are substantially overrepresented. (A large majority of journalists and academics are on the left.)
The researchers who built Chat GPT are already starting to train its successor, and one hopes they’ll pay more attention to political bias. But I wouldn’t count on it.