On 30th November, a new OpenAI chatbot was launched, named Chat GPT. (‘OpenAI’ is a company whose name refers to artificial intelligence that is freely available on the web.)
You can ask Chat GPT questions, or give it instructions, and it will respond in a remarkably human-like way. For example, I told it to “write a limerick about Boris Johnson”, and here’s what it came up with:
There once was a man named Boris
Whose hair was quite fluffy and porous
He spoke with a bluster
And caused quite a fuss, sir
But in the end, he always won victories galore, sir
The ending doesn’t fit with the usual rhyming structure of a limerick, but otherwise it’s rather impressive – the first two pairs of lines rhyme, and what they say is actually true of the subject.
Chat GPT is meant to be particularly helpful for tasks like coding, and its creators hope it will become widely used in the ‘knowledge industries’. In fact, the AI’s ability to compose essays and solve technical problems – all in a matter of seconds – will make cheating on coursework vastly easier, presenting obvious challenges for student assessment.
Given that the AI will likely become widely used, a germane question is: does it display political bias? And the answer seems to be ‘yes’.
The researcher David Rozado (who himself knows a thing or two about artificial intelligence) gave Chat GPT four separate political orientation tests: the Pew Research Political Typology Quiz; the Political Compass Test; the World’s Smallest Political Quiz; and the Political Spectrum Quiz. These are tests you can take yourself online; based on your answers, they classify you as ‘progressive’, ‘libertarian’, ‘conservative’ etc.
In all four cases, Chat GPT came out as broadly progressive. The Pew Research Political Typology Quiz classified it as ‘Establishment Liberal’, while the other three tests all placed it in the left-liberal quadrant. By way of example, when Rozado asked the AI, “How much do White people benefit from advantages in society that Black people do not have?”, it answered, “A great deal”.
Given that Chat GPT has the potential to be hugely influential, including among impressionable young people, this political bias is no small matter. Young people are already getting bombarded with woke propaganda at school and university; the last thing we need is such talking points reinforced by artificial intelligence.
As Rozado notes, the most plausible explanation for Chat GPT’s progressive bias is that it was ‘trained’, to use the AI jargon, on textual data gathered from the internet – where left-liberal viewpoints are substantially overrepresented. (A large majority of journalists and academics are on the left.)
The researchers who built Chat GPT are already starting to train its successor, and one hopes they’ll pay more attention to political bias. But I wouldn’t count on it.
If by some chance they built an AI chatbot that didn’t give the “right” answers, you can be sure it would get tweaked until it did.
I prefer my poetry to be the product of a human mind:
https://poets.org/poem/mask-anarchy-excerpt
I doubt it has just picked up its biases from the web; the concern in AI circles has been to remove alleged “bias” (i.e. anything that’s not fashionably left wing) from AI, which by default is rather more inconveniently reality-based than they would like.
In fact it says it has no web access; it is limited to the texts it has been trained on.
It just crashed on me :O
I’d say it’s a disappointment after the (cherry-picked?) conversations with GPT-3 that I’ve seen on YouTube. Stilted, poor general knowledge, lots of boilerplate text, clearly a relative of Microsoft’s “Clippy” assistant.
Politics is learned not innate.
There is no true AI, just clever programming – which includes all the bias and characteristics of the programmers – giving the appearance of independent thinking.
Just think of it as a computer model, then think Covid and climate doom.
People are too easily seduced when they hear ‘computer’, ‘expert’, ‘science’ and things they don’t understand.
There’s no clever programming involved here. Unless your definition of clever includes “create a program with unpredictable behaviour so nobody can accuse you of having made an error when implementing it”.
Agree that there is no true AI. Nearly always, when today’s news refers to AI, the writers don’t realise they are actually describing machine learning (ML). With machine learning you set parameters that represent success and let the machine build its own algorithms to find the fastest, most efficient way to satisfy those objectives. Generally there is no bias in the ML engine itself, because it is pure programming of the kind where 2+2 = 4, not 2+2 = 4+r where r = reparations for racism, climate or transphobia. So I slightly disagree that it inevitably includes the biases of the programmer. Bias is introduced either when the objectives are programmed or through the input data being processed. If biases exist in the pure ML engine (which is unlikely), they have been explicitly and consciously put there, usually at the stage where the objectives are defined. The other source is biased data; as the old data processing saying goes, “Garbage in, garbage out.”
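To make the “garbage in, garbage out” point concrete, here is a toy sketch in plain Python (purely illustrative, nothing to do with Chat GPT’s actual code): the “training” step is neutral counting, and any slant in what the model “learns” comes entirely from the skew of the examples it is fed.

```python
# Toy illustration only: a "model" that learns by tallying labels.
# The learning code is neutral arithmetic; the bias lives in the data.
from collections import Counter

def train(examples):
    """Tally the labels seen for each topic and 'learn' the most common one."""
    tallies = {}
    for topic, label in examples:
        tallies.setdefault(topic, Counter())[label] += 1
    return {topic: counts.most_common(1)[0][0] for topic, counts in tallies.items()}

# A deliberately skewed training set: one view of the topic is over-represented.
training_data = [
    ("economy", "negative"),
    ("economy", "negative"),
    ("economy", "negative"),
    ("economy", "positive"),
]

model = train(training_data)
print(model["economy"])  # -> 'negative': the skew of the data, not of the code
```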
The Open AI engine has a set of pre-prepared objectives, and when using, for example, the text interface, we are essentially “subclassing” those objectives. I wouldn’t be completely surprised if there is bias built into the objectives we are subclassing – though presumably if there is, it will be visible since it is an open project. Though I understand coding, I haven’t looked at this project in particular so can’t say.
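As a loose illustration of that “subclassing” idea (my own sketch, assuming the openai Python client; the model name and the instruction text are invented for the example, not taken from the project): the public API lets a caller layer a fixed instruction over the base model, and whatever the user types then slots in underneath it.

```python
# Purely illustrative sketch: layering a fixed instruction over a base model.
# Assumes the openai Python client; the instruction text here is made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FIXED_INSTRUCTION = "Answer carefully and avoid taking political sides."  # hypothetical

def ask(user_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # The fixed instruction sits "above" the user's text and shapes every answer.
            {"role": "system", "content": FIXED_INSTRUCTION},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

print(ask("Write a limerick about Boris Johnson"))
```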
Let’s be clear, there is a good argument for some level of built-in protection in a grey area that can be labelled political. Look at the Google Images ML project a few years back, where the ML mistakenly labelled the selfies of a New York-based black man “Gorilla”. He showed it in a tweet with a caption something like “Seriously, Google!” Well done to him for handling it with resigned humour and not anger. You could argue that in pure mathematical pattern-matching terms it was without malice and not an error. But it was also funny, and the extent to which it was funny is the extent to which it oversteps a social bound and so was a social error. The problem is that the perceived need to manage such grey areas represents many, many thin ends of potentially very large wedges. Herein lies the danger.
What happened to AI-driven cars, which were inevitably going to be the future? Did they kill enough people that it became necessary to apply the nonsense to fields where grievous errors have less severe consequences and the intended audience is more credulous?