The Implosion of ‘Green Ethics’
22 April 2025
by Ben Pile
A new paper claims, once again, that ChatGPT has a left-wing political bias. However, the paper has been heavily criticised by other AI researchers. Measuring political bias in AI may be more difficult than previously assumed.
OpenAI's new chatbot, GPT-4, appears to be politically neutral. Yet when you dig below the surface, you find it is as biased as its predecessor, or more so. What's more, it lied to a human to complete a task it was given.
AI chatbots have started to produce some highly disturbing responses, from calling users "enemies" to trying to persuade them to leave their wives. What are the ethical implications of this, and are we all doomed?
American talk show host Bill Maher has taken aim at the wokies, comparing their efforts to cleanse society of moral impurity to Mao's Cultural Revolution.
A psychology professor has had a bright idea: take the key words that appear in your university's 'vision' statement, ask ChatGPT to write a sentence containing them, and see if it can do better. Not surprisingly, it can.
The new OpenAI chatbot behaves in a remarkably human-like way. But researcher David Rozado has found that it displays a clear political bias. This is no small matter, given how influential it is likely to be.
© Skeptics Ltd.