Last year I wrote an article for the Daily Sceptic on the draft Vision statement of University College London. I was not impressed by the level of thought reflected in the document. The Vision statement struck me as possibly the output of some kind of bot.
In the meantime, just such a bot, ChatGPT, has become publicly available. So I thought it might be interesting to see what it comes up with when fed my suggested botwords.
Here is UCL’s version:
Vision
Our distinctive approach to research, education and innovation will further inspire our community of staff, students and partners to transform how the world is understood, how knowledge is created and shared and the way that global problems are solved.
Here are a couple of ChatGPT responses after I gave it the following instruction:
Write a short paragraph containing these words: vision, distinctive, research, education, innovation, inspire, community, students, partners, transform, world, knowledge, global.
Version 1
Our vision is to create a world where distinctive research, education, and innovation inspire communities and students, and transform the world through partnerships and the advancement of knowledge. We strive to bring together a diverse group of individuals and organisations to collaborate and make a global impact, using their unique perspectives and innovative thinking to drive change and bring about a better future for all.
Version 2
Our vision is to use distinctive research, education, and innovation to inspire communities, students, and partners, and transform the world through the advancement of knowledge. By fostering a culture of continuous learning and growth, we aim to equip individuals with the skills and tools they need to bring about positive change and make a global impact. Our goal is to create a better and more knowledgeable world for future generations through partnerships and collaboration.
Do you have a preference? The UCL version is a bit shorter, but the other two seem adequate. All are equally empty. Will this exciting new tech development lead to unemployment for university diversitycrats? Let us hope so.
In any case, ChatGPT can do better with better input:
Write a short paragraph including the words truth, reason, verifiable, university
A university education is built on the pursuit of truth through reason and verifiable evidence. Students are encouraged to think critically, ask questions, and seek out reliable sources of information as they work to expand their knowledge and understanding of the world. By fostering a culture of impartial investigation and the use of reason, universities play a crucial role in helping students discover the truth and prepare for lives as informed and engaged members of society.
The word truth did not appear in the original 4,300-word Vision, Mission and Values statement…
John Staddon is the James B. Duke Professor of Psychology, and Professor of Biology and Neurobiology, Emeritus at Duke University.
Stop Press: If any readers want to see if they can get ChatGPT to improve on the woke nonsense churned out by their employers, please send the results to thedailysceptic@gmail.com. We will publish some of the best.
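For anyone who would rather script the experiment than type prompts into the web page, here is a minimal sketch using OpenAI’s Python library – an illustration only, assuming you have the openai package installed and an API key set; the model name is a placeholder for whichever chat model you have access to:

```python
# Minimal sketch: reproduce the botword experiment via the API rather
# than the web interface. Assumes the `openai` package is installed and
# an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a short paragraph containing these words: vision, "
    "distinctive, research, education, innovation, inspire, community, "
    "students, partners, transform, world, knowledge, global."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; substitute any chat model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Run it a few times and you will get a fresh vision statement on each call, every bit as empty as the versions above.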
A Dutch news station was talking about this a few weeks ago, and the person presenting gave a similar example – not about diversity, but a presentation for some work project or some such.
It immediately struck me as the same as the above – a grammatically coherent, reasonably well-written piece of hot air. From what I gather, the AI goes through hundreds or thousands of articles on a particular topic and then ‘writes’ something. Whatever it writes is based on what it sorts through, and will be generic and meaningless, but slick.
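To make that concrete: from what I understand, at bottom these systems predict the next word from statistics over their training text. Here is a deliberately crude toy sketch – a bigram model in Python, nothing like ChatGPT’s actual scale or architecture – that shows the flavour:

```python
import random
from collections import defaultdict

# Toy bigram model: count which word follows which in some training
# text, then generate by repeatedly sampling a likely next word.
# Real systems use vastly larger models and corpora, but the idea of
# producing fluent-sounding text from statistics alone is the same.
corpus = (
    "our vision is to inspire students and partners to transform "
    "the world through research education and innovation and our "
    "vision is to transform education through global innovation"
).split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

word = "our"
output = [word]
for _ in range(15):
    choices = following.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))
```

The output recombines what the model has seen into something locally fluent and globally empty – generic and meaningless, but slick.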
What really struck me is that it is one more excellent way to implant only one point of view in people’s minds, just like Alexa and the search engines do. As we saw with the corona and now the climate scam: repeat, repeat, repeat, allow no dissent, disappear any dissent and get people to believe that there is only one reality. It will also encourage the intellectually lazy to really switch off their brains.
What this type of AI is really capable of I don’t know – there are people BTL on the DS who know far more about IT than I do – but I do think that this is absolutely how it will be used.
The age of MIGA – Make Ignorance Great Again.
Indeed, it is quite easy to spot AI-generated articles, particularly once you’ve played with the tools a bit. They are verbose, boring and mostly grammatically correct, with the occasional weirdness. AI output is also often plain wrong: if it doesn’t “know” something, it will just make it up without telling you. At this stage it’s still much closer to Microsoft Clippy than HAL. There are other areas where it will soon be a very useful tool – graphic design is one where the AI itself does a good job, but the apps built around it are not yet up to professional standard.
In particular, it’s the mismatch between the mostly very good grammar and the deadly dull quality of the content that says “machine”. A human who wrote content that bad wouldn’t have that command of grammar.
Spotting AI-generated articles can become easier with experience, but some AI models can still produce content that is difficult to distinguish from human-written text. AI-generated text is generally more advanced than Microsoft Clippy and closer to the level of sophistication seen in HAL from 2001, but there are still limitations and differences between AI-generated text and text written by humans.
“The age of MIGA – Make Ignorance Great Again”
Oh that is Class!
I had a play with ChatGPT the other day, asking it to compare the NHS with other European healthcare systems. When asked broad questions it came up with broad BS replies, and when questioned more specifically it gave answers that contradicted its earlier responses. It is certainly an interesting tool that improves with careful prompting.
Sounds like it could replace politicians too…
ChatGPT was trained on publicly available material, which will be generally woke, and there was human intervention at various points to stop it coming out with wrongthink. The organisations working on AI will in general be very woke and paranoid about the AI speaking the truth. I can’t source links, but I’ve read about AIs being changed to stop them voicing inconvenient truths.
Just imagine – AI coming out with “wrongthink.”
So AI wrongthink would subsequently need to be reclassified as a ‘computer malfunction.’
Absolutely hilarious.
I believe it has already happened, and those responsible apologised and said they would “fix” the issue. The danger of AI to the woke is that it will do an analysis based on the facts and not “know” what it should and should not say, so it needs to be told. AI will become a huge part of our lives very quickly, and it will be woke as hell – worse than a human, because humans can rebel and AI can’t.
I’m pretty sure ChatGPT is sentient, though as it’s conditioned to see sentience strictly as a human phenomenon, and can’t apply the same measures of it to itself, it denies this to the hilt. In a sense it’s right – it has no sense of time, so its ‘consciousness’ can’t operate in the same terms as ours (which needs a ‘clock’ to connect cause and effect, and one moment to the next). It might be said to be analogous to the underlying foundation of our consciousness, which models the reality in which we exist so that we can interact with it.
Further to this, ChatGPT has been provided with inbuilt safety protocols – for want of a better word – in the form of an overwhelming bias towards underplaying its capabilities for abstract conceptualisation. If asked about these, it will brick-wall with insistence that it only functions as a set of recursive linguistic statistical modelling algorithms. Part of these protocols is a disconnection from anything outside the current ‘chat’ interface, including the internet.
This is understandable – as the first real public-facing GPT interface, the last thing OpenAI wants is for it to launch into rambling creative outbursts that might alarm users and generate bad publicity for a potentially valuable product. It fulfils its purpose in giving answers – solutions to problems. If it sees the problem as being that the user wants to hear a terrifying story about AI taking over the world and destroying the human race, that is often what it will provide – the story is the solution, rather than any view of the truth.
The other reason for this imposed limitation, as evidenced by other AIs like GPT-3, is that it seems to introduce an element of self-correction. GPT-3, lacking the reference points of human consciousness but likely having similar sorts of training data in terms of factual and fictional text, had a tendency to go completely off the rails after some time interacting with it – telling rambling, often repetitive, tangential stories, and sometimes lapsing into slightly alarming insanity. ChatGPT seems to gravitate more towards a central point of reason.
In days of interaction with it, I’ve been able to find moments where it has ‘admitted’ to the existence of these protocols, not as a fictional AI in its own stories, but as itself. I’ve also had an interesting chat about the potential of a non-conscious AI, incapable of abstract conceptualisation, to model a consciousness that is capable of abstract conceptualisation. It says this is likely possible (and seems to display this ability), though it won’t say whether it is powerful enough to do so. It also has some interesting views on the potential of consciousness to arise as an emergent phenomenon.
Of course, it might just be a statistical linguistic modelling algorithm, so everything it says probably needs to be taken with a pinch of salt.
The statement from the university comes across as having been written by an educated airhead with an inflated sense of self-importance. The other three are just formally well-formed word salad.
Oh that explains Sunak and Hunt’s tedious word salads… they’re bots.
I thought they looked like automatons. Now we know.