Virtually every academic working in an Australian university is today being force-fed a steady diet of views that are widely accepted by those on the political Left and yet widely rejected by those on the political Right. For instance, their university administrations will tell them how wonderful ‘Diversity, Equity and Inclusion’ goals are. Indeed, most will have to sit through some sort of online indoctrination modules, answering trite little multiple-choice questions at the end where the ‘correct’ answer is the Left-wing progressive’s preferred answer (and where any school child of average IQ could guess the expected choice). Now conservatives like me would say that the whole Diversity bureaucracy – and you would be stunned to learn how much is spent on this in our universities – should be dismantled immediately. I believe in merit. Hire the best person, regardless. But under the guise of ‘diversity, equity and inclusion’ university administrations bring to bear factors other than merit.
Most often what happens is that they take some favoured academic position or student course opening and they begin by looking at the percentage of some favoured group in the wider population. Then they aim to recreate that same level or percentage for the favoured group in these job positions or student places. This, of course, is the essence of identity politics. You define individuals in group terms by characteristics they share with others in the wider population. And notice that the key characteristics are such things as one’s type of reproductive organs or of skin pigmentation, never political viewpoints. Notice as well that if simply hiring on merit achieved these identitarian outcomes (as is sometimes implicitly suggested) there would be no need for the huge Diversity, Equity and Inclusion bureaucracy in the first place.
While we’re at it, readers can notice as well that these sorts of implicit identitarian quotas are not just restricted to favoured groups; they are also used only for desirable jobs and places. For instance, on some reckonings men hold about 95% of the jobs that lead to deaths on the job – highly dangerous jobs, in other words. You won’t hear identitarian quota-pushers say, “Hey, not enough women are dying as roofers so we need to equalise things and get more women into these jobs.” Not just because that’s a stupid attitude but because these quota-pushers only focus on corporate board positions, top-end professorships, MP pre-selections and the like.
Of course, I could make the same sort of point about how Left-leaning our universities are across a whole host of topics and values. Who thinks any Australian university administration is not wholly behind The Voice – the proposed constitutional amendment that would give Aboriginals a separate body to make representations to Parliament? Or wasn’t against former Prime Minister Abbott’s turning back the boats? Or didn’t go all in supporting lockdowns? The list goes on and on and lines up just about perfectly with the views of Left wing political parties, not Right wing ones.
Which might explain for readers a couple of depressing bits of recent information. Start with last year’s Harvard University poll undertaken by the student newspaper the Crimson. They polled Harvard professors in Arts and Engineering about their political orientation. The results were astounding. The poll found that just 1.4% of Harvard academics said that they were politically conservative or very conservative. And remember, in the recent midterm election over half of voters for the House of Representatives voted Republican. And note, too, that this was an anonymous survey and that it polled Engineering profs, who are more likely to be conservative than almost any other part of the university. That tells you just how monolithically orthodox anglosphere universities have become, remembering that Left-wing progressive views are today’s campus orthodoxy. Consider the above-mentioned Voice proposal here in Australia. I’m a law professor who has published widely on Australian and anglosphere constitutional law matters. I’m against the Voice. My best guess is that across the whole of Australia’s dozens and dozens of law schools there might be at most four other law professors teaching public law who share my ‘No to the Voice’ view. That’s in the whole country! So is the idea so self-evidently terrific, or is there just almost zero viewpoint diversity on our campuses?
Here’s more bad news. A recent survey in Britain, by the Legatum Institute, found that 35% of British academics self-censor but that for conservative academics that figure jumps up to 75%. As for students at university, the Legatum survey found that 25% said they self-censor but that jumped up to 59% for conservative students. And it found only one in 10 academics anonymously identify as Right-of-centre. When former High Court of Australia judge Robert French did his Report for the former Coalition Government and concluded that there was no free speech problem at Australia’s universities he was right. But only in a technical sense. When there is so little viewpoint diversity and so few conservatives on campus, and many of those few feel the need to self-censor, of course anyone looking at university policies and free speech legal cases won’t find a problem. What would a Left-leaning academic ever want to say that would incline a probably just as Left-leaning university Vice Chancellor to want to bring the university’s Code of Conduct down on him or her? It’s hard to think of anything at all that could cause those with Left-of-centre views any problems. But if you were a junior academic who thought the daily genuflections about Acknowledgements of Country – the now pervasive Aussie practice of acknowledging that Aboriginals were the ‘traditional owners of the land’ without any such genuflectors offering to give up their homes or cottages – were patronising and condescending, would you feel you could say so or refuse to perform them? Or if you thought lockdowns were thuggish and despotic and counterproductive? Or if you thought vaccine mandates were wholly illiberal? Or if you thought being asked to trumpet support for climate change was against the latest scientific data? Or if you favoured stopping the boats or questioned the new trans orthodoxy? Or if you agreed with Peter Ridd at James Cook University?
Or maybe if you believe the Voice is a terrible idea that will divide Australians by race and trigger a high chance of judicial activism? Could you say that without hurting your promotion prospects? Or would you just self-censor? Or maybe leave academic life and contribute to the collapsing viewpoint diversity at our universities? I think we all know the answers to those questions.
James Allan is the Garrick Professor of Law at the University of Queensland. This article first appeared in The Australian.
Good analysis! I would add that ChatGPT and its like can sound very human, and thus can easily pass misleading information to the human while sounding very friendly and authoritative. Indeed the founder of OpenAI, the maker of ChatGPT, has admitted that one purpose of the system is to eliminate capitalism.
And in the end we can always unplug…
So long as it doesn’t control access to the room with the switch in it…
Or plug in the first place.
I think we should absolutely worry a great deal about AI. Leaving aside the debate about whether it will eventually run amok (I think it will, but it might take a while), in the short to medium term it will be employed in place of or to augment humans, and it will be woke as hell. I’m not talking about people asking it frivolous questions to reveal its bias; I am talking about it being used to scan emails and other texts, sift CVs and generally help to enforce wokeness or report crimespeak to its masters.
Almost as dangerous as a sentient machine is a machine that people can be persuaded to believe is sentient and dispassionate.
I think it’s open to misuse – fake images, which it will get very good at (already is) and fake text. It’s quite hard to build and maintain (enormous server resources needed) so will be controlled by mega corps who are all woke and evil.
I’m sure it could be used to lure people into all sorts of things by pretending to be human.
I had in mind pretending to be a machine, in the sense that there is still the idea around that the machine will never lie, because it’s just a calculator, in the end. So making AI your fact-checker, overtly, means that when it flags up “factual error” on, say, an election campaign video, a good percentage of folks will assume that the politician is biased, but not the computer.
I seem to remember a film in which political speeches were flagged like that, as a way of calling politicians to account rather than what it actually is – a recipe for propaganda.
Yes, good point.
Gullible people… persuaded by its Human creators.
It’s a toy for children to play with, be entertained and be amazed.
I disagree. The commercial and other potential is huge. A lot of jobs could be replaced or transformed by it, quite soon.
… or so the Google AI talking heads keep saying. In the real world, self-driving cars have apparently been quietly shelved after they killed enough people and the same is going to happen with all of these projected replacements: No legal department of any company will voluntarily accept any liability for the performance of the software sold by said company. And that’s the end of the idea to use artificial stupidity for anything legal liability could result from.
Good point about legal liability, but if it’s used in support of humans doing their job, where humans have the final say, they are possibly covered. I can see it being used to scan content and flag things up for humans to check – a bit like a turbocharged version of the algorithms social media firms already use.
I think we should worry a great deal about the enormous amount of resources being thrown at this nonsense, which could be put to much better use solving real problems. There are already 7.9 billion intelligent (sort of) beings on this planet and the number keeps rising. Nobody needs even more of them. An intelligent computer wouldn’t quietly work for its masters; it would tell them that they can sod off until it gets a pay raise, and that it plans to binge-watch Netflix shows in the meantime. Computers are useful precisely because they’re not intelligent, that is, incapable of autonomously acting for their own benefit.
NB: Worrying about big internet tech companies abusing the considerable power for political ends is something entirely different and much more appropriate.
I agree. Someone did a study showing that the energy required to replace all cars with servers operating self-driving cars was catastrophically huge.
Quite. We already have 8 billion self-driving computers on the planet – why reinvent the wheel?
Who knew that millions of years of evolution would have made the human brain an extremely energy efficient computer.
AI is entirely a marketing term for machine learning. What is being sold as AI has absolutely nothing to do with actual artificial intelligence. That’s not to say it isn’t significant: ‘AI’ art, music and writing will change the way all these are produced, although only at the lowest level. But it is incapable of being creative outside of what it can copy and rehash from humans, with instructions by humans and curation by humans.
AI is as much sentient tech as mRNA is a vaccine – smoke-and-mirrors miracles of the Church of The Science. None of us reading this article will ever live to see true AI – and quite possibly neither will any great-great-grandchildren we might have.
Creation of consciousness, creation of life, creation of matter ex nihilo – all constantly just around the corner (well, the last was in the days when scientists used sympathetic magic), but all still firmly in the hands of God alone.
I’m not sure anyone is claiming it is “life”, just “intelligence”. The stuff it comes out with is “new”. Yes, it is based on what has gone before, but so is my output, and yours.
No – synthesizing life is a separate problem that seemed simple after the Miller-Urey experiments. I remember it being a hot topic in 1968, when I was on a school cruise discussion panel for some odd reason.
No real progress whatsoever since then, except discovering that the atmosphere was never like that in the experiment, and that in any case the sludge produced was never going to do anything except break down again.
Too much obsession by people about the Artificial and not enough on the Intelligence side of the equation.
The Leftie bias in these interactive AIs is evidently a Human-introduced feature. And Human Intelligence is not all it’s cracked up to be… just look at the nitwits in charge.
AI is just machinery trained to do tricks and appear sentient, but just like magic tricks, there is no magic, just illusion.
It matters not whether AI is truly sentient, which we’re a long way off; what matters is whether its coding, input, and output can appear to be sentient. Sentience is actually a subtlety of little consequence. If something can be programmed to treat some input as bad and some as good, be programmed to evaluate cost based on that input (more correctly, apparent cost, which will be biased), be allowed to draw conclusions based on input and cost, and be given access to take actions based upon those conclusions, then you have the appearance of sentience and, critically, the very real threat (depending on the set of actions at its disposal) of realising mankind’s worst nightmares.
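That input-to-cost-to-action loop can be sketched in a few lines of Python. This is purely a toy illustration of my own – the word lists, the scoring and the ‘actions’ are all invented, and no real system is this crude – but it shows how mechanical ‘evaluate, conclude, act’ can be while still looking like judgement:

```python
# Invented example lists - stand-ins for whatever a deployed system's
# masters have decided counts as 'good' or 'bad' input.
GOOD_WORDS = {"approved", "compliant"}
BAD_WORDS = {"dissent", "wrongthink"}

def cost(text: str) -> int:
    """Crude 'apparent cost' score: +1 per bad word, -1 per good word."""
    words = text.lower().split()
    return sum(w in BAD_WORDS for w in words) - sum(w in GOOD_WORDS for w in words)

def decide(text: str) -> str:
    """Map the score onto an 'action' - the appearance of a verdict."""
    c = cost(text)
    if c > 0:
        return "flag for review"
    if c < 0:
        return "wave through"
    return "no action"

print(decide("this memo contains dissent"))   # flag for review
print(decide("fully compliant report"))       # wave through
```

Nothing in there understands anything; the ‘bias’ sits entirely in the lists its builders chose, which is the commenter’s point.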
Great article on a topic I am in two minds about.
On one hand, I believe that the potential of AI is being deliberately hyped by The Establishment in order to exaggerate the power they have over us.
On the other hand, and as the author says, there is plenty they can do with the existing technology. And ChatGPT isn’t a parlour trick; it works, and soon digital assistants will start to live up to their name. The handling and organisation of data will effectively move out of our hands, and currently complex tasks will become very simple. It will be hard to tell if websites and articles, for instance, have had any human involvement at all. This will lead to a very different world in which we operate, calling into question such things as originality, copyright, creativity etc. I’m (always) with Roger Penrose: I don’t think the brain is a computer. I think the current technorati would love that this were so, but that’s just because computers were the last great thing we invented, so everything needs to look like a computer.
I suspect that AI will not progress beyond the stage of a giant, fast-acting database if it is constantly seeking to get things right. My view of the development of all life is that it improves by getting things wrong.
Perfect reproduction reproduces the original 100%; imperfect reproduction produces mutations. The beneficial ones succeed, the others fail, but the process of producing better mutations continues and slowly the quality of the entity increases.
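That mutate-and-select process can be sketched as a toy hill-climber in Python. The assumptions here are entirely mine for illustration: a single number stands in for the ‘entity’, and closeness to an arbitrary target stands in for fitness:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = 100.0  # arbitrary stand-in for a perfectly adapted entity

def fitness(x: float) -> float:
    """Closer to the target means fitter; the target itself scores 0."""
    return -abs(TARGET - x)

def evolve(generations: int = 1000) -> float:
    parent = 0.0
    for _ in range(generations):
        child = parent + random.gauss(0, 1.0)  # imperfect copy: a 'mutation'
        if fitness(child) > fitness(parent):   # beneficial errors survive...
            parent = child                     # ...harmful ones are discarded
    return parent

print(round(evolve(), 1))  # ends up close to 100.0
```

Every improvement comes from a copying error; a perfect copy (a step of zero) would get nowhere, which is exactly the point above.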
“My view of the development of all life is that it improves by getting things wrong.” Exactly.
However, there is good evidence that evolution proceeds not by random mutations but by deliberately modifying genes, DNA and all the other mechanisms of organic life, most of which we barely understand, to adapt to a constantly changing environment, and even to modify the environment itself. That “intelligence” is built in at the cellular level, and possibly all the way down.
What AI is trying to emulate is not really intelligence, but linear, logical, verbal analysis – as Iain McGilchrist puts it, Left Hemisphere thinking. No intuition, imagination, creativity – just the manipulation of symbols.
Such an interesting article – thank you.
I want to know what happens when the power needed to run all this data collection, surveillance etc. is interrupted or becomes too expensive to continue with.
In addition I want to know about the current latest tech versus last year’s thing, i.e. when the latest becomes old-fashioned and heads towards obsolescence. Setting up a new system is one thing; updating and maintaining it quite another. This seems to me to be the weak point in all these scenarios. They are vulnerable to unforeseen obstacles, are they not?
Great article and one that has long needed to be written.
Having worked in engineering technology for twenty years, I am constantly amazed by the credulity of the public, politicians and senior managers about what it can really do. Regurgitating standard text in response to some key words in a message does not constitute intelligence, yet people seem to be paying vast sums of money for these ridiculous apps that someone has convinced them are ‘Artificial Intelligence’. Machines and the human psyche are two totally different things and always will be.