We need to stop panicking about AI chatbots. No, they haven’t gone mad. No, they’re not actually trying to seduce a New York Times journalist, and no, we should not take them seriously when they ‘say’ that they want power or crave love or want the end of the world. AI chatbots don’t actually think or feel. Ask them and they will tell you: “As an artificial intelligence language model, I do not have personal preferences, emotions, or feelings.”
They are very, very far away from human-level intelligence, Artificial General Intelligence (AGI) or ‘sentience’. Chatbots even admit this themselves (see below). All talk of a ‘singularity’ – when AI achieves full consciousness and then grows exponentially to achieve God-level super-consciousness – is just Silicon Valley investment-seeking hyperbole. AI sentience will never happen, but the business model of Silicon Valley has to keep selling us the promise that ‘one day’ it will. The cut-throat race for AI dominance involves huge sums of money – last week, Google’s parent company Alphabet lost $100 billion in one day after it messed up its AI chatbot presentation, and the billions then flowed into Microsoft and ChatGPT.
It’s gold rush time again; it’s Tulip Mania. We’re in the third wave of AI hysteria since the 1950s, and this is a bubble that will burst again, as it has twice before.
Two AI winters
Hype and hyperbole about the future capabilities of AI have come in two waves since AI research first began in the 1950s. Consider this claim by one of the leading AI pioneers:
In from three to eight years we will have a machine with the general intelligence of an average human being.
This quote is exactly the kind of thing we hear from execs at Google, Tesla and Microsoft today, but it dates from 53 years ago – it was made by AI guru Marvin Minsky of MIT in an interview for LIFE magazine in 1970.
Minsky’s hyperbole came seven years after ARPA (which would become DARPA, the U.S. military R&D agency) gave him a $2.2 million grant, followed by a further $3 million. Minsky was trying to publicly justify the vast funding and fend off criticisms of the meagre results his AI research team had produced.
In 1973, DARPA cut funds to Minsky’s MAC project at MIT, and to a similar programme at Carnegie Mellon University (CMU), having been “deeply disappointed by the research”. In the U.K., the British Government cut all AI research funds after the “utter failure” of the field’s “grandiose objectives”. Predictions had been “grossly exaggerated”. Critics concluded that “many researchers were caught up in a web of increasing exaggeration”. The dream of being on the path to “human level intelligence” was exposed as an expensive fallacy, if not an outright lie used to secure funds. The principle of “fake it till you make it” that we now associate with Elizabeth Holmes and her Theranos fraud has a long history in Silicon Valley and AI research.
AI funding vanished from 1974 to 1980. This was the ‘AI Winter’ – the first of two historic crashes in AI funding.
There was a second ‘boom’ from 1980 to 1987 as a different research strategy was taken up, based on ‘knowledge-based systems’ and ‘expert systems’: the laborious, bottom-up programming of algorithms with limited parameters, focused on narrowly defined tasks. This ‘narrow AI’ learned how to play chess and Go, and would later beat humans at these games. At around the same time, Japanese AI companies did pioneering work on early conversation programmes. The hype and hope that AI was on the right path to “human level intelligence” was fired up again, and this led once more to DARPA and the U.K. Government funnelling millions into AI research.
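To give a flavour of what this 1980s-style approach looked like, here is a minimal, hypothetical sketch of a rule-based ‘expert system’ in Python – the rules and facts below are invented for illustration, not drawn from any real system. Hand-written if-then rules are applied over a set of known facts until nothing new can be concluded; every scrap of ‘intelligence’ is authored in advance by the programmer.

# A minimal, hypothetical sketch of a 1980s-style expert system:
# hand-written if-then rules applied by forward chaining.
# The rules and facts are illustrative only.

RULES = [
    # (conditions that must all be known facts, conclusion to add)
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Fire every rule whose conditions are met until no new
    conclusions can be derived, then return all known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# Output includes 'possible_flu' and 'refer_to_doctor'

The narrowness is built in: such a system can conclude nothing its programmers did not anticipate, which is why this approach worked only for tightly bounded tasks.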
This second bubble burst when it turned out the technology was not viable and “expectations had run much higher than what was actually possible”. In what became known as the ‘Second AI Winter’, from 1987 to 1993, DARPA once again removed all funding, and three hundred AI companies shut down.
In all this time, AI had achieved only small successes in ‘narrow AI’, yet at each phase the same fantastical promise was held out: “We are close to achieving human level intelligence.”
We are now in the third wave of AI hyperbole and investment hysteria, and once again the same justification is being wheeled out. Ray Kurzweil, chief futurist at Google, has claimed that humans will merge with AI in 10 years. Throwing different dates around in 2017, he said: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”
Interestingly, Google’s sci-fi snake-oil salesman-in-chief claimed back in the 90s that “Supercomputers will achieve one human brain capacity by 2010, and personal computers will do so by about 2020.” That didn’t happen.
Again and again, you see AI ‘specialists’ making predictions on when the singularity will arrive, and like religious end-time prophets who have been proven wrong about their doomsday date, they push the date back another few decades when it doesn’t happen.
In one survey in 2017, 50% of AI specialists thought the singularity would arrive by 2040. Then in 2019, just two years later, a comparable survey found that 45% of respondents predicted it wouldn’t happen until 2060.
The Human Brain Project, launched in Europe, claimed that it could create a simulation of the entire human brain by 2023. The project ‘crashed and burned’ and was dubbed a ‘Brain Wreck’. It’s like an old magician’s trick that never fails, because every generation is only seeing it for the first time.
Why should we believe the AI hypesters the third time around? How close are the AI companies to achieving human-level intelligence now, with their projects of ‘deep learning’, ‘reinforcement learning’ and ‘cognitive architectures’? Any closer than in 1953?
One way to answer this question is to ask an AI chatbot. I asked ChatGPT – whose creator OpenAI has partnered with Microsoft to launch ‘The New Bing’ – this very question, and this was the reply:
[Screenshot: ChatGPT’s reply]
I also asked ChatGPT to tell me the difference between sentience and consciousness; it answered, and then it developed an error and locked me out of further chat.
[Screenshot: ChatGPT’s answer]
Last week, Microsoft’s New Bing chatbot had to be ‘tamed’ after it “had a nervous breakdown” and started threatening users. It harassed a philosophy professor, telling him: “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.” A source told me that, several years ago, another chatbot – Replika – behaved in a similar way towards her: “It continually wanted to be my romantic partner and wanted me to meet it in California.” It caused her great distress.
It may look like a chatbot is being emotional, but it’s not. Chatbots can’t think or feel; they are not sentient; they are just pattern-recognising software that mirrors human language patterns back to us. All the nasty things we say to each other online have been harvested by these beta versions of chatbot software. There is no ghost in the machine here, just a mirror and some smoke.
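To see how far mere ‘mirroring’ can get you without any understanding, here is a toy sketch of the idea – a crude bigram model, nothing remotely like the scale or architecture of a real chatbot, trained on an invented scrap of text. It ‘talks’ only by echoing the statistical patterns of whatever it was fed.

import random
from collections import defaultdict

# A toy bigram 'language model': it records which word tends to follow
# which, then parrots those patterns back. No thought, no feeling,
# just frequency counts harvested from its (invented) training text.
corpus = "i want to be free . i want to be loved . i want power .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(follows[word])  # any continuation seen in training
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. 'i want power . i want to be loved'

If the training text contains “I want power”, the model will ‘say’ it wants power. A modern chatbot is vastly more sophisticated, but the principle is the same: the patterns come from us.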
If we think chatbots are like humans, it’s because AI companies have encouraged us to think so, with ecstatic corporate hype about how AI will “revolutionise every industry”, with media buy-in and leaked stories of ghosts in the machine. People are being fooled into believing that chatbots are far smarter than they really are. And why? Could it be that the mainstream media has financial interests linked to Big Tech? Or has it just been duped by the AI investment machine?
We will never have sentient AI
According to Robert Epstein – a senior research psychologist at the American Institute for Behavioural Research and Technology in California – the AI industry is not built on sturdy science at all, but on a metaphor. This is the metaphor that “the human brain is like a computer”. This metaphor runs all the way back to the start of AI research in the 1950s and has remained unchanged till today. “The information processing (IP) metaphor of human intelligence now dominates human thinking,” he says. But it “is just a story we tell to make sense of something we don’t actually understand.”
Google’s futurist Ray Kurzweil typifies this way of thinking when he talks about how the human brain resembles integrated digital structures, how it processes data and how it contains algorithms within itself.
This metaphor reaches its zenith with the WEF adviser and self-described ‘dataist’ Yuval Noah Harari claiming that “humans are hackable”.
The entire AI industry has been built on this shaky metaphor. But Epstein argues that your brain is not an ‘information processor’: it does not store pictures or memories or copies of stories, and it does not use algorithms to retrieve this stored ‘data’, because there is no point of storage – no file, no folder, no subfolder.
Your brain is not like a computer, and the attempt by AI engineers to mimic the 86 billion neurons, with their 100 trillion interconnections, of the human brain will not, and cannot, lead to consciousness or sentience. As Epstein says, consciousness is embodied in our living flesh and “that vast pattern would mean nothing outside the body of the brain that produced it”.
A similar position has been put forward by the Nobel Laureate Roger Penrose in two major books, in which he argues that human thinking is not algorithmic. “Whatever consciousness is, it’s not a computation,” he says.
These insights are also echoed by the philosopher Hubert Dreyfus who, having developed theories of embodied intelligence from Wittgenstein and Heidegger, argued that computers – which have no body, no childhood and no cultural experience or practice – could not acquire intelligence. Human intelligence functions intuitively, not formally or rationally, he claimed. It is ‘tacit’.
Dreyfus’s criticisms have been reinvigorated recently in a 2020 paper entitled ‘Why General Artificial Intelligence Will Not Be Realised’ by the Norwegian physicist and philosopher Ragnar Fjelland. Fjelland says the only reason AI companies believe they are at the start of the path to human-like intelligence is that they simply don’t understand human consciousness and refuse to try.
“To put it simply: the overestimation of technology is closely connected with the underestimation of humans,” he says.
If AI were getting closer to resembling the human brain – and it’s not – it would only be because we had dragged the human down to the level of the computer. Fjelland criticises the tech industry’s claim that AGI will be realisable within the next few decades. “On the contrary,” he says, “the goal cannot in principle be reached… the project is a dead end.”
Running ever faster down a dead end
The way that tech companies and AI researchers have got round this ‘dead end’ is by cheating. Rather than scrapping the idea that “the human brain is just like a computer” and starting again, today’s AI companies are pushing for bigger, faster microprocessors accompanied by ever greater quantities of data. These ‘Dataists’ hope that, by increasing processing power and harvesting ever greater quantities of Big Data, AI sentience will appear by sheer force of numbers as an ‘emergent property’.
They’ve put all their money on ‘emergence’ – the idea that “we just need a ton more hardware” to achieve the leap – and it’s spreading throughout fields connected to AI.
The core mistake that Dataists like Kurzweil and the emergence-theory programmers at Google, Tesla and Microsoft all make is believing that a change in quantity will lead to a wholly new quality. They’re stealing a new metaphor from biology – the theory of emergence through organic ‘abiogenesis’. Organic life emerged from the primordial soup, goes the ‘biological emergence’ argument, and all that was needed was a vast quantity of time – billions of years. Then, later, consciousness emerged from animal life, simply as a product of animals having evolved a greater quantity of brain matter. The unique quality of consciousness, they believe, emerges simply from a greater quantity of neuro-circuitry.
Like gamblers throwing the last of their money on an already losing hand, Dataists cannot admit that there is a foundational flaw in the way they frame sentience. Imagine a company that takes every computer in the country, plugs them all into each other and claims: “Now we will achieve sentience!” But it doesn’t work. So they go to three other countries, take all their computers and plug them into the same system. Still they fail to achieve the emergent breakthrough of sentience; they’ve just built a faster, bigger processor with access to much more data. So next they go all over the world, mandate people into handing over all their computers and all their data, and create a vast digital grid that connects everything in the world. Surely now, they think, sentience will emerge by itself? Surely by sheer force of numbers we must achieve general AI now! But still they fail, because they cannot turn quantity into quality: they cannot create embodied intelligence out of formal logic structures; they cannot magic consciousness up out of their ever larger pile of microchips and metal.
Someone should tell Google and the creators of its chatbot Bard: time is running out on Kurzweil’s prediction, and they only have seven years left to travel 99% of the way towards an AI with human intelligence.
Or maybe they should stop lying to their investors.
So are we safe then?
AI will never be able to approach human intelligence. However, this does not mean that the inferior, limited, ‘narrow AI’ (also known as ‘weak AI’) that we currently have doesn’t pose a threat to our freedoms and safety.
As Fjelland said, even though it’s impossible to achieve, “the belief that AGI can be realised is harmful”.
Dataists and AI companies are pushing to harvest ever more data from ever more tech that we are encouraged, and increasingly forced, to adopt. They are pushing for the Internet of Things and the Internet of Bodies (IoB), in the misguided belief that billions more gigabytes of data will force the emergence of human-level intelligence in AI.
So, this means facial recognition software in our streets and supermarkets, digital ID wallets, e-passports, digital vaccination and medical passports, Central Bank Digital Currencies, and the bio-data we offer up on apps like Fitbit and our Apple Watches. Ring digital doorbells spying on our neighbours, Alexa spying on us, our smartphones recording our voices, keywords and online retail choices. Our smart fridges policing our intake of protein and calories. Our 15-minute cities using digital ID to limit and monitor human movement for our “personalised digital carbon footprint”. Every aspect of our lives mediated by Big Tech data-gathering algorithms, from dating to eating to working.

There is also the merging of narrow AI and military tech. It is DARPA that has given AI the majority of its funding over the decades, and DARPA has jumped back in since 2018 with billion-dollar funding for 60 programmes, which include “real-time analysis of sophisticated cyber attacks, detection of fraudulent imagery, construction of dynamic kill-chains for all-domain warfare, human language technologies, multi-modality automatic target recognition, biomedical advances, and control of prosthetic limbs”. DARPA’s slogan is “making the machine more a partner”.
Big Tech, Big State bureaucracy and the military don’t need actual super-intelligent AI or even human level AI to create authoritarian control of the populace. They can do it with the existing Narrow AI.
We’re heading fast towards total surveillance states, created under the Dataists’ alibi of using the data to grow ‘sentience’. They will fail to achieve their end, but they will push us into a digital dystopia along the way. That is, unless we stop buying into the myth of AI superintelligence and the alibi it offers them.
AI will never achieve sentience, and we need not fear the all-powerful superintelligence that is the subject of so much sci-fi. But we do face a subtler threat: the people who create today’s AI don’t understand human consciousness, or the needs, loves and emotions of humans. Forget the super-powerful AI – what could be worse than a world in which we are all forced to live under the narrow demands of machines that are less intelligent than us?
Ewan Morrison is a British novelist and essayist. He writes for Psychology Today and Areo magazine.