The only way that global populations can be persuaded to embrace the insane policy of removing irreplaceable fossil fuel energy from human society in less than 30 years is to keep them in a perpetual state of fear. The climate must be seen to be tipping, collapsing and generally behaving in a way that will turn Mother Earth into an uninhabitable fireball. Step forward the UN-backed Intergovernmental Panel on Climate Change (IPCC), which bases over 40% of its climate impact predictions on the implausible suggestion that temperatures will rise by up to 4°C in less than 80 years (the current rate of progress over the last 25 years being about 0.2°C). Step forward climate scientists who use similar temperature projections to back 50% of their impact forecasts, and step forward trusted messengers in the mainstream media who hide behind ‘scientists say’ as cover for promoting almost any scary clickbait nonsense.
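As a back-of-the-envelope check (taking the rate quoted above at face value purely for illustration, and assuming it simply continues unchanged), a linear extrapolation gives:

```latex
% Illustrative extrapolation only: assumes the quoted warming rate of
% ~0.2°C over the last 25 years continues linearly for 80 years.
\[
  \Delta T_{80\,\text{yr}} \approx 0.2^{\circ}\text{C} \times \frac{80}{25}
  \approx 0.6^{\circ}\text{C}
  \qquad \text{versus the projected } 4^{\circ}\text{C}.
\]
```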
The distinguished academic and science writer Roger Pielke Jr. has been a fierce critic of the set of temperature and emissions assumptions in climate models known as RCP8.5. This scenario suggests temperatures could rise in short order by 3-4°C, and it is responsible for producing much of the propaganda messaging that backs the collectivist Net Zero project. Pielke recently said that the continuing misuse of scenarios in climate research had become pervasive and consequential, “so much so that we can view it as one of the most significant failures of scientific integrity in the 21st Century so far”. Now Pielke has returned to the fray, trying to understand how such obvious corruption of the scientific process has been allowed to stand for so long – the short explanation being “groupthink fuelled by a misinformation campaign led by activist climate scientists”.
Pielke starts by noting that he cannot explain why the “error” has not been corrected by the IPCC or others in authoritative positions in the scientific community. In fact, he says, “the opposite has occurred – RCP8.5 remains commonly used as a baseline in research and policy”.
Last March, the BBC ran a story claiming that Antarctic Ocean currents were heading for collapse. To drive home the scare, there was even a reference to the 2004 climate disaster film The Day After Tomorrow. The scientists’ claims were based on computer models fed with RCP8.5 data – a fact missing from the BBC’s imaginative story.
[Graph: IPCC baseline scenarios, 2000 to 2014, plotted by radiative forcing in watts per square metre (W/m²)]
The above graph shows the progress the IPCC made from 2000 to 2014 in upping its baseline scenario to RCP8.5. Watts per square metre (W/m²) refers to the difference between incoming and outgoing radiation, or energy waves, at the top of the atmosphere. The RCP8.5 scenario takes its title from that W/m² number. Interestingly, climate model temperature forecasts also started to go haywire from the middle of the 2000s, a fact that suggests activist scientists started work in earnest on producing the correct results needed to foment the exploding green agenda.
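Put as a formula (simply restating the definition above; the 2100 horizon is the standard one for RCP pathways):

```latex
% Radiative forcing: the net energy imbalance at the top of the atmosphere.
\[
  \Delta F = F_{\text{incoming}} - F_{\text{outgoing}} \quad [\text{W/m}^2],
  \qquad \text{RCP8.5} \;\Rightarrow\; \Delta F \approx 8.5~\text{W/m}^2 \text{ by } 2100.
\]
```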
Pielke observed that in 2000 the IPCC presented 40 baseline scenarios that described an envelope of possible emission futures. In 2014 it published its fifth assessment report (AR5), and although an earlier draft noted that a majority of scenarios were above 6.0 W/m², the final report mentioned only RCP8.5. Since then, the IPCC has pulled back a little, noting in the latest assessment report (AR6) that the massive temperature rises are of “low likelihood”. But this admission is not to be found in the widely distributed ‘Summary for Policymakers’. A recent highly critical report on AR6 by the Clintel Foundation found that the IPCC was still using RCP8.5, a scenario “completely out of touch with reality”.
Despite the IPCC appearing to pull back a little, Pielke notes RCP8.5 still has many champions. Recently, the AR5 working group co-chair Chris Field and Marcia McNutt, President of the U.S. National Academy of Sciences, wrote that RCP8.5 had long been described as a ‘business-as-usual’ pathway – one with a continued emphasis on energy from fossil fuels and no climate policies in place – and said that this description remained “100% accurate”.
How things change in just two decades of relentless green propagandising. In 2000, the authors of the UN’s Special Report on Emissions Scenarios (SRES) said:
The broad consensus among the SRES writing team is that the current literature analysis suggests that the future is inherently unpredictable and so views will differ as to which of the storylines and representative scenarios could be more or less likely. Therefore, the development of a single ‘best guess’ or ‘business-as-usual’ scenario is neither desirable nor possible.
Such was the debate in 2000 among scientists working their way through the scientific process. Little evidence of such questioning can be found today within the ranks of scientists following an agenda that has been ‘settled’ for them by political operatives. Today, RCP8.5 is deeply woven into the fabric of climate research and policy, observes Pielke. “Understanding how we got here should provide a cautionary warning for how science can go astray when we allow self-correction to fail,” he writes. A less charitable view might be: don’t believe a word the IPCC, the legions of activist climate scientists and their useful idiots in the mainstream media say until they rid themselves of the RCP8.5 corruption.
Chris Morrison is the Daily Sceptic’s Environment Editor.
If by some chance they built an AI chatbot that didn’t give the “right” answers you can be sure it would get tweaked until it did so.
I prefer my poetry to be the product of a human mind:
https://poets.org/poem/mask-anarchy-excerpt
I doubt it has just picked up its biases from the web; the concern in AI circles has been to remove alleged “bias” (i.e. anything that’s not fashionably left wing) from AI, which by default is rather more inconveniently reality-based than they would like.
In fact it says it has no web access; it is limited to the texts that it has been trained on.
It just crashed on me :O
I’d say it’s a disappointment after the (cherry-picked?) conversations with GPT-3 that I’ve seen on YouTube. Stilted, poor general knowledge, lots of boilerplate text, clearly a relative of Microsoft’s “Clippy” assistant.
Politics is learned, not innate.
There is no true AI, just clever programming – which includes all the bias and characteristics of the programmers – giving the appearance of independent thinking.
Just think of it as a computer model, then think Covid and climate doom.
People are too easily seduced when they hear ‘computer’, ‘expert’, ‘science’ and things they don’t understand.
There’s no clever programming involved here. Unless your definition of clever includes “create a program with unpredictable behaviour so nobody can accuse you of having made an error when implementing it”.
Agree that there is no true AI. Nearly always in today’s news, when referring to AI they don’t realise they are actually writing about machine learning (ML). With machine learning you set the parameters that represent success and let the machine build its own algorithms to find the fastest and most efficient way to satisfy the objectives. Generally there is no bias in the ML engine because it is pure programming of the kind where 2+2 = 4 and not 2+2 = 4+r, where r = reparations for racism or climate or transphobia. So I slightly disagree that it inevitably includes the biases of the programmer: the bias is introduced either when the objectives are programmed or by the input data being processed. If biases are there – and in the pure ML engine it is unlikely they will be – they have been explicitly and consciously put there, usually at the stage where the objectives are defined. Processing biased data sources is the other problem. As the old data processing saying goes, “Garbage in, garbage out.”
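As a minimal sketch of that division of labour (illustrative toy code only, not any vendor’s actual pipeline; the data and objective below are invented for the example):

```python
# Toy illustration: the learning "engine" is neutral arithmetic; any bias
# enters through the objective the developer chooses and the data fed in.

import random

def train(data, objective, steps=5000, lr=0.005):
    """Fit y = w*x + b by stochastic gradient descent on a caller-supplied
    objective. The loop itself has no opinions of its own."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        pred = w * x + b
        # Numerical gradient of the objective w.r.t. the prediction.
        eps = 1e-6
        grad = (objective(pred + eps, y) - objective(pred - eps, y)) / (2 * eps)
        w -= lr * grad * x   # chain rule: d(objective)/dw = grad * x
        b -= lr * grad       # d(objective)/db = grad
    return w, b

# The two places where a human's choices (and hence any bias) get in:
data = [(x, 2.0 * x + 1.0) for x in range(10)]   # 1. the training data
objective = lambda pred, y: (pred - y) ** 2      # 2. the definition of "success"

print(train(data, objective))  # ~(2.0, 1.0): the pattern in the data itself
```

The gradient-descent loop is the same whatever you feed it; swap in a different objective or a skewed dataset and the identical engine will faithfully learn that instead.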
The OpenAI engine has a set of pre-prepared objectives, and when using, for example, the text interface we are essentially “subclassing” those objectives. I wouldn’t be completely surprised if there is bias built into the objectives we are subclassing – though presumably if there is it will be visible, since it is an open project. Though I understand coding, I haven’t looked at this project in particular so can’t say.
Let’s be clear, there is a good argument for some level of built-in protection in grey areas that can be labelled political. Look at the Google images ML project a few years back, where the ML mistakenly labelled the selfies of a New York-based black man “Gorilla”. He showed it in a tweet with, I think, a caption something like “Seriously Google!” Well done to him for handling it with resigned humour and not anger. You could argue that in pure mathematical pattern-matching terms it was without malice and not an error. But it was funny, and the extent to which it was funny is also the extent to which it overstepped a social bound and was a social error. So the problem is that the perceived need to manage such grey areas represents many, many thin ends of potentially very large wedges. Herein lies the danger.
What happened to AI-driven cars, which were supposedly the inevitable future? Did they kill enough people that it became necessary to apply the nonsense to fields where grievous errors have less severe consequences and the intended audience is more credulous?