There is little evidence of the climate becoming more adverse or more harmful. Small changes in extreme rainfall or storminess, for example, are not discernible above the noise of natural variability in the weather.
However, as everyone knows, climate scientists say we should still be frightened, because of what they learn from their climate models. These giant computer simulations of the atmosphere and ocean are at the centre of the global warming scare, and are behind every claim that droughts will get worse, that storms are caused by SUVs and that we are all going to hell in a handcart. We are told, endlessly, that we should trust these prognostications because the models are ‘based on basic physics’.
But what if that wasn’t strictly true? What if climate models were actually junk? A new paper from Net Zero Watch shows, somewhat alarmingly, that this is indeed the case.
The author, Willis Eschenbach, is an experienced computer programmer and a long-time writer on all things climate change. His paper, entitled Climate Models and Climate Muddles, reports what he found when he examined the computer code inside NASA’s Model E climate simulation. It is positively astonishing. While the programmers have indeed attempted to base the model on basic physics, time and again they have run into problems. For example, Eschenbach describes the problems they have had with polynyas, pools of fresh meltwater that sit on top of polar sea ice. These are important in determining how much of the sun’s heat is reflected straight back out to space, and thus how strongly the polar regions warm.
Being made of fresh water, polynyas should of course freeze once the temperature falls below zero. However, the code reveals that, in the artificial world of the simulation, they failed to freeze even at temperatures far colder! Imagine that – fresh water that refuses to freeze. Instead of working out what was wrong with the physics, however, NASA’s scientists simply inserted code that forces the water to freeze if the temperature falls too far below zero. So, in this area at least, the model is based not so much on physics as on fudge.
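The pattern is easy to sketch. What follows is a toy illustration in Python, not NASA’s Fortran – every name and threshold below is invented for the purpose:

FORCED_FREEZE_TEMP_C = -10.0  # hypothetical cut-off, not the model's real value

def physics_step(polynya_temp_c: float, is_frozen: bool) -> bool:
    """The 'basic physics': an energy-balance calculation that ought to
    freeze the water once the temperature drops below 0 C but which,
    per the paper, sometimes fails to do so."""
    # ... energy-balance code elided ...
    return is_frozen

def step_with_fudge(polynya_temp_c: float, is_frozen: bool) -> bool:
    is_frozen = physics_step(polynya_temp_c, is_frozen)
    # The patch: rather than fix the physics, force the water to
    # freeze once the temperature is far enough below zero.
    if polynya_temp_c < FORCED_FREEZE_TEMP_C:
        is_frozen = True
    return is_frozen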
The problem with the polynyas is just one example. Eschenbach shows that in another area of the model scientists were forced to insert code that dealt with the problem of cloud cover becoming negative. Clearly, if a model can have less than 0% cloud in some parts of the world, the physics on which it is based is very faulty.
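Again, the fix as described treats the symptom rather than the cause. Schematically (illustrative Python, not the model’s actual code):

def clamped_cloud_fraction(raw: float) -> float:
    # The fudge: a cloud fraction below 0% is physically impossible, but
    # instead of tracing the upstream error, the value is silently
    # clamped into range.
    return min(1.0, max(0.0, raw))

def checked_cloud_fraction(raw: float) -> float:
    # What a physics-first approach would do instead: refuse to run with
    # an unphysical value, forcing the underlying bug to be found.
    if not 0.0 <= raw <= 1.0:
        raise ValueError(f"unphysical cloud fraction: {raw}")
    return raw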
In recent years we have seen the difficulties politicians get into when they trust scientists’ models unquestioningly. We are still dealing with the consequences of SAGE’s eccentric guidance and Professor Ferguson’s prognostications of doom from 2020, and we still do not fully understand the cost to society.
But we do understand that the trust that the Government placed in the experts at the start of the pandemic was wholly mistaken. If the terrible dark cloud of the Covid disaster is ultimately to have a silver lining, then it must come from our politicians learning the necessary lessons: that they have to question the guidance of experts and that they should seek a variety of opinions. And above all, that they must instinctively distrust anything that comes out of a computer model.
Andrew Montford is Director of Net Zero Watch.
Our Governments have been bought and paid for. It’s now up to the people to refuse to follow this garbage while we still can.
Too much detail for most. Media and politicians want the desired headlines and are not bothered about being able to defend the calculations behind them.
The BBC/Guardian have the template “hottest MMM in PPP since records began” ready-made, then each year await instructions from the Met Office on what the cherry-pickers have chosen to fill in the gaps, initiated by suitable torturing of the data and then hiding the fudge. For example, “hottest day in Lincolnshire since records began” derives from “hottest minute recorded by a temperature sensor just after a jet took off from an adjacent tarmac runway at RAF Coningsby”. Or “hottest year in the world since records began” derives from putting a mathematical “best fit straight line” between the “average global temperature” at the end of the Little Ice Age and the present. Don’t even get started on modelling, or kriging, or “adjustments”.
I don’t understand the criticism. Climate models are there to model the climate not the behaviour of water. The fact that water freezes at zero degrees centigrade is an input to the model not a conclusion. So you would expect some code on the lines of “if temperature is zero degrees centigrade or below then water freezes”.
This relates specifically to polynyas, not water in general. Polynyas are a special case for water in Arctic locations as they are fresh water not saline, and hence freeze at a different temperature from the sea. The polynyas are important because they are better reflectors of solar radiation than ice.
And they could be so clean that the freezing point is well below zero Celsius – see my comment above.
Understood – but so what? My point is that the temperature at which water freezes in Polynyas is an input to the model – not something it would be expected to predict. So you are bound to get some code on the lines of “if temperature is X then water freezes”.
You don’t understand code, do you?
You code what the business logic dictates. If the logic is faulty, the code is faulty. If the variables are many to many, the code is unlikely to be accurate. If the entire process or system to be coded is unknown or in flux your code will be inaccurate.
I work in IT and I see this all the time – GIGO.
I have offered Mann and others free code reviews. It would take 1 day to find that the code is junk and fails an audit. 1 day.
… and the programmer is fired.
You forgot the most important bit – the need for a scapegoat.
It is never the fault of the business that the development team produced junk. Changing the requirements 11 months into a 1 year project is a failure of the business analyst – not the management.
I watched from the sidelines as colleagues attempted and failed to capture the slippery project requirements of management teams. They took respectable commercial ERM systems and mangled them to try to meet the requirements until they could not work. Seriously: the management asked the dev team to create an internal currency unit with which to manage the business.
My point is that I can’t see anything wrong with the logic – nothing to do with the code (I worked in IT all my life).
There’s obviously nothing wrong with the informal logic, just with its implementation. Assuming (principle of charity) the author wanted to make a statement which makes sense, there must be some other code which ought to have caused polynyas to freeze below a certain temperature but didn’t. Instead of fixing that, someone inserted additional special-case code to hide the error. I remember seeing something similar in a device driver (a NIC) in the past, annotated with a comment which ran somewhat like this:
/*
should really have this value,
no time for this right now, just set it
*/
I’ve been coding half my life, preceded by various roles in modelling (as it is now called) electronic circuits and physical systems, including trying to simplify complex non-linear poorly-characterized weather phenomena (but not climate) in order to create engineering solutions to real-world problems. I have also dabbled in creative coding, to produce pretty imagery, just for fun, though with some basis in physics and maths (think Mandelbrot and Julia, or convection and fluid flow). Others, more skilled than me, dabble in games, aircraft simulators and suchlike.
The crucial point here is the distinction between coding and modelling.
I can easily code something that creates a fascinating pretty picture, or gives me the answers that confirm my preconceptions about the behaviour of some model. The coding may well be “correct”, in that it correctly predicts the behaviour of that model, or gives pretty imagery, but that doesn’t mean that the model is correct, or that it sufficiently accurately complies with, or is even derivable from, the “laws of physics”. The fundamental test is whether the predictions of the model are compatible, to an acceptable degree of accuracy, with reality. Ideally this is established by experiments (be they “real-world” or computational) which are capable of discrediting the model; experiments which can only corroborate a model are of limited value in assessing its correctness, though they may be useful in acquiring funding for “further research” if the results are consistent with one’s paymasters’ wishes.
NASA/GISS and other climatologists, indeed most practitioners of “climate science”, seem to have lost their way in the last few decades, blurring the distinction between model and code, and between data and predictions. The situation is made worse by the ease with which modern coding can introduce an unlimited number of “parameters” into a model, inviting an attack of “von Neumann’s elephants” and producing a model that fits past data perfectly yet has zero competence in predicting the future.
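To make the “elephant” point concrete, here is a toy sketch in Python – synthetic numbers, nothing to do with any real climate code. Ten data points fitted with ten free parameters match the past essentially perfectly, then produce garbage for the future:

import numpy as np

rng = np.random.default_rng(42)
years = np.arange(10.0)
# Synthetic "temperature record": a small trend plus weather noise.
record = 0.02 * years + rng.normal(0.0, 0.1, size=years.size)

# Ten data points, ten free parameters: a perfect fit is guaranteed.
x = years - years.mean()  # centre the axis so the fit stays well conditioned
coeffs = np.polyfit(x, record, deg=9)

in_sample = np.polyval(coeffs, x)
print("max in-sample error:", np.max(np.abs(in_sample - record)))  # essentially zero

# Now "predict" the next five years: the elephant wiggles its trunk.
future_x = np.arange(10.0, 15.0) - years.mean()
print(np.polyval(coeffs, future_x))  # huge, unphysical swings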
To mis-quote Richard Feynman: ‘It doesn’t matter how financially attractive your model is; it doesn’t matter how smart you are at attracting big money. If it doesn’t agree with experiment, it’s wrong.’
I am sure you are very well qualified to talk about the use and abuse of models, and if there were space and time we could discuss this at length. But I don’t see the relevance to my point that the freezing point of water in polynyas is an input to the model, not a conclusion or prediction. Perhaps you could explain?
I’ve already explained this (to the people you’re trying to confuse, mind you, not specifically to you). There are obviously two sets of code involved here, one which should cause simulated Polynyas to freeze but doesn’t, and some special-case code for hiding this error.
Who are you really MTF?
Every single time, no matter the article, you’re here with your contrary posts. There’s a pattern. Strange…

Just a retired bloke who thinks it is more useful and interesting to comment on a forum where I largely disagree with the content than to be part of an echo chamber. Is that so strange?
That really does not make any sense, MTF. Water freezing at zero degrees is a basic part of physics, and physics is what weather is at its heart. A model which cannot, without a fudge, freeze water at zero degrees is completely and utterly useless: how could it possibly predict snow, or even frost? It wouldn’t know the water is supposed to now be snow!
What makes you think it was a fudge? I don’t expect the model to model the molecular structure of water. It just has to recognise when water freezes, and, as described, it appears the code does just that.
And as they’re supposed to model a climate catastrophe, one would also expect some code on the lines of “if the outcome is not armageddon, print ARMAGEDDON!!! instead”. That’s exactly the same kind of “supplant what happened but shouldn’t have with what was supposed to happen instead” correction. It’s also the natural modus operandi of incompetent programmers working with a codebase nobody understands. Must be a little like AIX development in India. Not exactly functional or brilliant, but it prints the proper error numbers in the right situations and – boy – it’s really cheap!
The “print ARMAGEDDON!!! instead” reminds me of the comments in the Harry Read Me file back in the days of Climategate. The poor programmer trying to understand the code was struggling to comprehend what all the undocumented or mystery code was trying to do: legitimately sanitise some pretty unreliable weather station data, or mischievously force the code to create the politically required printout.
The point is that the model does not work.
If the model worked then water would freeze; according to the model it doesn’t, so there are problems with the model.
But the code that says water freezes when it gets below zero is part of the model! A climate model is not expected to predict the freezing point of water from first principles.
Negative cloud cover seems like a great way of causing global warming. So it is anthropogenic after all!
This “Post-Normal Science” comes about when something other than normal science is deemed necessary for public needs. In other words, the facts are very uncertain but decisions for political purposes are URGENT. So instead of “science” we get “advocacy”.

Modelling is really just “virtual science”, corrupted by the need to provide excuses for a political agenda, namely Sustainable Development, which requires there to be a climate crisis or it all falls apart. The global climate cannot be subjected to controlled experiments, and most of the “science” is about projections, in other words the unknowable. None of this would matter too much if it were just inquiry, but whole political agendas involving astronomical costs, interference in the global economy, massive lifestyle changes and lower standards of living rely on this virtual or “official science”. Science cannot be a dictatorship used for political and social goals, and models are NOT even science, nor are they evidence of anything.
A wee bit of chemistry is appropriate too. Absolutely pure water won’t freeze until the temperature drops down to around -40 °C (or indeed -40 °F; same number). Perhaps they assume that there are enough particles to allow freezing to occur – but they don’t know if the air is “dirty” enough to make it happen. So, they’re guessing – and ought to specify it as an assumption.
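For anyone wondering why the two scales agree at that point: set C = F in the conversion C = (F − 32) × 5/9, which gives 9C = 5C − 160, i.e. C = −40.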
The air doesn’t have to be dirty, just moving would be enough. The tiniest ice crystal landing in such a pool will trigger very rapid freezing.
The narrative from the warm mongers is that surface pollution in the Arctic is changing the albedo of the ice and is speeding up warming. Where does this ultra pure water come from?
It also tends to snow in the Arctic and even one snow flake will trigger instant freezing of the supercooled water.
In my view, if a model cannot accurately predict the beginning and end of a known event then it is less than useless. Pantsdown never predicted anything accurately in his life; remember all the dead sheep in pyres all across our fields?
When he presented his falsified report (courtesy of Good Old Uncle Bill) he should have been told to go away and run it to predict the course and outcome of the 1969 flu epidemic. The beginning, the end and the figures are all known quantities. If his model cannot be accurate to within ±5% then it is rubbish.
This approach should be mandatory for all modelling. The model has to be able to prove it can do what its Soon To Be Millionaires claim it can do.
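As a sketch of what such a mandatory back-test might look like – hypothetical Python, where the model object, its run() interface and the data are all stand-ins, with the ±5% band from above:

TOLERANCE = 0.05  # the suggested +/-5% acceptance band

def passes_backtest(model, initial_conditions, observed: list[float]) -> bool:
    # Run the model forward from the known starting conditions of a
    # historical event (model.run is a hypothetical interface) and
    # require every prediction to land within tolerance of the record.
    predicted = model.run(initial_conditions, steps=len(observed))
    return all(
        abs(p - o) <= TOLERANCE * abs(o)
        for p, o in zip(predicted, observed)
    )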
Also, all modelling software should be fully Open Source, so it cannot hide fudges: someone out there will find them. Open Source also has the advantage that bugs are found faster and fixed properly. There is nowhere to hide dodgy code in Open Source.
Given what we know now, it seems obvious that Ferguson was chosen precisely because all his previous forecasts were grossly inaccurate. His capacity to produce ever-worse worst-case scenarios was exactly what TPTB needed to “scare the pants off everybody”.
Start this code back at historical conditions and predict today. How does it compare? Start removing parts of the code and see if the predictions improve.
Earth’s climate is a very complex non-linear system with innumerable, constantly changing inputs, and it is impossible to duplicate in a computer program. It is equally impossible to duplicate any facet of the earth’s climate in a lab, because of the number and size of the variables. All computer-model functionality should be verifiable in a lab, but with climate models that is impossible, so building a computer model that accurately predicts the earth’s climate is a hopeless task.
I suspect Eschenbach’s investigation is valid concerning fudge factors and “tuning”. However, when I tried to research other critical sources, I came up with a corroborative paper by Judith Curry: https://judithcurry.com/2021/10/06/ipcc-ar6-breaking-the-hegemony-of-global-climate-models/ – but then I looked at this AMS report, which is what the mainstream will read, saying they’ve now fixed the GISS Model E: https://journals.ametsoc.org/configurable/content/journals$002fclim$002f19$002f2$002fjcli3612.1.xml?t:ac=journals%24002fclim%24002f19%24002f2%24002fjcli3612.1.xml I’d be most interested to see Eschenbach’s response to this particular article.