On Sunday’s BBC Politics, Luke Johnson asked for evidence that the recent Dubai flooding was due to climate change. Chris Packham glibly responded: “It comes from something called science.”
That reply simply highlighted his own poor grasp of science. The real issue is his confusion, shared by many others, over what scientific modelling is and what it can do. Modelling is needed in any area of science dealing with systems more complex than a single atom – which, in practice, means everything.
My own doctoral research was on the infrared absorption and fragmentation of gaseous molecules using lasers. The aim was to quantify how the processes depended on the laser’s physical properties.
I then modelled my results, to see whether theory correctly predicted how my measurements changed as the laser pulse was varied. Computed values were compared with observed ones across a range of different conditions.
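To make that concrete, here is a minimal sketch of the approach – in Python, with a toy functional form and made-up numbers rather than my actual laser data – showing that the test is agreement across a whole parameter sweep, not a match at a single point:

```python
# Illustrative sketch only: testing a model by checking whether it reproduces
# the *variation* of an observable as an experimental parameter is swept.
import numpy as np

def model_yield(pulse_energy_mJ, k=0.8):
    # Hypothetical theoretical prediction: fragmentation yield rises
    # non-linearly with laser pulse energy (toy functional form, not real theory).
    return 1.0 - np.exp(-k * pulse_energy_mJ)

pulse_energies = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # the swept parameter
observed = np.array([0.33, 0.57, 0.79, 0.95, 0.99])     # made-up 'measurements'
predicted = model_yield(pulse_energies)

# The test is agreement across the whole sweep, summarised here by an RMS error.
rms_error = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMS deviation across the sweep: {rms_error:.3f}")
```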
The point is that the underlying theory is tested against the variations it predicts. The same applies – on steroids – to climate modelling, where the atmospheric system is vastly more complex. Climate models are initialised to agree with observations at some starting point and are then run forward to produce future projections. Most importantly, the models' track record in predicting actual temperature observations is very dubious, as Professor Nicola Scafetta's chart below shows.
For the climate sensitivity – the amount of global surface warming that will occur in response to a doubling of atmospheric CO2 concentration over pre-industrial levels – there's an enormous range of projected temperature increases, from 1.5°C to 4.5°C. Put simply, that range fits everything – and so tells us almost nothing about the underlying theories.
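A quick back-of-envelope sketch shows why such a wide range is so uninformative. It assumes the standard simplification that equilibrium warming scales with the logarithm of the CO2 concentration ratio; the numbers are purely illustrative:

```python
# Back-of-envelope sketch of the 1.5–4.5°C sensitivity range, assuming the
# common logarithmic simplification: dT = S * log2(C / C0), where S is the
# warming per doubling of CO2.
import math

def warming(co2_ratio, sensitivity_per_doubling):
    return sensitivity_per_doubling * math.log2(co2_ratio)

for ratio in (1.5, 2.0):                 # a 50% increase and a full doubling
    low  = warming(ratio, 1.5)           # bottom of the quoted range
    high = warming(ratio, 4.5)           # top of the quoted range
    print(f"CO2 x{ratio}: projected warming {low:.1f}–{high:.1f} °C")
```

For a doubling, the projection spans a factor of three – anything from mild to severe warming is 'consistent with the models'.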
That’s a worrying problem. If the models can’t be shown to predict the variations, then what can we say about the underlying theory of manmade climate change? But the public are given the erroneous impression that the ‘settled science’ confirms that theory – and is forecasting disastrously higher temperatures.
Such a serious failing has forced the catastrophe modellers to (quietly) switch tack to 'attribution modelling'. This involves picking some specific emotive disaster – say the recent flooding in Dubai – then finding some model scenario which reproduces it. You then say: "Climate change modelling predicted this event, which shows the underlying theory is correct."
What’s not explained is how many other scenarios didn’t fit this specific event. It’s as if, in my own research, I had simply picked one observation, scanned through my modelling until I found a fit, and then declared: “Job done, the theory works.” That would be scientifically meaningless. What’s happening is the opposite of a prediction: working backwards from an event and showing that it can happen under some scenario.
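A toy illustration of the problem (hypothetical numbers, and a deliberately content-free 'model'): generate enough scenarios and something will always match a chosen event after the fact, so finding a match demonstrates nothing about the underlying theory:

```python
# Toy illustration: post-hoc scenario selection is not prediction. With enough
# scenarios, *something* will match the observed event even if the 'model'
# contains no physics at all.
import random

random.seed(1)
N_SCENARIOS = 1000
observed_rainfall_mm = 250          # a made-up 'extreme' event

# A deliberately meaningless 'model': scenarios are just random draws.
scenarios = [random.uniform(0, 300) for _ in range(N_SCENARIOS)]

matches = [s for s in scenarios if abs(s - observed_rainfall_mm) < 10]
print(f"{len(matches)} of {N_SCENARIOS} meaningless scenarios 'reproduce' the event")
# Selecting a match *after* the event tells us nothing about predictive skill.
```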
My points on the modelling of variations also apply to the work done by Neil Ferguson at Imperial College on catastrophic Covid fatalities. The public were hoodwinked into thinking ‘the Science’ was predicting it. Not coincidentally, Ferguson isn’t a medical doctor but a mathematician and theoretical physicist with a track record of presenting demented predictions to interested parties.
I’m no fan of credentialism. But when Packham tries it, maybe he needs questioning on his own qualifications – a basic degree in a non-physical ‘soft’ science, followed by an abandoned doctorate.
Paul Sutton can be found on Substack. His new book on woke issues, The Poetry of Gin and Tea, is out now.