Advocates of Net Zero policies repeatedly reassure the public that they are advancing cheap, green energy with the promise of vast numbers of lucrative green jobs and world leadership for the U.K. in selected green technologies.
Unfortunately for the hard-pressed British electorate, ‘cheap, green energy’ is nothing more than an empty political slogan arrived at by dishonest sleight of hand. Specifically, politicians simply ignore the true costs of this ‘cheap, green energy’, which becomes very expensive once all costs are factored in.
The chart below is from the U.K. Government’s own Electricity Generation Costs 2023 report and forms the centrepiece of the green propaganda. It purportedly shows the ‘levelised cost of electricity’ (LCOE) for different generating technologies. The Government uses deceptive calculations to arrive at a distorted cost for electricity generated by wind-power over the lifetime of a plant, by which means the Government can falsely claim that offshore wind is 2.5 times cheaper than dirty old gas generation (CCGT).
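For readers unfamiliar with the metric, a levelised cost divides the discounted lifetime costs of a plant by its discounted lifetime output. A minimal sketch of the calculation, using entirely made-up figures rather than the Government's actual inputs:

```python
# Illustrative levelised cost of electricity (LCOE) calculation.
# All figures below are hypothetical placeholders, not the report's inputs.

def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Discounted lifetime costs divided by discounted lifetime output, in £/MWh."""
    disc_costs = capex  # build cost is incurred up front, so not discounted
    disc_output = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        disc_costs += annual_opex / factor
        disc_output += annual_mwh / factor
    return disc_costs / disc_output

# Hypothetical 1 MW plant: £1.5m build cost, £40k/yr running cost,
# 4,000 MWh/yr output, 25-year life, 3.5% discount rate.
print(round(lcoe(1_500_000, 40_000, 4_000, 25, 0.035), 2))  # roughly £33/MWh
```

Note what the formula leaves out: nothing in it distinguishes a megawatt hour delivered on demand from one delivered whenever the weather allows, which is precisely the objection developed below.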

The most outrageous trick in this ‘analysis’ is that the Government has treated completely reliable electricity generated by gas (‘dispatchable’, in the trade jargon) in exactly the same way as extraordinarily unreliable electricity generated by a wind farm. This is not even an apples-and-oranges comparison; it is an elephant-and-microchips comparison. As a simple thought experiment, how much of a discount would you require for a car, cooker or washing machine whose operation was controlled externally and was very hard to predict, versus the same equipment where you decide when to use it? For most people the answer would be an enormous discount, indicating that the true utility of a variable electricity supply is very, very low.
There is really no comparison between a megawatt hour from an inexpensive, tried-and-tested, reliable power source and an exceptionally unreliable one. We really are being played for patsies.
To make costs comparable, you would need to state them on a comparable dispatchable basis, which means combining a wind farm with a battery storage facility to produce a stable supply. At the moment there are only a handful of battery projects, which are very costly and provide only a short period of supply for their catchment area. As an indicator of the scale of battery backup required to ‘plug the gap’ when wind and solar power falter due to weather conditions, the Hornsdale ‘battery farm’ in South Australia cost A$90m, covers 2.5 acres and can provide just 28 minutes of electricity for its area if there is a total failure of ‘renewable’ energy supply.
For reference, calculations by Bjorn Lomborg, President of the Copenhagen Consensus Centre, indicate that the EU’s entire battery capacity is enough to cover just one minute and 21 seconds of average demand. At this stage it is not possible to calculate a cost per MWh (megawatt hour) for an offshore wind farm plus battery installation, but it is clear that the resulting levelised cost would be very high: orders of magnitude (that is, factors of 10) higher than the £44 per MWh shown in the chart above.
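The arithmetic behind such backup-duration claims is simple: divide the energy a battery stores by the rate at which demand drains it. A minimal sketch, where the storage and demand figures are illustrative assumptions rather than official statistics:

```python
# How long a fully charged battery can cover demand:
# duration = energy stored / power being drawn.
# The figures below are illustrative assumptions, not official statistics.

def backup_minutes(storage_mwh, demand_mw):
    """Minutes of supply a fully charged battery provides at a given demand."""
    return storage_mwh / demand_mw * 60

# Hypothetical example: a 130 MWh battery against a 280 MW regional demand.
print(round(backup_minutes(130, 280)))  # → 28 minutes
```

Scaling the same division up to national average demand is what produces figures like Lomborg's minute-and-a-bit for the whole of the EU.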
In prior years, the Government at least made a half-hearted stab at acknowledging the enormous difference in utility between reliable and unreliable electricity by considering various downstream impacts of variable green electricity. The methodology was not fully disclosed, but it involved adding additional costs onto wind farms to adjust their levelised costs and then deducting costs from the levelised cost of combined cycle gas turbine plants, which have delivered uninterrupted, cheap energy for decades. That analysis resulted in a somewhat complex chart showing ‘enhanced levelised cost of electricity’, which we have simplified below.
It is plain to see that gas (CCGT) is the cheapest form of electricity generation on an enhanced levelised cost basis even when accounting only in this partial way for downstream network costs.

The Government has not included any such assessment of downstream impacts in the 2023 analysis because so-called ‘balancing costs’ have been shifted onto the consumer. This is an arbitrary accounting convention and ignores the fact that the same costs will need to be incurred regardless of who is charged for them.
Note also that between the 2020 and 2023 assessments, the Government massively and somewhat arbitrarily inflated the cost of gas (CCGT) by assuming much higher ‘carbon costs’, raised from £32 per MWh to £60 per MWh (the carbon cost is a somewhat arbitrary value that the Government places on the ‘social cost’ of carbon emissions). By moving this assumption up or down, the Government can itself alter CCGT levelised costs and their attractiveness relative to other forms of generation. This huge increase has distorted the 2023 outcomes to make gas appear much less attractive compared to wind. Ultimately, though, this is a policy assumption rather than a physical or market factor, driven by value judgements rather than scientific reasons.
By applying the Government’s own earlier method of analysis to the 2023 assessment, we have illustrated how you go from a position where offshore wind appears to be 2.5 times cheaper than gas to a position where gas is the cheapest form of generation, once the system impacts are factored in as they were accounted for in the 2020 assessment.
There are a number of other questionable assumptions that, unsurprisingly, all work towards inflating the levelised cost of electricity from gas and reducing the levelised cost of wind. The main such assumption is a very high 61% load factor for offshore wind, which as far as we are aware has never been achieved anywhere in practice (the ‘load factor’ is the electricity actually produced by a wind farm over a year as a proportion of what it would produce if it ran at full rated output throughout). The Government’s own figures show actual load factors for offshore wind farms in the range of 39% to 47% up to 2017.
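The load factor calculation itself is straightforward. A minimal sketch, where the 100 MW capacity and 380,000 MWh annual output are hypothetical figures chosen only for illustration:

```python
# Load factor: actual annual output as a share of the theoretical maximum
# (running at full rated capacity for all 8,760 hours of the year).
# The wind farm figures below are hypothetical, for illustration only.

def load_factor(actual_mwh, capacity_mw, hours=8760):
    return actual_mwh / (capacity_mw * hours)

# Hypothetical 100 MW offshore wind farm producing 380,000 MWh in a year.
print(f"{load_factor(380_000, 100):.0%}")  # → 43%, within the observed 39-47% range
```

Raising the assumed load factor from the observed low-40s to 61% directly lowers the computed levelised cost, since the same plant is credited with far more output for the same spend.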
If you were to go a stage further and construct a truly representative scenario where gas power generation was replaced by wind, you would have to factor in the reality that you would need to effectively keep your old gas generation in reserve in order to have an uninterrupted supply of electricity. This leads to suboptimal operation of the gas plant and very high unit costs with lower output on the same fixed cost base.
In the scenarios that we looked at, any saving from ‘low cost’ wind (primarily a saving on carbon costs) would be more than offset by the very high unit cost of electricity from gas, which would have to be purchased at enormous ‘standby’ rates to cover every period of low or excessive wind and so prevent blackouts.
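The effect of pushing a gas plant into a standby role can be sketched in a few lines: the same fixed costs are recovered over far fewer megawatt hours, so the unit cost rises sharply. All figures below are illustrative assumptions, not values from the Government's assessment:

```python
# Unit cost of a plant whose fixed costs are spread over shrinking output.
# All figures are illustrative assumptions, not the Government's numbers.

def unit_cost(fixed_cost_per_year, variable_cost_per_mwh, mwh_per_year):
    """£/MWh: fixed costs spread over annual output, plus fuel and other variable costs."""
    return fixed_cost_per_year / mwh_per_year + variable_cost_per_mwh

full = unit_cost(60_000_000, 45, 3_000_000)   # plant running as baseload
standby = unit_cost(60_000_000, 45, 600_000)  # same plant, only 20% of the output
print(round(full), round(standby))  # prints 65 145
```

With these hypothetical numbers, cutting the plant's output to a fifth more than doubles its unit cost, which is the mechanism behind the ‘standby’ costs described above.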
Real world data – the acid test

Observed data is always the acid test. In an excellent report for the Manhattan Institute, Mark P. Mills includes a chart of residential electricity prices versus wind and solar capacity per capita across European countries. Doubtless there are some confounding factors, but overall the trend is crystal clear: more ‘cheap’ renewable energy is directly linked to higher residential energy prices. Very much as we expected, and the opposite outcome to that promised by U.K. politicians.
The technocrats’ only real answer to the problem of unreliable renewables is to limit the ability of the consumer to use electricity. This is probably the main reason that smart meters and smart appliances are being rolled out to the end of 2025. Again, the needs of the citizen will effectively be made subordinate to the needs of the system, itself dictated by ideology rather than supply problems or verifiable scientific reasons – a new and very unhealthy direction of travel.
Conclusions
You can see how easy it is to go from the fantasy of ‘cheap wind power’ promoted by the political class to the reality of very expensive wind, simply by including the real downstream costs. We have identified that the necessity of maintaining gas backup to wind power means that any savings from wind will often be more than negated by the costs of backup power. This proposition ties in with the observed reality that countries with higher levels of wind and solar capacity tend to have higher residential electricity prices.
The real problem though is the hell-for-leather dash for Net Zero and the accompanying plans produced by the unelected and unaccountable Climate Change Committee. This Soviet-style planning coupled with the Department for Business, Energy & Industrial Strategy arbitrating between different technologies and handing out billions in support of its favoured solution has all the characteristics of an accident waiting to happen. Bjorn Lomborg warned that “we are now going from wasting billions of dollars on ineffective policies to wasting trillions”.
The accelerated implementation of unreliable wind power will almost certainly lead to higher costs, lower living standards and the erosion of competitiveness and not to the green nirvana dishonestly promoted by Westminster politicians.
Alex Kriel is by training a physicist and was an early critic of the Imperial Covid model. He is a founder of the Thinking Coalition, which comprises a group of citizens who are concerned about Government overreach. Duncan White is a retired nurse with extensive experience of healthcare management and international health consultancy. He has researched Government carbon related policies for a number of years in cooperation with several U.K. groups representing the interests of motorists. This article was first published on the Thinking Coalition website. Sign up for updates here.
It’s déjà vu all over again.
In order to arrive at an accurate prediction of the future, we’ll consult astrologers, haruspexes and augurs for input and then pick and choose from their predictions in whatever way we want until a consensus on what the future will be has emerged!
In modern pseudo-tech speak: “Modelling is considerably more robust when more than one model (ideally a minimum of three) is considered and a consensus is built and agreed across a broad community.”
When trying to predict the future in three different ways leads to three different results, at least two of them must be wrong. As nobody knows which two are wrong, the only sensible course of action is to discard all three as there’s no point in basing decisions on information that’s at least 2/3 wrong.
Sometimes, I yearn for the times when superstitious people trying to influence others with their gobbledygook ended up being burnt by the church.
The climate change liars carefully selected 187 models predicting ‘global warming’ versus CO2 emissions from the many which came out of their modelling apparatus.
They were all wrong by some margin when compared with observations from the real world.
What I find puzzling is that no matter how many times the ‘experts’ and their predictions are wrong, the nitwits in charge and other useful idiots still believe whatever else they come up with.
I think what you mean JXB, is that the nitwits in charge are perfectly happy to have a set of results that they can use to fill their pockets with for absolutely no benefit to the citizens.
If there are at least 187 substantially different ways to do the same, namely, simulate the weather in the future, this clearly communicates that the people who came up with all of these have absolutely no clue how what they’re trying to simulate actually works.
“They were all wrong by some margin” is nothing but a pseudo-learned-sounding way of saying “They’re all wrong.”
In fairness, one set of model results produced a curve below all the others, although still generally higher than recorded temperatures (themselves manipulated to exaggerate warming).
Which set of results, then, has been least infected by GangGreen lunacy? Why, those from a big country with lots of coal, oil and gas to sell.
The Russian model. Go figure.
“When trying to predict the future in three different ways leads to three different results, at least two of them must be wrong.”
Why aren’t all three wrong?
It’s perfectly possible, and actually even very probable, that all three are wrong. But that’s a tangential point when trying to highlight the complete idiocy of this statement: when three different computer programs generate three different answers to the same question, two of them must necessarily be wrong. Hence, using all three to come up with a meta-result which is then used for actual decision-making has – at the very best – garbled the one accurate result.
That’s a classic example of throwing sand into people’s eyes by making something more complicated, even though this exact process actually guarantees that the combined output won’t make any sense.
And if all three are wrong?
“All models are wrong, some are useful.” George Box
Exactly so
Mathematical models are easily demolished by one inconvenient problem.
If mathematical models had any predictive power then some bright spark would have applied them to the stock market and he or she would quickly have become the richest person on planet Earth.
The fact that this has not been done tells you everything you need to know about the ‘power’ of these models.
As I already wrote last time: people are applying this to the stock market all the time and fooling themselves into believing that the outputs make sense, even though they are nowadays forced to include warnings to the contrary when advertising products like investment funds.
Have you not heard of Long-Term Capital Management with its Nobel prize winners?
Meanwhile… I like this graph of Joel Smalley’s. It looks nothing whatsoever like lockdown-fanboy Ferguson’s predictions either:
”In England & Wales during spring 2020, over a thirteen-week period, there were almost:
52,000 ‘COVID’ deaths
Mainstream Media reported each and every death daily as justification for the most illegitimate denial of British civilian liberties in history.
In England & Wales during winter 2021, after the introduction of a miracle “vaccine” and after such a massive depletion of the vulnerable/susceptible population, over a thirteen-week period, there were almost:
70,000 ‘COVID’ deaths
In winter 2023, with conspicuously few ‘COVID’ deaths, over a thirteen-week period, there were almost:
55,000 ‘winter’ excess deaths”
https://metatron.substack.com/p/all-deaths-are-equal-but-some-deaths
And the true number is about 10% of that, mostly people who were going to die anyway.
Non-deterministic models are as useful as tits on a bull.
Arguably if the variations are not significant then it’s not a huge issue. The bigger issue, IMO, is that the models are not re-checked against reality and revised/ditched when it turns out that they are rubbish at predicting what actually happens.
Arguably if the variations are not significant then it’s not a huge issue.
I think you’re misunderstanding the term. Non-deterministic means the output cannot be predicted based on the input, ie, that this is essentially a random process driven by its inputs in an unknown (and unknowable) way. This means one can as well resort to guessing or rolling dice to generate outputs.
But you can keep running them until the “required results” are delivered.
I know what deterministic means. My point was that if the result each time is within a small margin of the other results, the randomness is arguably not that important. I just think that it’s not the most glaring problem with modelling – the bigger issues are not sufficiently questioning the assumptions around the inputs and most damningly not revising the model based on its abject failure to predict anything.
I know what deterministic means. My point was that if the result each time is within a small margin of the other results, the randomness is arguably not that important.
That’s Ferguson’s point: Run it multiple times and average the results, it’s meant to be stochastic, anyway! And this doesn’t make any sense, because non-deterministic means the output cannot be predicted based on the input. This is literally the equivalent of rolling a die three times to ‘predict’ an unknown number between 3 and 18 and then boldly claiming that this is a valid method for predicting numbers because The average error is only about 1/3!
Let’s not forget that one thing modellers can do well, is to buy bigger, much more expensive computers to run their fanciful programmes (based on multiplying wild assed guesses together and applying lots of tweaks and fudges).
These huge increasingly fancy and expensive supercomputers, paid for by you, dear readers, and using more and more fossil fuels to run ’em, have been very effective in getting completely false results much faster.
The MET Office loves them.
They are perhaps the reason why the forecast for tomorrow is doubtful and the forecast for four days ahead is almost certainly wrong. Selwyn Froggatt with his pinecone and piece of seaweed did better.
The standard of the computer is irrelevant when the code is written by rank amateurs.
https://www.telegraph.co.uk/technology/2020/05/16/coding-led-lockdown-totally-unreliable-buggy-mess-say-experts/
GIGO
I find stochastic models very useful. After all they should contain all your deterministic models, each with their own probability of occurring.
“…the impact of public health measures, including border measures,..”
I don’t think birds can read.
Ha ha. Good point. Well, they’ll keep out the birds that travel by lorry, plane and ship. Any measure that reduces the opportunities for the virus to spread has got to be worth it. Isn’t that what we’re told by public health ‘experts’?
Can someone please permanently unplug Ferguson’s computer.
I suggest unplugging the guy himself. He’s probably just an AI chatbot.
That doesn’t seem to be the attraction for his married ‘friend’. They don’t call him Professor Pantsdown for nothing.
Next time the Guvmint wants a ‘projection’ from him, it should be strictly on the ‘cock on the block’ basis. Guvmint with (for once) hatchet in hand.
IMHO, the attraction for his married ‘friend’ is that he’s one of her business contacts. Practically, the difference between an incel and a professor of something having a fulfilling albeit somewhat limited private life is one of spending power.
:->
[This is a conjecture based on absolutely no real information save my general distrust in people.]
Neil Ferguson – paid to lie
Stand in the Park
Make friends & keep sane
Sundays 10.30am to 11.30am
Elms Field
near Everyman Cinema & play area
Wokingham RG40 2FE
Hi LS, just one question: why 10.30? Ours starts at 10.00! I’m always getting flak for being late!!!
If computer modelling had been used in Salem to determine who was a witch, I’m pretty sure all the women and half the men in town would have been burnt at the stake.
Plagiarising J K Galbraith – there are only 2 types of computer modellers, those who are wrong and those who don’t know they are wrong
That’s why they should follow my “suggestion” above.
Exactly
Two words for this lot of fear mongering clowns….
…. OFF!!!!
As a retired dairy farmer, I find it incomprehensible that this numbskull Ferguson should ever be believed. What he cost dairy farmers in anxiety and reputation with BSE, and the deaths of hundreds of thousands of perfectly healthy cattle in the F&M debacle. This was followed by Bird Flu and Swine Fever, which was believed by Bliar and then Brown and, since then, five more PMs – it beggars belief that Governments are so stupid.
One has to wonder why the Governments believed Imperial College predictions over the Oxford University and people like Prof Sunetra Gupta who had been the forerunners of the unit that was created at Imperial College.
As soon as I knew, in the third week of March 2020, that Ferguson was involved, I emailed my Tory MP and gave him lengthy reasons as to why this was a mistake and why it would be costly and wrong. The base figure Ferguson used was wrong, which means the outcomes become extrapolated to an advanced and ridiculous degree. That is what happened in all his previous predictions and, sure enough, it happened again.
This man ought to be charged with fraud, though treachery would be a charge he so richly deserves.