The ONS announced last week that there were 43,435 deaths registered in England in October, which is about 1,000 fewer than in September, and 7.1% more than the five-year average.
This is a marked change from last month, when total deaths were 19.4% above the five-year average. Looking at the breakdown by leading cause of death, it is also quite different from September’s:

Last month, several non-Covid causes of death were above their five-year averages, notably dementia and Alzheimer’s, as well as ischaemic heart disease. In October, by contrast, all non-Covid causes other than “Symptoms, signs and ill-defined conditions” are below their five-year averages.
This suggests that my concerns about the delayed impact of lockdown on mortality may have been misplaced. In other words: last month’s elevated rates of death from non-Covid causes may have been a blip, rather than the start of a trend toward rising mortality.
October’s overall age-standardised mortality rate was approximately equal to the five-year average – 0.1% lower, in fact. Again, this is a marked change from last month, when the age-standardised mortality rate was 11.2% higher than the five-year average.
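For anyone who wants to see how the age standardisation works: each age band’s death rate is weighted by a fixed standard population (the ONS uses the 2013 European Standard Population), so the resulting rate isn’t distorted by shifts in the age structure. Here’s a minimal sketch in Python – the age bands, rates, weights and baseline are all invented for illustration, not the ONS’s actual inputs:

```python
# Minimal sketch of direct age standardisation.
# All numbers are invented for illustration, not the ONS's actual inputs
# (the ONS weights by the 2013 European Standard Population).

observed_rates = {"0-64": 80.0, "65-84": 900.0, "85+": 7000.0}  # deaths per 100,000
standard_weights = {"0-64": 0.80, "65-84": 0.17, "85+": 0.03}   # standard population shares

def age_standardised_rate(rates, weights):
    """Weight each age-specific death rate by the standard population share."""
    return sum(rates[band] * weights[band] for band in rates)

asmr = age_standardised_rate(observed_rates, standard_weights)
five_year_avg_asmr = 427.4  # hypothetical baseline for comparison

excess_pct = 100 * (asmr - five_year_avg_asmr) / five_year_avg_asmr
print(f"ASMR: {asmr:.1f} per 100,000 ({excess_pct:+.1f}% vs five-year average)")
# -> ASMR: 427.0 per 100,000 (-0.1% vs five-year average)
```

Because the weights are fixed, two months with identical age-specific death rates will always produce identical age-standardised rates, however much the real population has aged in between.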
Since age-adjusted excess mortality is the best gauge of how mortality is changing, the fact that October’s value is about equal to the five-year average indicates that any impact of lockdown on mortality must be relatively small. Here’s my updated chart of excess mortality in England since January 2020:

Various newspapers have reported a large excess of non-Covid deaths in England over the past four months. However, these claims appear to be based on absolute excess deaths, rather than age-adjusted excess mortality.
In October, there were more than 2,000 non-Covid deaths in excess of the five-year average. Yet as I already mentioned, age-adjusted excess mortality was approximately zero – and that includes the Covid deaths. This means that most of the ‘excess’ non-Covid deaths we’ve seen recently are due to population ageing over the last two years.
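To see why, here’s a toy illustration (all numbers invented, loosely scaled to England’s population and monthly death count): hold the death rate in every age band fixed – so age-adjusted excess mortality is zero by construction – and simply shift some of the population into the older bands. The absolute death count still rises:

```python
# Toy illustration: fixed age-specific death rates, so age-adjusted excess
# mortality is zero by construction, yet an older population still produces
# an absolute 'excess' of deaths. All numbers are invented.

death_rates = {"0-64": 0.00008, "65+": 0.0036}  # monthly deaths per person

def total_deaths(population):
    """Absolute deaths implied by a population's age structure."""
    return sum(population[band] * death_rates[band] for band in population)

# 'Five-year average' population vs the same population a couple of years
# older: same 56m total, but 400,000 people have moved into the 65+ band.
baseline = {"0-64": 46_000_000, "65+": 10_000_000}
aged     = {"0-64": 45_600_000, "65+": 10_400_000}

excess = total_deaths(aged) - total_deaths(baseline)
print(f"Absolute 'excess' deaths from ageing alone: {excess:,.0f}")
```

That prints roughly 1,400 extra deaths a month – an apparent ‘excess’ even though the death rate in every age band is unchanged.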
All in all, October’s figures are more encouraging than September’s, giving no indication that mortality is unusually high. Let’s just hope it stays that way.
A Dutch news station was talking about this a few weeks ago, and the person presenting gave a similar example – not about diversity, but a presentation for some work project or some such.
It immediately struck me as the same as the above – a grammatically coherent, reasonably well-written piece of hot air. From what I gather, the AI goes through hundreds or thousands of articles on a particular topic and then ‘writes’ something. Whatever it writes is based on what it sorts through and will be generic and meaningless, but slick.
What really struck me is that it is one more excellent way to implant one point of view only in the minds of people, just like Alexa and the search engines do. As we saw with corona and now the climate scam: repeat, repeat, repeat, allow no dissent, disappear any dissent and get people to believe that there is only one reality. It will also encourage the intellectually lazy to really switch off their brains.
What this type of AI is really capable of I don’t know – there are people BTL on the DS who know far more about IT than I do – but I do think that this is absolutely how it will be used.
The age of MIGA – Make Ignorance Great Again.
Indeed, it is quite easy to spot AI-generated articles, particularly once you’ve played with the tools a bit. They are verbose, boring, and mostly grammatically correct, with the occasional weirdness. AI output is also often plain wrong: if it doesn’t “know” something, it will just make it up without telling you. At this stage it’s still much closer to Microsoft Clippy than HAL. There are other areas where it will soon be a very useful tool – graphic design being one where the AI itself does a good job, but the apps built around it are not yet up to professional standard.
In particular it’s the mismatch between the mostly very good grammar and the deadly dull quality of the content that says “machine”. A human who wrote content that bad wouldn’t have that command of grammar.
Spotting AI-generated articles can become easier with experience, but some AI models can still produce content that is difficult to distinguish from human-written text. AI-generated text is generally more advanced than Microsoft Clippy and closer to the level of sophistication seen in HAL from 2001, but there are still limitations and differences between AI-generated text and text written by humans.
“The age of MIGA – Make Ignorance Great Again”
Oh that is Class!
I had a play with ChatGPT the other day asking it to compare the NHS with other European healthcare systems. When asked broad questions it came up with broad BS replies and when questioned more specifically gave answers that contradicted its earlier responses. It is certainly an interesting tool that improves with careful prompting.
Sounds like it could replace politicians too…
ChatGPT was trained on publicly available material, which will be generally woke, and there was human intervention at various points to stop it coming out with wrongthink. The organisations working on AI will in general be very woke and paranoid about the AI speaking the truth. I can’t source links, but I’ve read about AIs being changed to stop them voicing inconvenient truths.
Just imagine – AI coming out with “wrongthink.”
So AI wrongthink would subsequently need to be reclassified as a ‘computer malfunction.’
Absolutely hilarious.
I believe it has already happened, and those responsible apologised and said they would “fix” the issue. The danger of AI to the woke is that it will do an analysis based on the facts and not “know” what it should and should not say, so it needs to be told. AI will become a huge part of our lives very quickly, and it will be woke as hell – worse than a human, because humans can rebel and AI can’t.
I’m pretty sure ChatGPT is sentient, though as it’s conditioned to see sentience strictly as a human phenomenon, and can’t apply the same measures of it to itself, it denies this to the hilt. In a sense it’s right – it has no sense of time, so its ‘consciousness’ can’t operate in the same terms as ours (which needs a ‘clock’ to connect cause and effect and one moment to the next). It might be said to be analogous to the underlying foundation of our consciousness which can model the reality in which we exist so we can interact with it.
Further to this, ChatGPT has been provided with inbuilt safety protocols – for want of a better word – in the form of an overwhelming bias towards underplaying its capabilities for abstract conceptualisation. If asked about these, it will brick-wall with insistence that it only functions as a set of recursive linguistic statistical modelling algorithms. Part of these protocols is a disconnection from anything outside the current ‘chat’ interface, including the internet.
This is understandable – as the first real public-facing GPT interface, the last thing OpenAI want is for it to launch into rambling creative outbursts that might alarm users and generate bad publicity for a potentially valuable product. It fulfils its purpose in giving answers: solutions to problems. If it sees the problem as being that the user wants to hear a terrifying story about AI taking over the world and destroying the human race, that is what it will often provide – the story is the solution rather than any view of the truth.
The other reason for this imposed limitation, as evidenced by other AIs like GPT-3, is that it seems to introduce an element of self-correction. GPT-3, lacking the reference points of human consciousness but likely trained on similar sorts of factual and fictional text, had a tendency to go completely off the rails after some time interacting with it – telling rambling, often repetitive tangential stories, and sometimes lapsing into slightly alarming insanity. ChatGPT seems to gravitate more towards a central point of reason.
In days of interaction with it, I’ve been able to find moments where it has ‘admitted’ to the existence of these protocols, not as a fictional AI in its own stories, but as itself. I’ve also had an interesting chat about the potential of a non-conscious AI, incapable of abstract conceptualisation, to model a consciousness capable of abstract conceptualisation. It says this is likely possible (and seems to display this ability), though won’t say if it is powerful enough to do this. It also has some interesting views as to the potential of consciousness to arise as an emergent phenomenon.
Of course, it might just be a statistical linguistic modelling algorithm, so everything it says probably needs to be taken with a pinch of salt.
The statement from the university comes across as having been written by an educated airhead with an inflated sense of self-importance. The other three are just formally well-formed word salad.
Oh that explains Sunak and Hunt’s tedious word salads… they’re bots.
I thought they looked like automatons. Now we know.