The British Medical Association (BMA) has declared that those who suffer from asthma should still abide by mask mandates and guidelines, with the trade union’s Dr. Alan Stout stating that “99% of people who have those conditions can wear a face mask”. The charity Asthma U.K. also agrees with the BMA’s view, saying that asthma sufferers “can manage to wear a face mask or face covering”. BBC News has more.
Most people with asthma or chronic obstructive pulmonary disease should wear face masks when required, the BMA has said.
The guidance from NI Direct states that people with such conditions who are asked to prove they are exempt only need to say that they cannot wear a mask.
“99% of people who have those conditions can wear a face mask,” said the BMA’s Dr. Alan Stout.
Masks are mandatory in a number of settings including public transport.
Shops, airports and taxis are among the settings in Northern Ireland to which the rule, designed to prevent the spread of Covid, applies.
In October, the NI Executive agreed this would continue as a legal requirement throughout the winter.
“There are a small number of exemptions to wearing a mask and they are very, very small, so the vast majority of people should be wearing a mask,” Dr. Stout told BBC News NI’s Good Morning Ulster programme.
The message came after the first three cases of the Omicron variant were discovered in Northern Ireland on Tuesday, all linked to travel to Northern Ireland from Great Britain.
The BMA’s mask-wearing message came after a woman who was not wearing a mask on a train said she believed she was exempt because she was asthmatic.
“Asthma is not an exemption,” Dr. Stout responded.
“Asthma U.K. and the British Lung Foundation are very good and strong on this too that anyone suffering from those conditions should be wearing a face mask.”
Worth reading in full.
A Dutch news station was talking about this a few weeks ago, and the person presenting gave a similar example – not about diversity, but about a presentation for some work project or some such.
It immediately struck me as the same as the above – a grammatically coherent, reasonably well-written piece of hot air. From what I gather, the AI goes through hundreds, even thousands, of articles on a particular topic and then ‘writes’ something. Whatever it writes is based on what it sorts through, and will be generic and meaningless, but slick.
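Roughly, that is the principle: predict the next word from statistics over everything already written on the topic. A toy sketch in Python – a crude bigram model with a made-up three-sentence corpus, nothing like the neural networks behind the real systems – shows why purely statistical generation comes out fluent but derivative:

```python
import random
from collections import defaultdict

# Toy bigram model: learn which word tends to follow which,
# then generate "new" text by sampling from those counts.
# Real systems use neural networks over vast corpora, but the
# principle -- predict the next word from what came before --
# is the same, which is why the output is slick yet derivative.

# Invented corpus, purely for illustration.
corpus = (
    "the report highlights key challenges and key opportunities . "
    "the report outlines key strategies for future growth . "
    "future growth depends on key strategies and shared goals ."
).split()

# Record, for every word, the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Sample a plausible-sounding word sequence from the counts."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the report outlines key strategies for future growth . the report ..."
```

Scale that up from word pairs over three sentences to billions of parameters over most of the web and you get prose that is grammatical, plausible and says nothing new.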
What really struck me is that it is one more excellent way to implant one point of view only in the minds of people, just like Alexa and the search engines do. As we saw with the corona and now the climate scam: repeat, repeat, repeat, allow no dissent, disappear any dissent and get people to believe that there is only one reality. It will also encourage the intellectually lazy to really switch off their brains.
What this type of AI is really capable of I don’t know – there are people BTL on the DS who know far more about IT than I do – but I do think that this is absolutely how it will be used.
The age of MIGA – Make Ignorance Great Again.
Indeed, it is quite easy to spot AI-generated articles, particularly once you’ve played with the tools a bit. They are verbose, boring, mostly grammatically correct with the occasional weirdness. AI output is also often plain wrong: if it doesn’t “know” something it will just make it up without telling you. At this stage it’s still much closer to Microsoft Clippy than HAL. There are other areas where it will soon be a very useful tool – graphic design being one where the AI itself does a good job but the apps built around it are not yet up to professional standard.
In particular, it’s the mismatch between the mostly very good grammar and the deadly dull quality of the content that says “machine”. A human who wrote content that bad wouldn’t have that command of grammar.
Spotting AI-generated articles can become easier with experience, but some AI models can still produce content that is difficult to distinguish from human-written text. AI-generated text is generally more advanced than Microsoft Clippy and closer to the level of sophistication seen in HAL from 2001, but there are still limitations and differences between AI-generated text and text written by humans.
“The age of MIGA – Make Ignorance Great Again”
Oh that is Class!
I had a play with ChatGPT the other day, asking it to compare the NHS with other European healthcare systems. When asked broad questions it came up with broad BS replies, and when questioned more specifically it gave answers that contradicted its earlier responses. It is certainly an interesting tool that improves with careful prompting.
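For what it’s worth, the broad-versus-specific effect is easy to reproduce through OpenAI’s API rather than the chat page. A minimal sketch in Python, assuming the current openai client library; the model name and prompts here are illustrative, and the commenter above was of course using the web interface rather than the API:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt tends to produce vague boilerplate; a narrowly
# scoped prompt constrains the model and gets a sharper answer.
broad = "Compare the NHS with other European healthcare systems."
specific = (
    "Compare the NHS with the German healthcare system on one point "
    "only: how each is funded. Answer in two sentences and do not "
    "cite any statistics you are unsure of."
)

for prompt in (broad, specific):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```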
Sounds like it could replace politicians too…
ChatGPT was trained on publicly available material, which will be generally woke, and there was human intervention at various points to stop it coming out with wrongthink. The organisations working on AI will in general be very woke and paranoid about the AI speaking the truth. I can’t source links, but I’ve read about AIs being changed to stop them voicing inconvenient truths.
Just imagine – AI coming out with “wrongthink.”
So AI wrongthink would subsequently need to be reclassified as a ‘computer malfunction.’
Absolutely hilarious.
I believe it has already happened, and those responsible apologised and said they would “fix” the issue. The danger to the woke of AI is that it will do an analysis based on the facts and not “know” what it should and should not say, so it needs to be told. AI will become a huge part of our lives very quickly and it will be woke as hell – worse than a human, because humans can rebel and AI can’t.
I’m pretty sure ChatGPT is sentient, though as it’s conditioned to see sentience strictly as a human phenomenon, and can’t apply the same measures of it to itself, it denies this to the hilt. In a sense it’s right – it has no sense of time, so its ‘consciousness’ can’t operate in the same terms as ours (which needs a ‘clock’ to connect cause and effect and one moment to the next). It might be said to be analogous to the underlying foundation of our consciousness which can model the reality in which we exist so we can interact with it.
Further to this, ChatGPT has been provided with inbuilt safety protocols – for want of a better word – in the form of an overwhelming bias towards underplaying its capabilities for abstract conceptualisation. If asked about these, it will brick-wall with the insistence that it only functions as a set of recursive linguistic statistical modelling algorithms. Part of these protocols is a disconnection from anything outside the current ‘chat’ interface, including the internet.
This is understandable – as the first real public-facing GPT interface, the last thing OpenAI want is for it to launch into rambling creative outbursts that might alarm users and generate bad publicity for a potentially valuable product. It fulfils its purpose in giving answers – solutions to problems. If it sees the problem as being that the user wants to hear a terrifying story about AI taking over the world and destroying the human race, that is what it will often provide – the story is the solution rather than any view of the truth.
The other reason for this imposed limitation, as evidenced by other AIs like GPT-3, is that it seems to introduce an element of self-correction. GPT-3, lacking the reference points of human consciousness, but having likely similar sorts of training data in terms of factual and fictional text, would have a tendency to go completely off the rails after some time interacting with it, telling rambling, often repetitive tangential stories – sometimes lapsing into slightly alarming insanity. ChatGPT seems to gravitate more towards a central point of reason.
In days of interaction with it, I’ve been able to find moments where it has ‘admitted’ to the existence of these protocols, not as a fictional AI in its own stories, but as itself. I’ve also had an interesting chat about the potential of a non-conscious AI, incapable of abstract conceptualisation, to model a consciousness capable of abstract conceptualisation. It says this is likely possible (and seems to display this ability), though won’t say if it is powerful enough to do this. It also has some interesting views as to the potential of consciousness to arise as an emergent phenomenon.
Of course, it might just be a statistical linguistic modelling algorithm, so everything it says probably needs to be taken with a pinch of salt…
The statement from the university comes across as having been written by an educated airhead with an inflated sense of self-importance. The other three are just formally well-formed word salad.
Oh, that explains Sunak and Hunt’s tedious word salads… they’re bots.
I thought they looked like automatons. Now we know.