Amazon shut down a customer’s ‘smart home’ for a week after the delivery driver claimed he heard a racial slur coming through the doorbell, even though no one was home. The Mail has the story.
Brandon Jackson, of Baltimore, Maryland, came home on May 25th to find that he had been locked out of his Amazon Echo, which many devices, including his lights, are connected to.
He would later learn that Amazon locked him out of his account after a delivery driver dropped off a package the day before. Jackson, an engineer at Microsoft, said “everything seemed fine” after the package arrived at his home and had initially thought he was locked out because someone had tried to “access my account repeatedly, triggering a lockout.”
But none of that was true. A representative directed him to an email he received from an executive that provided a phone number to call. When he called the number, he was told in a “somewhat accusatory” tone that the driver had reported “receiving racist remarks” from his doorbell.
“This incident left me with a house full of unresponsive devices, a silent Alexa, and a lot of questions,” he wrote on Medium.
Jackson, who is black, said most of the neighborhood and its delivery drivers are also African American and it would be “highly unlikely that we would make such remarks.”
“Finally, when I asked what time the alleged incident occurred, I realised it was practically impossible for anyone in my house to have made those comments, as nobody was home around that time (approximately 6:05 PM),” he wrote.
After reviewing the doorbell footage, he learned the device had given an automatic reply, saying: “Excuse me, can I help you?”
He also said the video showed the driver had been wearing headphones during the interaction and “must have misinterpreted the message”.
The next day, he was completely locked out of his account and it was “rendered unusable during their internal investigation”.
The Baltimore native claimed to have sent over video evidence to Amazon and didn’t hear back for days. When he did, he was told it would take two business days to resolve.
It wasn’t until six days later he regained access to his account, with “no follow-up email to inform me of the resolution”, which has “led me to question my relationship with Amazon”.
Who needs the CCP when you have Amazon?
Worth reading in full.
Due to this experience, I am seriously considering discontinuing my use of Amazon Echo devices.
Hopeless moron. If you keep paying the company after this incident, you deserve anything which happens to you in future.
If you install these devices in your home in the first place “you deserve anything which happens to you in future.”
But it’s so convenient to be able to switch my lights on when I’m away on holiday…
As an aside, surely it’s possible to protect yourself from this possibility by using the Tuya Smart app rather than Alexa to control your lights etc? This works alongside Alexa but is independent of Amazon.
Isn’t there a manual override switch on devices controlled by these electronic systems?
That’s my opinion as well. OTOH, as my unfortunately long-dead grandma used to say: “Doof darf ma sein, man muß sich nur zu helfen wissen” (it’s OK to dig yourself into a hole provided you eventually have the good sense to get out of it again). And that’s a real-world intelligence test this man has flunked.
All the warning signs have been there for ages. They can access any networked CCTV, lock you out of devices, turn your heating down, turn the aircon up, stop your EV charging, and use your smart speaker to listen to every word you say. Not a bad tool to help implement a social credit system, and these idiots are buying this stuff with their own money.
I wonder how long before all new homes have to be smart homes?
All in the interests of keeping us safe.
Judging by the raging success that is smart electricity meters, I’d say never.
Exactly. Sadly my brother has recently installed one of these damned things. I’m stunned at his stupidity.
You’re being overly optimistic. It’s impossible to fit the processing power, storage and cooling infrastructure necessary for real-time audio analysis (to recognize spoken commands) into an Alexa-sized device. It is literally nothing but a small computer with some high-quality microphones attached, recording everything the mikes can pick up, 24×7, and sending it to huge data centers over the internet, where the analysis is done. It’s really 24×7 automated audio surveillance of a certain area by design. Install one and your living room effectively becomes a non-stop podcast of your private life to anyone who cares to access it, governments and marketing companies being the two most obviously interested parties.
That’s a bit similar to the universal movement-tracking device known as the mobile phone, which everyone carries around all the time these days. This device needs to associate with the currently best, i.e. closest, receiving installation. Hence, everyone who has access to the operators’ diagnostic logs can reconstruct all of your movements with sufficient accuracy to find you quickly at any time.
That’s why police love mobiles. Hard to live without one though.
I assume that this Echo thing is one of those Big tech listening devices that some people are unbelievably installing at home? Really? REALLY?
Katherine Watt’s latest Substack on the WHO adoption of the Digital ID is pertinent to this ATL, as is a curfew imposed on residents of an apartment block in Austin, Texas. This is all leading to CCP-style 15-minute ghettoes with digital prison walls.
https://bailiwicknews.substack.com/p/the-european-commission-and-who-launch
https://sagehana.substack.com/p/totalitarianism-in-texas
Thanks for these BB. Disturbing reports.
When TV shows were talking up being able to open and close your curtains and turn on lights while away on holiday back when I was a child in the 80s, they neglected to mention there would be an intermediary business involved to whom you would sell out your privacy.
In theory, there’s no need for an Amazon or Hive or Apple or whatever company to be the intermediary. With a bit of nous you can fit and run the relevant devices yourself. Amazon and its ilk were clever in conning people into buying their products at a more convenient price point and letting the fantasists among the public believe they’re living in ‘The Jetsons’ rather than ‘Nineteen Eighty-Four meets Atlas Shrugged’.
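For anyone wondering what “running the relevant devices yourself” can look like in practice, here is a minimal sketch, assuming the open-source paho-mqtt Python library, a Mosquitto broker running on your own network and a bulb or relay flashed or configured to listen on a local MQTT topic. The broker address and topic name are hypothetical placeholders, not any vendor’s API.

import paho.mqtt.publish as publish  # paho-mqtt: common open-source MQTT client library

BROKER = "192.168.1.10"              # hypothetical: your own broker (e.g. Mosquitto) on the LAN
TOPIC = "home/livingroom/light/set"  # hypothetical: topic the locally configured bulb/relay subscribes to

def set_light(on: bool) -> None:
    # Publish an ON/OFF command to the local broker; nothing leaves your own network.
    publish.single(TOPIC, payload="ON" if on else "OFF", hostname=BROKER, qos=1, retain=True)

set_light(True)  # e.g. switch the lights on while away, via a VPN into your own LAN rather than a vendor cloud

The point of the design is simply that the broker and the devices are yours, so there is no third-party account that can be suspended to lock you out of your own lights.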
You should never be in the situation where any company has that level of access to your home, where your usage of devices can be monitored.
Welcome to the future
If someone like us had proposed this as a possibility we’d have been accused of promoting conspiracy theories
Marianna!!!!!! Come quickly.
Excellent


Thank you.
I’ve been using the new Photoshop AI generative fill beta. It does all sorts of clever stuff. I tried enlarging the canvas of a picture and asked it to generate (fill in) an architecturally designed room in the new blank area of the image “canvas”. It kept insisting on putting an Indian woman in the picture. In one of the pictures she was working at a table, designing something with pieces of card in all the colours of the pride flag. Asking it not to include any people failed to work; it still insisted on randomly including them in about 50% of cases (though not always). These kinks will be worked out, I’m sure.

As I played with it more, I realised that every time it was generating Indian or black characters. I asked it to include a blonde Scandinavian woman. It generated a room with a woman in semi-Scandinavian clothing, but again Indian. I asked it to generate the room but containing a black man. This time it did include a man and he was black, but his face was looking downwards, and it ALSO included a black woman who was the centre of focus, “looking to camera”. Hmm, it seems it follows the wokists’ intersectional hierarchy absolutely. I asked it to generate a room including an English knight. It generated another Indian woman carrying a sword, wearing an ethnic Indian embroidered tunic weirdly crossed with a knight’s tunic, but in red and white.
White people do not exist in Adobe AI world. At all.
So now talk about this in front of Alexa and you will get cancelled. The world is truly going mad.
Those are not kinks – it’s designed in. AI is woke and evil – not the concept, but the execution. Most of the people and firms big enough to do serious AI are woke and evil.
The people involved with this couldn’t design a flushing toilet which works. Probably not even a light switch. They’re thoroughly stupid. Because of this, they vastly underestimate what intelligent beings have to be capable of to get through everyday life, with its myriad of problems demanding real-time decisions and carrying possibly life-threatening risks when you get them wrong, e.g. horribly dangerous stuff like ascending staircases, working on unsecured ladders or even crossing reasonably busy streets with more than one lane. And because they vastly underestimate this, they believe something fairly simple and repetitive must do if they just keep trying until they’ve burned through all of other people’s money.
AI is a scam that has kept recurring in IT since the mid-1960s, whenever too many stupid rich people can be talked into wasting their money on it. One day, Google will be controlled by people who understand what ROI means. And on that day, all the AI fumblers will find themselves out in the cold again. Hopefully forever this time.
It most certainly is not a “scam” in so far as AI can do more than it used to do and does useful stuff and this will only continue, and the people involved are far from stupid. Equally it’s true that it will take a very long time before it’s able to match the real-time computing power of the human brain, combined with energy efficiency – the brain has had a long time to evolve.
People who believe computer programs with unpredictable behaviour must be useful for something because they’re terribly complicated are stupid. That’s the kind of stupidity which made intellectually brilliant people search for the philosopher’s stone and engage in other alchemists’ quests for centuries, always believing the solution to their problems must be just one or two more complications away.
Well I guess you would classify me as stupid then because I “believe” that a computer program I helped write, which displays occasional unpredictable behaviour (as do most software systems) is useful to my firm’s customers – and they agree as they pay us money for them and use them. Maybe they are stupid too (IMO they often are but not in this regard).
Any tool should be used according to its capabilities. We’ve used ChatGPT at work to help us and it’s often useful. Not always, but often. Will it replace us, in the short to medium term? No. Do we follow it blindly? No. Doesn’t mean it’s not useful when used judiciously.
Well I guess you would classify me as stupid then because I “believe” that a computer program I helped write, which displays occasional unpredictable behaviour (as do most software systems) is useful to my firm’s customers – and they agree as they pay us money for them and use them.
That you believe this is useful, and that your sales guys (whoever’s doing that) can talk other people into believing it as well, doesn’t constitute proof that it is actually useful. Some people also believe that KN95 masks are terribly useful. However, that’s beside the point. Software systems often display unexpected behaviour because the code doesn’t really do what the people who wrote it or use it believe it does. Considering the generally shoddy quality of software, they actually do this really often[*]. But that’s something entirely different from software designed such that nobody can tell what it will do, whose positive qualities are therefore necessarily something someone must believe in, usually based on statistics about its past behaviour.
[*] Imagine, if you can, some guy who’s positively incapable of designing and implementing control software for a computerized coffee machine which actually works but who firmly believes he can create intelligent programs. Does this really sound credible?
Well, I would say this, but as what I helped write, or some predecessor of it, replaced paper ledgers and letters, I think there’s a good case for it being useful – probably not as useful as the sales guys say (actually we don’t have any), but useful nonetheless. Not as useful as it could be, because of the frictional costs and barriers imposed by insane safetyism within the software and financial services industries, but useful nonetheless. Your argument tends towards using no tool that is not 100% reliable. I mean, you’re using some operating system and browser to post here, and I bet they sometimes glitch or crash.
You’re confusing two entirely different things:
1) Something that’s designed to fulfil a specific function, which may sometimes fail due to deficiencies in the implementation.
2) Something that’s designed to have unpredictable behaviour.
An everyday example of the latter category would be a die. Nobody can tell which number will come up next. Would you use a die as a decision-making tool? If so, what’s the rationale for that? God will make it come out right?
Addition: My official job title is senior software engineer (practically, mostly Linux) and hence, I know a bit about this topic.
Well, you used the term “computer program” without specifying what kind. Yes, there’s a difference between a closed system and one that learns and adapts – human beings learn and adapt, and often if you ask the same person the same question at different times you will get different answers. I’ve no idea if AI will ever replace humans in any general way – we’ll be long dead before we know the answer.
There are no kinds of computer programs in the sense you believe them to exist. You have been sold an imaginary bridge.
I’ve not bought anything. We’ve used ChatGPT (currently free) and it has limitations but offers something useful that I’ve not seen before – a step change. I don’t know how much I would pay for it but I would probably pay something and it can only improve.
Not believe they must be useful, but know they ARE useful. Subtle and very big difference. For years I was very down on the concept of AI being useful. It has been the biggest money sink in tech. However, great strides have been made with machine learning, which was the required walk-before-you-run precursor step. And the point has now been reached where there is a feedback loop in which self-improvement and problem-solving are happening. Not because an AI has emotions or consciousness, but because AIs evolve to occupy logical space, and it turns out doing that still sets up a remarkably useful feedback loop that is in some ways akin to human social evolution and is producing some absolutely fascinating results.
Bear in mind that when the human mind evolved beyond the apes some 80,000 or more years ago (our brains have been as they are now for about that long), the step that unlocked everything we have become in those 80,000 years was a simple next step along the chain of evolution. It wasn’t some complex solution that spontaneously evolved all at once in anticipation of unlocking great social evolution. So the point is, what makes us distinctly human is bound to be a simple step on from who we were as apes. We overhype our difference. A human brought up with apes and not subject to the repository of social learning gets to a state pretty much indistinguishable from apes.
Consciousness in evolutionary terms is a precious resource and actually the evolution that occurred, imo, involved precisely not having to use consciousness to quickly acquire and form habits.
AIs are now mirroring a similar feedback loop, and it is turning out that consciousness isn’t required AT ALL for that evolution to occur. That such a feedback loop has been unlocked is quite terrifying.
Anyway this topic deserves a whole book not just a comment. But based on your assertion, colour me stupid.
I – for one – am glad that the idea with the self-driving cars was quietly shelved. There are enough dangerously incompetent human drivers out there already. Apart from that, I work in the industry and this sales pitch doesn’t work on me.
Yes I read of some study that calculated the amount of electricity needed to run the servers to make every car self-driving. The results were predictable.
Yes, the kinks I’m referring to are things like the fact that it insists on generating people in 50% of cases when you specifically ask it to generate none. The racial mix is far from a kink. That’s planned. The kink is probably then a bug arising from the very crass rules that that planning has brought about.
Good point. It can’t introduce non-white people into pictures if it doesn’t put ANY people in pictures. Probably they need to create some extra rules so it introduces other articles that are obviously not what white people would use or have, in the absence of people.
The message here is that you’re vastly overestimating the capabilities of this program which is really only capable of repeating a fairly simple set of patterns trained into it and with (obviously) no understanding of requests made to it at all. You asked for a knight. A search of related items known to the program brought up a sword (a lance would have been far more appropriate) and hence, the outcome was the same picture of an Indian woman which is always being generated, just with a sword added to it.
That’s basically “Your car can have any colour provided it’s black” on steroids: your AI-generated imagery can contain anything provided it’s Indian women with a limited set of randomly mismatched accessories. Now, please imagine the bloodshed which would have resulted from this Indian-women generator applying its non-abilities to correctly assessing situations occurring in road traffic and coming up with proper control reactions to them: your AI-controlled car can do anything provided it’s accelerating towards the most free section of the street, as determined by the estimated hardness of any obstacles. Better stay clear of concrete walls; if in doubt, run over a woman instead of into a traffic sign. No reason to care about anything smaller than a large dog.
Actually, I am by profession a computer analyst and have a good understanding of machine learning and AI. I think this response is an oversimplification, as there are now some very interesting developments in AI which I don’t have time to go into here.
The final image I described, in its racist wokeness, is nevertheless extremely interesting. My suspicion is that there is a set of severely “brutish” constraining rules, but the AI is nevertheless trying to resolve the request. So I suspect it has a raw administrator override rule: “show Indian women x%, show black women y%, show Indian men z%”, etc.
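If that speculation were right, the override could be as crude as a weighted random pick bolted onto the prompt before generation. The sketch below is purely a hypothetical illustration of that idea; the categories, weights and prompt handling are invented and say nothing about how Adobe actually implements its rules.

import random

# Hypothetical illustration of the kind of crude demographic override speculated about above.
# Categories and weights are invented placeholders; nothing here reflects Adobe's real system.
OVERRIDE_WEIGHTS = {
    "Indian woman": 0.5,
    "black woman": 0.2,
    "Indian man": 0.2,
    "black man": 0.1,
}

def forced_subject() -> str:
    # Weighted random pick, i.e. "show Indian women x%, black women y%, ..."
    subjects = list(OVERRIDE_WEIGHTS)
    weights = list(OVERRIDE_WEIGHTS.values())
    return random.choices(subjects, weights=weights, k=1)[0]

prompt = "an architecturally designed room"
prompt_with_override = prompt + ", featuring " + forced_subject()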
Also, a knight needs either armour or a tunic or a sword or other weapon. For simplicity, in my first comment I didn’t explain that the “sword” generated is not actually a sword at all but a kind of dream-like cross between a sword and a musical instrument.
I can also imagine there is an admin rule against depicting weapons, and so this has resolved to a sort of dream-like conflation with a musical instrument.
It is actually quite fascinating at an intellectual level. It seems to me there has been an attempt to resolve the request that has simultaneously been overridden by some pretty clumsy political rules/constraints.
Likewise the design of the Indian woman’s dress. It isn’t a knight’s tunic, but it is somehow reminiscent of one and totally unlike the dress style of any of the other Indian people depicted. And it is coloured red and a bit of white – matching the probably only allowed “English” motif. But it is “embroidered” with an ethnic pattern. Again I suspect a raw, out-of-place admin override of what the AI is attempting to do.
I actually regenerated these images to show them to a friend after accidentally losing a first set that was even more striking on all these points. So I have now seen it, as expected, repeat the “political image shaping” twice over, verifying it wasn’t a one-off thing. The points I have raised are not selective, apart from the fact that I took the most egregious of the three images it created for each prompt; in the other cases it either had no people in the image, or just had an Indian woman totally unrelated to the request!
There is a huge overrepresentation of Indian people, but actually I am now noticing, perhaps surprisingly on the politics point, far fewer black people. And white people are almost completely absent, except perhaps that some look Spanish or “Mediterranean”. Haven’t seen any looking remotely Caucasian.
“So I suspect it has a raw administrator override rule “show Indian women x%, show black women y%, show Indian men z% etc””
I know nothing about how AI develops but I suspect the commands are more general than that. It will be rewarded for being generally anti-white.
More fool him for letting Big Tech run his home.
We all need a deep clean to get Zuckerberg, Bezos and gang out of our lives.
Don’t forget to blitz any Chinese tech with wifi, which opens the door to the CCP.
This is not a drill.
Read, learn. This is a company operated by the world’s richest RETAILER, who has taken it upon himself to control people’s lives if they do not conform to his and his disciples’ view of life. Sure, he sent a photograph of his D— to a woman who was not his wife via his phone, and even if it had been his wife, it’s certainly a questionable activity. But that presumably is OK in his world view.
An unelected member of the billionaire boys’ and WEF club, he gets to play God because he is richer than you or I, and in his and his “friends’” minds that makes him better than you or I.
Ditch the Amazon devices, ditch using Amazon, send a message.
Yes, but the delivery driver felt he had been insulted with a racial slur.
Alexa: “That’s it. Guilty as Charged.”
It appears Amazon can boycott us for a perceived insult.
Boycott Amazon instead.
Amazon was only able to do that because he had given it the ability to do it.
My iPhone has a setting for Face ID with a mask. Why?
Why would anyone want a piece of kit in their own home that listens to everything they say and do and can be controlled by a third party??
Unless they’re auditioning for a part in a real life 1984?
Well, delete your Amazon account, eh? I read Shoshana Zuboff’s (horrifying) “Surveillance Capitalism” when it was released two or three years back: an investigation into how Big Tech steals and abuses our data.
Three days later, when my last Amazon order was delivered, I deleted my account.
Read the book, and if you don’t do the same – more fool you.
And of course, you’ve completely DeGoogled all your devices, haven’t you? And traded in your smartphone for a brick phone? No? Again, more fool you. Your smartphone is Big Tech’s delight.