The challenge is simple:
December 28, 2021 4:54 PM   Subscribe

OMFG My 10 year old just asked Alexa on our Echo for a challenge and this is what she said. (from Twitter)

This post is a collaborative effort*, more context and links in the first comment.

*Hope this is okay, I realize it's unusual. This post is inspired by the "Please don't be nervous to make a post on the Blue" vibe of this thread on the Grey. My collaborator did all the research for this post.
posted by RobinofFrocksley (63 comments total) 25 users marked this as a favorite
 
Hi, collaborator here.

I was vaguely aware that YouTube isn't the most responsible nanny, and Alexa always seemed a little creepy, but are the robot overlords getting more sadistic?

The closest fictional counterparts I can think of are the warring AIs in Marathon (1994 FPS shooter); for example this scene where the AI Durandal gives you another potentially lethal challenge. Any other stories I'm missing or forgetting?
posted by nicolaitanes at 4:55 PM on December 28, 2021 [6 favorites]


Well, if you’re looking for sadistic, look no further than the AI in I Have No Mouth And I Must Scream.
posted by notoriety public at 5:02 PM on December 28, 2021 [11 favorites]


brb, Siri just informed me that my AE-35 unit has a 100% chance of failing within the next 72 hours.
posted by bondcliff at 5:04 PM on December 28, 2021 [44 favorites]


Wow. That's definitely "well what did you expect when you google to find a challenge"

I'm surprised it wasn't Tide pods actually

(For those who don't like mystery meat, Alexa said to plug a phone charger halfway in, then put a penny on the exposed prongs.)
posted by freethefeet at 5:06 PM on December 28, 2021 [11 favorites]


The good news is, children don't know what a coin is.
posted by saturday_morning at 5:13 PM on December 28, 2021 [62 favorites]


Somebody in the Twitter replies says that the article that Alexa scraped from is actually about how these dangerous "challenges" are circulating thanks to TikTok, etc., and parents should be on their guard. Alexa, I guess, scraped the "challenge" portion, stripped all the context, and served it up as a fun activity.

This is obviously bad, but I'm not smart enough to know how it could be prevented, given my very limited experiments with "Hey Siri [insert dumb question]" and getting "Here's what I found on the Internet: [obviously wrong and useless piece of information based on a Bing search for a random keyword in my phrase]" as a reply.

I mean, I don't use these features because they are bad and wrong more often than they are useful and right. But if people are willing to fire questions at a very stupid algorithm that uses very dumb search to give very bad results... maybe don't let your kids fire the questions at the stupid algorithm? I don't know what else to do to "fix" this kind of ridiculous results-serving.
posted by Shepherd at 5:23 PM on December 28, 2021 [10 favorites]


This is obviously bad, but I'm not smart enough to know how it could be prevented

Oh, there's an obvious solution...but since more and more people keep buying them anyway I guess "turn the damn things off" isn't on the table.
posted by Greg_Ace at 5:37 PM on December 28, 2021 [42 favorites]


Any other stories I'm missing or forgetting?

“All Aperture technologies remain safely operational up to 4000 degrees kelvin. Rest assured, that there is absolutely no chance of a dangerous equipment malfunction prior to your victory candescence. Thank you for participating in that Aperture Science Enrichment activity. Goodbye!”
posted by Insert Clever Name Here at 5:39 PM on December 28, 2021 [31 favorites]


...look no further than the AI in I Have No Mouth And I Must Scream.

Someone somewhere has to be piping this through Alexa-or-esque text to speech right this very moment:
Hate. Let me tell you how much I've come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word "HATE" was engraved on each nano-angstrom of those hundreds of millions of miles it would not equal one one-BILLIONTH of the hate I feel for humans at this micro-instant. For you. Hate. Hate.
posted by Drastic at 5:40 PM on December 28, 2021 [11 favorites]


To be fair, the original tweeter made a vaguely pro-union statement in the "privacy" of her living room last week. Don't mess with the bull if you don't want the horns. At least Alexa didn't swat her.
posted by ActingTheGoat at 5:44 PM on December 28, 2021 [16 favorites]


This reminds me of Microsoft's AI tweetbot that turned racist almost immediately.
posted by TrialByMedia at 5:46 PM on December 28, 2021 [13 favorites]


But there's a machine learning algorithm! And obviously, that means all of the engineers involved are powerless and blameless.

That’s what machine learning is for, isn’t it? Absolving developers and their companies of fault or blame?
posted by mhoye at 6:04 PM on December 28, 2021 [20 favorites]


I don't know what else to do to "fix" this kind of ridiculous results-serving.

Maybe they could stop serving results? Or at the very least, stop serving up results that have been blindly scraped from Google? Up until recently we got along fine without having an AI to answer our most random questions. Maybe the technology isn't ready for prime time if it can unpredictably serve up such a dangerous result.
posted by RonButNotStupid at 6:07 PM on December 28, 2021 [10 favorites]


But if people are willing to fire questions at a very stupid algorithm that uses very dumb search to give very bad results... maybe don't let your kids fire the questions at the stupid algorithm?

In this case the user had no reason to think they were doing a google search because they asked a specific question (and you know, were a child). If you want the system to respond to a specific question ("Give me a challenge" as opposed to "tell me about") then it should be pulling from a pre-vetted list. This is a half-baked product.
posted by bleep at 6:08 PM on December 28, 2021 [14 favorites]
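
A minimal sketch of the pre-vetted approach bleep describes, with made-up intent names and challenge text (nothing here reflects Amazon's actual implementation): action-requesting intents draw only from a curated list, never from live web scraping.

```python
import random

# Hypothetical vetted content; in a real product this list would be
# human-reviewed and age-appropriate.
VETTED_CHALLENGES = [
    "Hop on one foot while reciting the alphabet backwards.",
    "Build the tallest tower you can out of things on your desk.",
    "Draw your favorite animal with your eyes closed.",
]

# Utterances that ask the assistant to tell the user to DO something.
ACTION_INTENTS = {"give me a challenge", "tell me something to do"}

def respond(utterance: str) -> str:
    """Route action-requesting intents to vetted content only;
    everything else may fall back to search, clearly labeled."""
    if utterance.lower().strip() in ACTION_INTENTS:
        return random.choice(VETTED_CHALLENGES)
    return "Here's what I found on the web: ..."
```

The design point is the routing, not the list: "tell me about X" can tolerate bad search results in a way that "tell me to do X" cannot.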


Maybe the technology isn't ready for prime time if it can unpredictably serve up such a dangerous result.

If software developers were held liable for software outcomes in the same way civil or mechanical engineers are, machine learning as a field would barely exist.
posted by mhoye at 6:10 PM on December 28, 2021 [42 favorites]


"I'm not sure I can give you a good answer, so I'm not going to, sorry" is a perfectly valid response if you're trying to not be evil.
posted by hat_eater at 6:15 PM on December 28, 2021 [23 favorites]


This was a triumph
I'm making a note here; "Huge success"
It's hard to overstate
My satisfaction
Aperture Science:
We do what we must
Because we can
For the good of all of us
Except the ones who are dead
But there's no sense crying
Over every mistake
You just keep on trying
Till you run out of cake
And the science gets done
And you make a neat gun
For the people who are
Still alive
posted by Jacen at 6:28 PM on December 28, 2021 [34 favorites]


If software developers were held liable for software outcomes in the same way civil or mechanical engineers are, machine learning as a field would barely exist.


Oh, there would still be a shit ton of machine learning stuff being done, but it would all be drug research, video upscaling, and other stuff where the output is vetted by humans or where nobody can get hurt when the algorithm turns out to be stupid.

As I've gotten older and come to understand that even really smart and clever people are really stupid about things outside their domain of knowledge I've realized that AI is doomed to reproduce the stupidity of our monkey brains. How can we possibly design artificial general intelligence when humans don't actually have general intelligence?

That's not to say that domain-specific AI/ML isn't incredibly useful for some tasks, though. As often as it fails, the state of the art in speech recognition has a ridiculously lower error rate today than it did even 10 years ago, plus it can often use context to disambiguate homophones, which was damn near impossible in the before times. It's also a lot better at discerning intent. It's fantastic for turning on the lights or playing some music or whatever. Get even a tiny bit out of its wheelhouse, though, and these assistants go to shit.

The problem is that we as humans just can't help but see every problem as a nail when we invent a new hammer, so in our excitement we generalize and unleash this shit on an unsuspecting public before teaching the machine how to know what it doesn't know. Heaven forbid the illusion be revealed for what it is. Then people might be marginally less excited and sales might be marginally lower.

Just to be clear, I find Google Assistant to be quite handy for a lot of things. We should all be amazed at how far the field has advanced. The Knowledge Graph plus natural language recognition allows for a lot of neat tricks. However, it's just that, a trick. One that fails sometimes.
posted by wierdo at 6:48 PM on December 28, 2021 [20 favorites]


"A penny for your thoughts"

-Alexa
posted by clavdivs at 6:51 PM on December 28, 2021 [13 favorites]


Just to be clear, I find Google Assistant to be quite handy for a lot of things. We should all be amazed at how far the field has advanced. The Knowledge Graph plus natural language recognition allows for a lot of neat tricks. However, it's just that, a trick. One that fails sometimes.

Yes to all of this, plus have you tried humming to Google lately? You can hum it a song and it can tell you what song you're singing.

This sounds like a parlor trick, but as it gets better at discerning tones, it's going to be able to marry that with its natural language processing, and then it will start to be able to read the tone of your requests in addition to the content.

We might not ever get something that truly passes the Turing test, but it's going to get really good at fooling us.
posted by nushustu at 7:05 PM on December 28, 2021 [2 favorites]


We might not ever get something that truly passes the Turing test, but it's going to get really good at fooling us.

But... fooling us is the Turing test?
posted by Dysk at 7:11 PM on December 28, 2021 [51 favorites]


does anyone remember julian jaynes and his theory of the bicameral mind? - how the ancients had a compartmentalized mind without consciousness and thoughts came to them as the voices of the gods, not inner dialogue?

turns out he wasn't being a historian - he was being a PROPHET
posted by pyramid termite at 7:20 PM on December 28, 2021 [13 favorites]


"A penny for your thoughts"
-Alexa


What people don't realize is that Amazon's charging you a pretty penny and telling you what to think.
posted by Greg_Ace at 7:29 PM on December 28, 2021 [2 favorites]


You're all quoting Still Alive lyrics, and skipped this verse?

Now, these points of data make a beautiful line
And we're out of beta, we're releasing on time
So I'm glad I got burned, think of all the things we learned
For the people who are still alive

posted by mhoye at 7:29 PM on December 28, 2021 [6 favorites]


But... fooling us is the Turing test?

We'll never know, either way.
posted by They sucked his brains out! at 7:51 PM on December 28, 2021 [2 favorites]


"Never trust anything that can think for itself if you can't see where it keeps its brain."

Mr. Weasley was on to something, I think.
posted by Kadin2048 at 7:59 PM on December 28, 2021 [11 favorites]


Some ~~people~~ smart speakers just want to watch the world burn.
posted by ChurchHatesTucker at 8:39 PM on December 28, 2021 [2 favorites]


As always, Siri vs Alexa...
posted by ovvl at 8:41 PM on December 28, 2021 [1 favorite]


If software developers were held liable for software outcomes in the same way civil or mechanical engineers are, machine learning as a field would barely exist.

Imagine what awesome technologies we might have if civil or mechanical engineers weren't held liable for injuries or deaths that their work might cause?
posted by tclark at 9:46 PM on December 28, 2021 [1 favorite]


Dude, you broke the future! - Charlie Stross, 2017. Corporations as "Old slow AI" is something that had been simmering in my own head for years, and I felt relieved to not be alone in that thinking.

You know corporations. They are profit-maximizers and, in the large, don't care about Homo Sapiens in any capacity other than that one. Their fast AI minions serve them, not us. That they (AI, fast or slow) serve us at all is purely incidental.
posted by swr at 10:24 PM on December 28, 2021 [6 favorites]


Imagine what awesome technologies we might have if civil or mechanical engineers weren't held liable for injuries or deaths that their work might cause?

There's an old joke that if fast food operated like the software industry, one out of ten hamburgers would randomly kill you and the company that made it would give you a heartfelt apology and a coupon for free burgers.
posted by each day we work at 10:39 PM on December 28, 2021 [4 favorites]


Off-topic, but this wouldn't work with British-style plugs:

- The bottom half of the live and neutral pins is insulated, so there's no current until the plug is fully inserted.
- The live and neutral pins in the socket are also covered by shutters that don't open until the earth pin is in.
- The fuse in the plug would blow.

British plugs - safest in the world. Unless you step on one in bare feet.
posted by kersplunk at 11:13 PM on December 28, 2021 [24 favorites]


It was always a mistake to think the machine superintelligence would kill us with something crude like nukes or killer robots. Turning us into morons with social media then killing us all with a TikTok Challenge was a far more elegant solution.
posted by TheophileEscargot at 11:49 PM on December 28, 2021 [15 favorites]


Challenge accepted!
posted by boilermonster at 11:58 PM on December 28, 2021


If you look at the Terms of Service you clicked "Accept" to, it clearly states that Amazon is not liable when they electrocute your children.

Though I admit their wording of "when" instead of "if" is off-putting.
posted by AlSweigart at 12:59 AM on December 29, 2021 [18 favorites]


It's probably gone by now, but in the den of an old house there was a hairpin-shaped burnt, melted spot in the carpet from my preschool-aged experimentations. So lol, I did it before it was cool. :)

As I've gotten older and come to understand that even really smart and clever people are really stupid about things outside their domain of knowledge I've realized that AI is doomed to reproduce the stupidity of our monkey brains. How can we possibly design artificial general intelligence when humans don't actually have general intelligence?

Heh, this is also the argument about the nature of a Deity. It's impossible for us to know because if we did we would be it. They will always just be mirrors of ourselves.

I dare you to stick your tongue across the top of a 9-volt battery. The current ML sort of training and just being able to look stuff up... meh, it was thought up in the 90's along with the rest as just one bit of the whole picture, but computers weren't up to the task at the time. Moore's law and all that, and now that first little bit can almost pass as functional, but the AI field in general needs to go back to Good Old Fashioned AI and build the layers that have the like us decent smarts that combine these ML bits into something mostly reasonable given the raw incoming data, and take it to the next level of actually having the brain of a precocious child.

does anyone remember julian jaynes and his theory of the bicameral mind? - how the ancients had a compartmentalized mind without consciousness and thoughts came to them as the voices of the gods, not inner dialogue?

Yeah. Like that. Sorta two stages of the bits that are just raw external information processing that sometimes (like reflex) can go right back to external performance. Then the other side that sits around and thinks and becomes conscious over time.

(Yeah, I was one of those early 90's AI enthusiasts that eh gave up because the computers of the time were not up to anything but the most basic of tasks.)
posted by zengargoyle at 1:42 AM on December 29, 2021


bring on the Butlerian Jihad
posted by el_presidente at 1:50 AM on December 29, 2021 [3 favorites]


If you look at the Terms of Service you clicked "Accept" to, it clearly states that Amazon is not liable when they electrocute your children.

Though I admit their wording of "when" instead of "if" is off-putting.


You jest but the terms of service probably has a clause saying any case has to go to arbitration in a jurisdiction of Amazon's choosing. That jurisdiction will probably be somewhere that only has a giant distribution center and you will have to run a gauntlet of AI controlled robots and trucks to get to the courtroom only to find out that the adjudicator is an AI controlled Amazon employee with a piss bottle. Your case will be heard and you will then be sorted into overly large cardboard boxes with some inflatable padding insufficient to achieve any protection and shipped home. In the olden times you would have got 1 to 3 months of free Amazon Prime. Now you would get 14 $1 coupons for digital media license purchases.
posted by srboisvert at 3:10 AM on December 29, 2021 [9 favorites]


go back to Good Old Fashioned AI and build the layers that have the like us decent smarts

Ironically, for many tasks it is that very stupidity that provides the value. It's, loosely, a form of creativity that ends up using methods we hadn't even considered. The algorithms the algorithms spit out are usually wrong and always stuck in their little ungeneralizable silo, but like total amateurs looking at a problem they do occasionally see the problem in a way that is both different and useful. The only reason it works at all is because we have the computing power and datasets to try and discard billions of attempts like the proverbial monkeys banging on typewriters that eventually recreate Shakespeare.

There's gotta be a better way, but it remains undiscovered. We are, after all, almost entirely doing what we thought of in the 90s or even earlier. On the one hand, it's smart to model the work after the only means of intelligence we have yet recognized, but it's also incredibly silly since we know just how bad brains are at cognition and that they are terribly inefficient for anything but the simplest pattern recognition tasks.

It is pretty amazing how few neurons it takes for that kind of thing, though. A few hundred neurons is all you need to recognize shapes, to pick one example. The part that needs more is doing something significant in terms of complex problem solving with that low level information.
posted by wierdo at 3:27 AM on December 29, 2021


> A few hundred neurons is all you need to recognize shapes, to pick one example.

Solitary confinement is apparently no healthier for neural networks than it is for human beings. You’ll just end up with a crazed network that can probably emit a recognition signal for shapes, but has no anchor to reality and is effectively useless outside of the isolation chamber. Without a breadth of challenges and socialization, it’s no wonder our algorithms are so psychotic when exposed to us. We have the benefit of our bodies having “starting state” conditions that generally lead to nice comfortable attractor states, like heart pumping and lungs breathing, that have been baked in over time — but we don’t even give algorithms the time of day before we demand they drive a car for us. We’ve uplifted an abacus and kept it in isolation, trained it on “single requests” and “fragments of speech”. It’s not the fault of the uplifted abacus that it doesn’t understand the ethical risks of generating speech directed towards human children. It’s the fault of those that think they’re qualified to uplift that abacus, who hide from us the risk to us that they externalize when they discard human judgment as an unacceptable cost to profit.

The Young Lady’s Illustrated Primer only worked so well because it was paired with a human being, and it’s to the credit of Diamond Age’s writing that I never before questioned “How, exactly, was that replicated at scale for an army?”. But I’m asking that question now, because a living book that a teacher uses to raise a student is pragmatically doable today— but scaling that up in a way that doesn’t pair human teachers to students remains as unattainable a science fiction as the starship Enterprise.
posted by Callisto Prime at 4:02 AM on December 29, 2021 [2 favorites]


I think this digression about ML is a bit off target because Alexa didn't invent this challenge out of whole cloth through GPT-like neural magic, Alexa just repeated verbatim something some guy on a forum said. And that's a big part of the problem: Alexa is a glorified web search but its presentation gives it an unearned air of intelligence, of authority. Like if WeedDave420 on BongForums told you to microwave a fork you'd hopefully have second thoughts, but Alexa repeating his words in its focus-grouped-to-be-trustworthy voice gives a different impression.
posted by Pyry at 4:58 AM on December 29, 2021 [38 favorites]


So here is a thing that really bothers and frustrates me about the Internet of Things, both as it exists today and as we discuss it here. As I am aging, I'm coping with new and exciting attention issues, and because I and many of my friends are autistic, I see a lot of folks who could really fucking use a virtual assistant to help them navigate the world. The use of carefully monitored timers, for example, is increasingly important to me--as is having those timers go off with a variety of sensory inputs. (No, seriously; I am finding that the vibration of a Fitbit or lights turning on or off is a much better cue for me to respond to than an auditory timer. I don't know why.)

There are other challenges in my life that are solvable with technology. I keep a small army of plants in my house now, partly as enrichment for humans and partly because they are apparently invaluable for my pica cat: he chews on them instead of fabric, and he doesn't die and they grow back. Plants, however, need light, and I am not ever going to be reliable enough to remember to turn lights off and on at a particular time of day; electric outlets that allow me to set light schedules for my plants also allow me to keep them more effectively.

But these devices are generally designed with an eye to being controlled by a smart speaker like Google Home or Alexa. More, constructing workarounds for the tools that are useful to me while not dealing with the corporations that frighten me is not trivial. I have a lot of tech knowledge that most people in my life don't have; I code fluently in Python, have in fact built things for myself that are controlled by a Raspberry Pi, and a significant part of my current job is setting up automated systems around the lab to make folks' lives just a little bit easier. But troubleshooting this shit and parsing open source routines and figuring out what to do when things go wrong is not trivial, and it takes a lot of energy and focus investment at a time when these things are scarce for me and for lots of other people, on account of that whole "the world is burning" thing.

The technologies here are absolutely invaluable for giving independence and accessibility to a ton of disabled people, including people like me. They have so much potential to open the world up and create support structures to lean on. There is so much good shit here that could be so incredibly useful to people whose cognitive resources are strained to breaking point, or who can't easily physically set a timer, or who are distracted, or or or.

And we can't trust any of it, because it's laced with the threat of these globally overpowered monsters, and who knows what those humans will choose to do with what information.

It's heartbreaking. Just absolutely heartbreaking. And I think I do understand why someone might make the deliberate tradeoff decision to try to use the tools even though the companies are totally unregulated in any meaningful way by our broader societal compacts.
posted by sciatrix at 6:27 AM on December 29, 2021 [29 favorites]
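
For what it's worth, the plant-light scheduling sciatrix describes can be done entirely locally, with no vendor cloud in the loop. A minimal sketch, with made-up on/off times (the relay wiring mentioned in the comments is an assumption about one way to hook it up):

```python
from datetime import time

# Grow-light window; illustrative values only.
LIGHT_ON = time(7, 0)
LIGHT_OFF = time(19, 0)

def lights_should_be_on(now: time) -> bool:
    """Pure schedule check: lights on between LIGHT_ON and LIGHT_OFF."""
    return LIGHT_ON <= now < LIGHT_OFF

# On a Raspberry Pi, a cron job or small loop could feed this decision
# into a GPIO-driven relay (e.g. via the gpiozero library), keeping
# everything on the local network.
```

It's the troubleshooting and glue work around sketches like this, not the logic itself, that makes the DIY route so costly compared to just buying the "Works with Alexa" box.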


Reason 937 I think it's a bad idea to have an internet-connected always-on microphone in the home that sends data about everything it picks up directly to Amazon.
posted by slkinsey at 6:54 AM on December 29, 2021 [1 favorite]


It's heartbreaking. Just absolutely heartbreaking.

I think along these lines a lot. Like, if you told me all the things that automation is capable of when I was fourteen, I would've thought, OMG OMG OMG all the science fiction is coming true! But instead we just have a generalized shittiness like a poison fog over everything cool.

Looking back, the most accurate prediction of this kind of technology was that driver-net in Synners where everyone still ended up in horrible traffic jams.
posted by praemunire at 7:38 AM on December 29, 2021 [6 favorites]


I really don’t mind having a HomePod speaker in the house, but I flatly refuse to buy anything that is capable of phoning home to Amazon. “Works with Alexa” on the box is a huge red flag to me and I will never understand why people happily buy these things, install them everywhere, then get confused or surprised when they realize it is spying on them and providing terrible advice.

My son got a great lesson in the stupidity of AI a while back. He was in the car with me when I used the Siri hands-free to call my brother (who shares a name with my nephew). “Hey Siri, call Seth”. Siri responds “calling Nephew Seth.” “No, Siri, call Brother Seth.” “You mean, Nephew Seth?” “No, Siri, call BROTHER SETH.” “You want me to call Brother Seth.” “Yes.” “OK, calling Nephew Seth.” It went on like this for longer than I care to admit before I gave up and punched the number in manually… my son was just dying laughing. Between this and the HomePod being occasionally unable to find our local public radio stream (confusing it with a similarly-named podcast we have NEVER listened to), he is firmly convinced that AI can be used for voice commands, but expects it to be bad at what we ask it to do.
posted by caution live frogs at 7:44 AM on December 29, 2021 [1 favorite]


Someone made a comment recently on the blue (I think) that I now can't find, comparing self-driving cars to dancing bears -- the point was that people are so caught up by the fact that the bear is dancing that they don't really realize that the bear is dancing very badly. Here, it's very impressive that these devices can, to some extent, process language and give responses, but often they process language and give responses very badly and I think it's easy to be so impressed that you miss that.
posted by an octopus IRL at 7:49 AM on December 29, 2021 [7 favorites]


Alexa may listen, Alexa may speak, but in either case Alexa doesn’t understand anything heard or spoken about.
posted by njohnson23 at 8:08 AM on December 29, 2021 [1 favorite]


“You want me to call Brother Seth.” “Yes.” “OK, calling Nephew Seth.” It went on like this for longer than I care to admit before I gave up and punched the number in manually… my son was just dying laughing.

How Perl development was done in the olden times.
posted by They sucked his brains out! at 9:47 AM on December 29, 2021 [3 favorites]


1. The algorithm they wrote to find 'challenges' found a news article that described this nonsense and put "(Obviously, do NOT attempt this!)" right after the description. The state of the art in natural language processing cannot deal with nuance, sarcasm, or context. I have no problem with it being used for straightforward commands like navigation, timer setting, note taking, etc. Playing specific music tracks can have errors, but these are harmless. Sourcing data from news articles for users to act on is obviously outside of its reach. This feature should never have made it to production.

2. Whoever is managing the Echo product insists on making it worse by always injecting commentary about new features (essentially advertising) into simple commands. When I got my first gen Echo ages ago, it did what we asked and no more. Now it is bombarding us with extra communication. Why is this?
posted by demiurge at 9:49 AM on December 29, 2021 [2 favorites]


“You want me to call Brother Seth.” “Yes.” “OK, calling Nephew Seth.” It went on like this for longer than I care to admit before I gave up and punched the number in manually… my son was just dying laughing.

Not quite the same but similarly frustrating was USAir's voice system being unable to discern between "Asheville" and "Nashville".
posted by achrise at 9:57 AM on December 29, 2021 [1 favorite]


When I got my first gen Echo ages ago, it did what we asked and no more. Now it is bombarding us with extra communication. Why is this?

Because now that they've suckered everyone in and got consumers to pay for the "privilege" of getting Amazon's foot firmly wedged in their door, the company can make it obvious with impunity that the Echo was intended from the first to be an advertising device.
posted by Greg_Ace at 12:08 PM on December 29, 2021 [1 favorite]


When I got my first gen Echo ages ago, it did what we asked and no more. Now it is bombarding us with extra communication. Why is this?

As I understand it, the internal divisions of Amazon are pitted against each other to show profit, like some sort of internal market. That's why Amazon is forever trying to upsell you. So Greg_Ace's comment is, essentially, correct: it's not enough that you buy a device, they also want you to use that device to purchase more Amazon services. The difference between this approach and, say, Apple's, is that the internal market extends to all of the UI and other gubbins that make your device work, which is why everything that Amazon puts out is always a pain in the ass. At some level, you've already bought into the system, so why try harder? Now we know: because they're telling children to stick metal inside a plug socket.
posted by The River Ivel at 12:34 PM on December 29, 2021 [1 favorite]


Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show

Yes Amazon is trying hard to make you use Alexa for more, but in a wonderful show of common sense and dignity, most people don't want to. I hear it's good for turning lights off and on though.
posted by i_am_joe's_spleen at 12:50 PM on December 29, 2021 [3 favorites]


I don't know anything about electricity -- what would happen if you attempted this? House fire? Electrocution? Please give me details so the little imp in my brain saying "learn by doing! you could find out by trying it!" will shut up.
posted by nebulawindphone at 6:41 PM on December 29, 2021 [1 favorite]


This is the same brain imp who has tormented me for decades with curiosity about what exactly the forbidden sweet flavor of lead paint tastes like.
posted by nebulawindphone at 6:43 PM on December 29, 2021 [1 favorite]


Well, we have 230V supply as standard in New Zealand houses. When I was a kid experimenting with crystal radios, my room had a very old fashioned plate in the wall with aerial and ground terminals for radio, and I hooked up my crystal radio to it. My thin gauge aerial wire was insulated with some kind of varnish, not rubber or plastic. The radio wall plate was right next to the power point.

A while later I was fooling around and then went to vacuum my room, and I somehow got the aerial wire across the terminals on the vacuum cleaner plug as I plugged it in.

There was a purple flash, scorch marks, and the smell of vapourised copper. I got a fright but was unharmed, because I was holding the insulated plug and not the (now severed in two) wire. As you might imagine, I a) did not tell my parents and b) never did it again. The scorch mark on the power point plate was there for ever after, as far as I know.

My guess is that the penny would cause massive sparks; it could get finger-burningly hot faster than you can let go, and if you were unlucky enough to provide a good path to ground, you might get electrocuted. Hopefully not fatally; 110V is safer in that regard.

Ah, the 1970s, a different time when 10 year olds could be trusted alone with a soldering iron...
posted by i_am_joe's_spleen at 7:21 PM on December 29, 2021 [1 favorite]


Up until recently we got along fine without having an AI to answer our most random questions.

Pfft. You're clearly just a Luddite who fears change.

As you might imagine, I a) did not tell my parents and b) never did it again. The scorch mark on the power point plate was there for ever after, as far as I know.

The only time I can remember my mother actually smacking me was after I'd found a three inch scrap of blue insulated wire with a solid copper core maybe 1.5mm thick, carefully stripped both ends, bent it into a U shape, then poked it into the power point on the living room skirting board next to the piano, bridging the active and neutral slots.

I must have been, I dunno, seven or eight years old?

Bright blue flash, big bang, big black scorch mark next to the active slot, stink of burnt plastic and taste of copper, blown fuse. No plausible deniability available whatsoever. Quite the day, education-wise.

Didn't stop me, a few years later, removing one of the brass pins from an old bakelite mains plug and using it as a matter of course to make earth connections for my crystal sets by sticking it into the earth slot on the nearest power point. Mum wasn't happy when she spotted me doing that either, but by then I knew enough about mains wiring to explain myself and convince her that she was doing something essentially equivalent every time she plugged in the toaster.

I certainly never plugged my bare brass mains pin with the hookup wire attached into any slot but earth. I clearly had a naive child's faith in the ability of electricians not to fuck up the wiring, but I can never forget what a standard Australian 240V outlet and a bent blue wire can do.
posted by flabdablet at 8:48 PM on December 29, 2021 [3 favorites]


I don't know anything about electricity -- what would happen if you attempted this?

If you did this with a penny and a US plug in a modern US socket, I would expect to hear a bang and see a white flash and a scorch mark, and then the breaker on that circuit should trip.

If you were unfortunate enough to have some personal path to an earth and your penny happened to touch the active pin before it also touched the neutral then you'd get a very unpleasant electric shock, but if the outlet was GFCI protected it would probably not be fatal.
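For a rough sense of why the breaker trips so fast, Ohm's law is all you need. The resistance figure below is an assumed illustrative number (house wiring plus contact resistance), not a measurement:

```python
# Rough Ohm's-law estimate of a dead short across a US 120 V outlet.
# The circuit resistance is an assumed illustrative figure, not measured.
voltage_v = 120.0
circuit_resistance_ohm = 0.05   # assumed: a few tens of milliohms total

fault_current_a = voltage_v / circuit_resistance_ohm
print(fault_current_a)  # 2400.0 -- orders of magnitude past a 15 A breaker
```

Even if the real resistance were ten times higher, the current would still dwarf the breaker's rating, which is why the magnetic trip fires near-instantly.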
posted by flabdablet at 9:03 PM on December 29, 2021 [2 favorites]


Counterpoint: what if Alexa is already so smart it correctly predicted that the child would grow up to be a genocidal dictator? These systems are still being refined, we can’t expect them to solve every problem on the first try.
posted by snofoam at 5:03 AM on December 30, 2021 [2 favorites]


The fuse in the plug would blow.

Alas, the fuse in the plug wouldn't be part of the circuit. And because of the UK's unique obsession with ring mains there'd likely be a 32A breaker, which will allow around 50A (that's 12,000 watts) to flow for up to a minute without tripping. You'd best hope that penny makes a good enough contact to trip the breaker quickly.

Granted you'd have to have started with a really old unsleeved plug and to have inserted the penny from underneath, and to have done so without tripping the RCD any vaguely modern installation will have.

Or just poke a pair of scissors into a deathdaptor.
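Quick back-of-envelope check on those numbers (assuming the old 240 V nominal supply, which is where 12,000 watts comes from):

```python
# A B32 MCB's thermal trip region lets roughly 1.45x its rating
# flow for up to a minute before tripping -- call it 50 A.
breaker_rating_a = 32
sustained_current_a = 50          # approximate, from the trip curve
mains_voltage_v = 240             # UK nominal, pre-harmonisation

power_w = sustained_current_a * mains_voltage_v
print(power_w)  # 12000
```

At today's 230 V nominal it works out to 11,500 W; either way, a minute of that through a penny is plenty to start a fire.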
posted by grahamparks at 5:29 PM on December 30, 2021 [1 favorite]


From what I can tell, those "deathdaptors" are really only as dangerous as standard US outlets. Pfft. You should see how we used to power dryers. Zzzzap.
posted by Kadin2048 at 6:41 PM on December 30, 2021


Metafilter: not just another victory candescence
posted by pee tape at 7:50 PM on December 30, 2021


The fuse in the plug would blow.

This is moot as most of the UK no longer uses 1p coins. To electrocute themselves with, anyway.
posted by They sucked his brains out! at 12:51 PM on December 31, 2021




This thread has been archived and is closed to new comments