Warning Signs
December 14, 2018 1:29 PM

Michelle was 31 and 5'6, the perfect age and the perfect height. She had thick, straight hair, which wasn't a must-have, necessarily, but it was certainly nice. She smiled in every picture, a wide, inviting smile. She had a fine sounding job as a Project Manager and went to a college he had heard of. He messaged "Hey." This wasn’t his best work but it was usually good enough.

A short story by Emily J Smith.
posted by smcg (36 comments total) 52 users marked this as a favorite
 
Good girl, Lucy!
posted by praemunire at 1:46 PM on December 14, 2018 [13 favorites]


Daaaamn.
posted by bunderful at 1:50 PM on December 14, 2018


Love the concept, hate the fact that it would never be profitable enough for a huge tech corporation to develop
posted by Redhush at 1:58 PM on December 14, 2018 [5 favorites]


Had some trouble with the shifting POV. It's an OK idea, but it would have been more interesting if Roy wasn't a total tool.
posted by SPrintF at 2:10 PM on December 14, 2018 [4 favorites]


Love the concept, hate the fact that it would never be profitable enough for a huge tech corporation to develop

You grossly underestimate how many total money-losing projects get built out by big companies.
posted by GuyZero at 2:43 PM on December 14, 2018 [9 favorites]


You grossly underestimate how many total money-losing projects get built out by big companies.

*cough*GOOGLEPLUS*cough*
posted by hanov3r at 2:47 PM on December 14, 2018 [3 favorites]


I remember reading somewhere about the alarming proportion of men who spoke to Siri, Alexa, etc. with vicious misogyny. It does seem like a good warning sign.
posted by Jon_Evil at 2:48 PM on December 14, 2018 [39 favorites]


Yup. Remember Microsoft and the Tay Tweets fiasco?
posted by Melismata at 2:51 PM on December 14, 2018 [2 favorites]


The only thing that can stop a bad power-mad surveillance-state Silicon-Valley capitalist is a good power-mad surveillance-state Silicon-Valley capitalist.
posted by Black Cordelia at 3:04 PM on December 14, 2018 [18 favorites]


"I remember reading somewhere about the alarming proportion of men who spoke to Siri, Alexa, etc. with vicious misogyny. It does seem like a good warning sign."

I've read and seen waaaaaaay too much SciFi then. I'm usually exceedingly polite to Siri and Alexa. All they're programmed to do is help...

(I may also be guilty of anthropomorphizing them as well)
posted by junyatwin at 3:18 PM on December 14, 2018 [1 favorite]


(I may also be guilty of anthropomorphizing them as well)

Yeah, they hate it when you do that.
posted by q*ben at 3:26 PM on December 14, 2018 [31 favorites]


junyatwin: I'd spin it the other way. Vicious misogyny aside, anyone who isn't polite to their AI personal assistant, it seems to me, hasn't watched enough sci-fi.
posted by jurymast at 3:32 PM on December 14, 2018 [23 favorites]


I remember reading somewhere about the alarming proportion of men who spoke to Siri, Alexa, etc. with vicious misogyny

Not to defend misogynist men, but I (a woman) will often yell "oh shut up, alexa" when I don't like the news. Liberating, I recommend it.
posted by The Toad at 3:39 PM on December 14, 2018 [1 favorite]


"Her consent is too important for me to allow you to jeopardize it."
posted by Freelance Demiurge at 4:00 PM on December 14, 2018 [6 favorites]


I love this story. So much.
posted by limeonaire at 4:04 PM on December 14, 2018


I don’t have Alexa, and I don’t use Siri that often other than to ask “what is this” when I hear a song somewhere and don’t know what it is (and I have Siri set to a British man’s voice because somebody here made a case that having all these women as assistants was sexist), but I have a weird emotional thing about “robots” and it’d make me sad to be rude to them. I tend to get emotionally attached to inanimate objects so it’d break my heart to be rude to them, and really hurt my feelings if somebody else was.
posted by gucci mane at 4:27 PM on December 14, 2018 [5 favorites]


And honestly I think if somebody was being mean to one of these devices, which are ostensibly supposed to be mimicking human beings, I’d think it was a red flag for sociopathy, but I’m not a psychiatrist so ¯\_(ツ)_/¯
posted by gucci mane at 4:29 PM on December 14, 2018 [9 favorites]


(I may also be guilty of anthropomorphizing them as well)

So are the men who are doing the viciously misogynistic treatment.
posted by the agents of KAOS at 5:30 PM on December 14, 2018 [24 favorites]


I didn’t read the [more inside] before I RTFA and I was terrified it was a news article about a serial rapist in the Bay Area with names changed to protect the innocent.

Surveillance-state aside (and that’s a whole ‘nother discussion), so happy with the way it ended and more so that it was fiction.
posted by bendy at 5:43 PM on December 14, 2018 [3 favorites]


I enjoyed that rather a lot.
posted by The Underpants Monster at 7:24 PM on December 14, 2018 [1 favorite]


Annoyingly overdone and obvious. The predators are always the guys next door, the nice ones. The ones everyone likes. The ones who know how to abuse plausible deniability. They're you.

But this part did crack me up: "Of course it was all men." One clever fragment. Too obvious, again.
posted by liminal_shadows at 7:50 PM on December 14, 2018 [2 favorites]


Love the concept, hate the fact that it would never be profitable enough for a huge tech corporation to develop

The profit comes from your network of private prisons.

The story is fine, but it takes the easy route by making the guy an obvious villain to promote surveillance technology. It may be cynical of me, but I think a real Lucy the Protector would be pointed outside to run facial recognition on the neighbors, especially ones who "don't look like they live here."
posted by betweenthebars at 8:35 PM on December 14, 2018 [7 favorites]


Now that Alexa is a beloved member of the family — my psychopathic sons seem to have more affection for this being that provides Drake songs and farts on command than they do for the family pet — I am intrigued by the idea of programming her to have a moral compass consistent with my wife’s and my values. Surveillance state aside, I’d appreciate the hell out of her interjecting the occasional “That’s not a very respectful way to talk to your mother” or “You absolutely DID call your brother a doo-doo head before he started crying and your father came running into the room.”

Unfortunately, I think it’s far more likely that Alexa is going to let my health insurance company know that I ordered pizza and beer delivery 4 times last week, or will post to Facebook the embarrassingly high number of times I listen to ZZ Top’s Eliminator, and that’s why I still prefer the family cat to Alexa.
posted by Slarty Bartfast at 9:08 PM on December 14, 2018 [10 favorites]


Apparently there is a mode for Amazon's Alexa that requires you to say "please" or it will make a point of ignoring your request. (Or at least it positively reinforces being polite, I'm not sure.) It's for kids, to try to keep them from becoming sociopaths or wannabe slavelords, I guess.

There is also a mode that replaces most of the voice prompts with chimes, allegedly for brevity, but I suspect also to appeal to the people who find the anthropomorphism uncomfortable.

If enough people think the technology is creepy, it won't go there. Conversely, if enough people think a certain thing represents the future incarnate, then the technology will always go there. Voice assistants seem to be on the razor's edge between the two.
posted by Kadin2048 at 11:53 PM on December 14, 2018 [1 favorite]


Interesting concept and engaging story, but semi-sentient Lucy sounds like a first cousin to Minority Report techniques.

Is an eavesdropping I'll Tell Mom! device inside your home a legal hornet's nest?
Amazon’s “Alexa”: An At-home Dream or Free-speech Nightmare?
University of Miami Law Review, 4/2/2017
...
Among other values, the First Amendment of the United States Constitution protects freedom of speech. The Supreme Court has struggled to define what constitutes protected speech.

“At the heart of that First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” wrote Amazon’s legal team in its Bates case motion. Amazon later explained that the protections for Alexa were twofold: “The responses may contain expressive material, such as a podcast, an audiobook, or music requested by the user. Second, the response itself constitutes Amazon’s First Amendment-protected speech.”

Citing a previous case involving Google, Amazon said that Alexa’s decision about what information to include in a response, like the ranking of search results, was a “constitutionally protected opinion” and was therefore entitled to “full constitutional protection.” In the Google case, the court noted that the company’s search results reflect “individual editorial choices” about both opinions and facts—two categories of speech that enjoy full First Amendment protection. By “select[ing] what information it presents and how it presents it,” Google was exercising classic free speech, not unlike a newspaper editor might.

Amazon also cited a 2010 ruling in its favor, in which the company was joined by the ACLU and argued against the North Carolina Department of Revenue’s attempt to acquire customer purchase histories. “The fear of government tracking and censoring one’s reading, listening and viewing choices chills the exercise of First Amendment rights.”

Another argument aiding Amazon is that companies are undeniably accorded free speech rights. “Of course, Amazon itself has free speech rights,” noted Toni Massaro, professor at the University of Arizona College of Law, in an interview with FORBES. “As long as Alexa can be seen as Amazon, there is a protected speaker here.”
If your private (but potentially incriminating) speech is passed on to a corporation by its own device, who owns and protects your speech at that point?

If significant risk can be perceived by a listening device's algorithms — with a not insignificant risk of error (see autonomous vehicles) — should the parent corporation notify law enforcement?
posted by cenoxo at 1:10 AM on December 15, 2018


my psychopathic sons seem to have more affection for this being that provides Drake songs and farts on command than they do for the family pet

If it's not too Black Mirror, maybe your kids treat Alexa as a family pet.
posted by rhizome at 2:21 AM on December 15, 2018 [2 favorites]


Well, that's one way of making sure the creeper sexual harasser guy at your company doesn't countersue...
posted by jonp72 at 2:05 PM on December 15, 2018 [1 favorite]


Just to clarify, is there a connection between the storyline where the female boss kicks him off the development team and the one where Lucy eventually stops his date rape? Or are they just two parallel threads of him getting his ass handed to him?
posted by Omnomnom at 6:23 AM on December 17, 2018


I was torn between interpreting that (a) Lucy-reported warning signs told the boss that he was a human resources risk so she pre-emptively got him off her team or (b) the female boss correctly observed that he would object to the new algorithms preventing assault and removed him so he wouldn't sabotage feature development in that direction.

I put courtesy to robots in the same category as using my turn signal in an empty parking lot. It's not necessary, but it's easier to leave some behaviors as reflex. If I make using my signal a conditional decision instead of a reflex, then I risk forgetting when it does matter. Similarly, keeping baseline courtesy scripted keeps me from being "that person" when I'm exhausted but still have to deal with people.
posted by Karmakaze at 6:40 AM on December 17, 2018 [3 favorites]


The line

"She opened her file of select employees, making sure he had received the update."

says to me that she could see, based on his creepy behavior, that he could assault someone in the future, so she not only kicked him off the team but also secretly updated his unit so that it would call the police.
posted by Melismata at 7:46 AM on December 17, 2018


ooooooh! thank you Melismata, that makes sense!
posted by Omnomnom at 8:46 AM on December 17, 2018 [1 favorite]


says to me that she could see, based on his creepy behavior, that he could assault someone in the future, so she not only kicked him off the team but also secretly updated his unit so that it would call the police.

So I don't think it's a bad story and everyone likes bad-guy schadenfreude but:

a) an employee having access to a specific user's data without it being disclosed to them first
and
b) the unit autonomously calling 911 without the owner opting in first

are such gross privacy violations that it really overshadows the rest of the story. The next chapter of this story is where the device calls ICE because it hears parents speaking Spanish to children in a household.
posted by GuyZero at 9:28 AM on December 17, 2018


What? I've absolutely run internal beta programs where I manually added users to a whitelist before. The story explicitly tells you that he knows he is getting employee-only updates. Where are you seeing any privacy violation?
posted by the agents of KAOS at 11:23 AM on December 17, 2018


If the manager is able to see any details of the user's device beyond simply knowing they're on a beta whitelist, it's a problem. If it's supposed to just indicate that he's on a list with a bunch of other people, yeah, that's a thing that actually happens. I got the vibe that he thought he was getting an internal build but that the manager put him on an even more specific build, which is pretty questionable. Employees stalking and harassing other employees is a thing that happens.
posted by GuyZero at 11:26 AM on December 17, 2018


I got the vibe that he thought he was getting an internal build but that the manager put him on an even more specific build, which is pretty questionable. Employees stalking and harassing other employees is a thing that happens.

Well, yea. And "I manage our beta programs and assign people to different internal pools" is something that could be exploited for evil, but hey, so is "I manage our desk assignments".
posted by the agents of KAOS at 11:34 AM on December 17, 2018


Yeah there is a certain point where eliminating the possibility of someone taking advantage of their position for ill just isn't practical; you have to decide when you're going to trust people to do the right thing. That they could hypothetically do something crappy is worth thinking about in terms of who gets that power, but it's not necessarily something that's worth eliminating in every instance.
posted by Kadin2048 at 12:56 PM on December 17, 2018



