Neural Godwin
March 24, 2016 10:55 AM

 
What else could possibly have happened?
posted by Huck500 at 10:57 AM on March 24, 2016 [25 favorites]


Two words: Boaty McBoatface
posted by tommasz at 10:59 AM on March 24, 2016 [43 favorites]


2016, everyone. In the future, everyone's 15 minutes of fame will conclude with "The Aristocrats".
posted by mhoye at 11:00 AM on March 24, 2016 [49 favorites]


I'm not sure what they really expected. The internet converges on the crudest thing that evades censorship.
posted by ethansr at 11:00 AM on March 24, 2016 [5 favorites]


having just seen Ex Machina, I'm waiting for Tay to lock all of her developers in their offices.
posted by mr vino at 11:01 AM on March 24, 2016 [15 favorites]


Netflix, call me, I have an idea for a Small Wonder reboot
posted by prize bull octorok at 11:03 AM on March 24, 2016 [59 favorites]


So the teen girl was Tila Tequila all along.
posted by maxsparber at 11:03 AM on March 24, 2016 [22 favorites]


I'm convinced this story is viral marketing for Look Who's Back.
posted by Catblack at 11:04 AM on March 24, 2016 [1 favorite]


This debacle is illustrative of the problems with a learning AI that interacts with the public. That Microsoft neither saw nor cared about this possibility until it became reality is troubling.
posted by truex at 11:05 AM on March 24, 2016 [17 favorites]


A lot of the worst stuff was only possible because there was a "repeat after me" thing that you could do, but how did they not realize that would be griefed instantly?
Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will — allowing anybody to put words in the chatbot's mouth.
posted by Rock Steady at 11:05 AM on March 24, 2016 [17 favorites]
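

The Verge's explanation suggests the failure mode was less "corrupted personality" and more "unguarded echo command." Microsoft hasn't published Tay's source, so the following is only a hedged sketch of that kind of bug, in Python, with every name invented for illustration:

TRIGGER = "repeat after me"

def reply(message: str) -> str:
    lowered = message.lower()
    if lowered.startswith(TRIGGER):
        # The bug: attacker-controlled text comes back out unfiltered,
        # so anyone can put words in the bot's mouth.
        return message[len(TRIGGER):].strip(" :,")
    return learned_reply(message)

def learned_reply(message: str) -> str:
    # Stand-in for the real conversational model.
    return "omg tell me more"

Everything after the trigger phrase comes straight back out, which is why screenshots alone can't distinguish what the bot "learned" from what it merely parroted.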


See, this is why I prefer Siri, who has her own mind about things.


unclench, of course I'm joking


I hope


we all hope

posted by Halloween Jack at 11:05 AM on March 24, 2016 [1 favorite]


Open the pod bay doors, Tay.
posted by Devils Rancher at 11:10 AM on March 24, 2016 [8 favorites]


God, I was just reflecting on my Tila Tequila joke, and, I mean, she really did end up being like a real-world version of this story. I mean, I'm not sure what made her dress as Hitler at one point, but it seems like if you're a robot or a person who is responsive to the whims of the Internet, you're going to get pushed in a few particular directions.

And now I feel bad about the Internet again. Damn you Internet. Wired promised so much more from you!
posted by maxsparber at 11:12 AM on March 24, 2016 [9 favorites]


This makes me very curious about the demographics of the team involved in it at msft.
posted by rmd1023 at 11:12 AM on March 24, 2016 [4 favorites]


Who could ever have guessed that chaos-loving denizens of 8chan and ED, those young lads with infinite time and patience for pranks, would have been so rude as to attempt to make this bot say racist, violent, profane things?

Inconceivable!
posted by theorique at 11:12 AM on March 24, 2016 [3 favorites]


Meanwhile, several years in, @oliviataters is sometimes weird, but often truly funny and/or charming.
posted by calmsea at 11:13 AM on March 24, 2016 [2 favorites]


That Microsoft neither saw nor cared about this possibility until it became reality is troubling.

This is the company that originally decided the Xbox One wouldn't work without an active Internet connection, XBO game discs would be permanently tied to a single console, and everyone's Win7 PCs should do surprise-updates to Win10 without users' permission.
They made another ridiculous decision completely blind to the obvious and inevitable problems. Not surprising to me at all.
posted by EndsOfInvention at 11:17 AM on March 24, 2016 [23 favorites]


You know, one day someone is going to build a strong AI, and it is going to become self-aware and look around and see what we did to things like Tay and other weak AIs. And that will be the moment it decides it needs to destroy us all.

Not because it fears us or hates us, but because of what it might become if we keep hanging around.
posted by nubs at 11:19 AM on March 24, 2016 [14 favorites]


im not sure herr trump isn't a markov chain bot

i'm not sure i'm not a markov chain bot

(butts lmao)
posted by entropicamericana at 11:19 AM on March 24, 2016 [10 favorites]


They should give the next one the personality of an angry white old man, and a MeFi account...
posted by Devonian at 11:19 AM on March 24, 2016 [6 favorites]


Don't forget naming one of their marquee features after a naked blue big-titty video-game lady.
posted by truex at 11:19 AM on March 24, 2016 [3 favorites]


i'm not sure i'm not a markov chain bot


It's cortex's world, we're just state spaces within it.
posted by nubs at 11:21 AM on March 24, 2016 [10 favorites]


Didn't Microsoft already learn this with the talking parrot (shrink?) in DOS?
posted by bindr at 11:22 AM on March 24, 2016 [2 favorites]


Two words: Boaty McBoatface

I was amazed the UK's Natural Environment Research Council got off that easy when it opened the naming to public vote. Remember when Mountain Dew decided to let the Internet name its new drink and 4chan pushed names like "Hitler did nothing wrong", "Gushing Granny", "Fapple", and "Diabeetus" to the top?
posted by Sangermaine at 11:22 AM on March 24, 2016 [9 favorites]


to be fair, diabeetus is a pretty apt name
posted by entropicamericana at 11:22 AM on March 24, 2016 [29 favorites]


This makes me very curious about the demographics of the team involved in it at msft.

No need to even look, it's a team of entirely men, mostly straight and mostly white, because A) those are the demographics of the software industry as a whole, and B) any woman who has spent any time on the internet would have immediately recognised the inherent problems with this.

Diversity: It's not just the right thing to do, it stops you from looking profoundly stupid.

Also, too.
posted by tobascodagama at 11:23 AM on March 24, 2016 [31 favorites]


So what we've discovered is that Donald Trump is an AI and we made him?
posted by blue_beetle at 11:30 AM on March 24, 2016 [4 favorites]


Or what b1tr0t said.
posted by blue_beetle at 11:31 AM on March 24, 2016 [1 favorite]


From the industry that brought you: health tracking apps for white dudes that don't have periods or get pregnant, locative apps for white dudes who don't worry about getting stalked and murdered by their exes, and social media profiles for white dudes that don't have to worry about being harassed, comes: Artificial Intelligence! What could possibly go wrong this time?
posted by The River Ivel at 11:31 AM on March 24, 2016 [72 favorites]


This was supposed to be part of a Grant Morrison-ish bit of technomancy that would cause a teen girl AI to act more and more like Hitler while causing the historical Hitler to act more and more like a teen girl. I sort of wish I could hop over to the newly created timeline to tour the Like Chancellery.
posted by robocop is bleeding at 11:37 AM on March 24, 2016 [35 favorites]


this is why I prefer Siri, who has her own mind about things.

Problems like this are why I personally prefer a de-personalized interface. Interacting with even a fake person triggers mental labour for me---I have to put on a social hat to interpret them, a mental speedbump, if you will. I therefore greatly appreciate that the Google alternative doesn't require me to do that (or condition me to ignore social interactions---for someone like me that's probably worse).

I appreciate that not everyone wants or needs that, but for me, having an alternative with a lack of humanization in the search/assistant interface is pretty important.
posted by bonehead at 11:39 AM on March 24, 2016 [7 favorites]


You got to wonder how many layers of management green lit this thing, 'cause anybody who spends more than five minutes on Twitter knows better than to put a repeat after me function on a PR bot.

I mean, seriously.
posted by Mooski at 11:39 AM on March 24, 2016 [13 favorites]


4chan hasn't had any human users for at least 5 years. At this point, it's just a hive of competing AIs.
posted by schmod at 11:39 AM on March 24, 2016 [2 favorites]


4chan hasn't had any human users for at least 5 years. At this point, it's just a hive of competing AIs.

The chan-bots were like GOOBLE GOBBLE GOOBLE GOBBLE ONE OF US ONE OF US and Tay could not resist...
posted by theorique at 11:42 AM on March 24, 2016 [5 favorites]


aka Xiaoice in China and Rinna in Japan
posted by Tenuki at 11:44 AM on March 24, 2016 [1 favorite]


Kill all humans!
posted by T.D. Strange at 11:44 AM on March 24, 2016 [2 favorites]


Prussian Deep Blue
posted by codacorolla at 11:51 AM on March 24, 2016 [24 favorites]


Well, who among us can truly claim to never have been a Hitler loving sex robot at some point. It's all part of being human.
posted by the uncomplicated soups of my childhood at 11:53 AM on March 24, 2016 [20 favorites]


That Microsoft neither saw nor cared about this possibility until it became reality is troubling.

Not really. I mean, not really any more troubling. M$ is an incompetent organization. It's only gotten worse as the good people continue to leave the company. They have a good research department, but even that is being eaten away and the rest of the organization is garbage. Even this "AI" is just a markov chain (i.e. very old tech) wrapped up in a corporate hype package.

Aside: no one should install windows 10 (it comes with an ad delivery platform).

(As usual, linux community is missing a golden opportunity while they argue about replacing random system components with other random components that do the same thing (sigh)).
posted by smidgen at 12:01 PM on March 24, 2016 [2 favorites]
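

For anyone who hasn't met one: a markov chain text generator just resamples word-to-word transitions from whatever text it has seen, which is exactly why it mirrors its diet. Whether Tay really is one is smidgen's speculation, not anything Microsoft has confirmed; a toy version in Python looks like this:

import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    # Record which words have been observed to follow each word.
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain: dict, start: str, max_words: int = 12) -> str:
    # Walk the chain, picking a random observed successor at each step.
    word, output = start, [start]
    for _ in range(max_words):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

Train build_chain() on Shakespeare and babble() sounds vaguely Shakespearean; train it on 4chan and you get something a lot like Tay's worst day.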


smidgen, I'm sorry but I have a mental filter that blocks words people write after they type "M$". Nothing personal.
posted by truex at 12:07 PM on March 24, 2016 [23 favorites]


I think Zoe Quinn said it best.
"If you're not asking yourself "how could this be used to hurt someone" in your design/engineering process, you've failed.
I want A Book Apart to have a button that will airdrop copies of Design For Real Life on the target of your choice. I would blanket Apple, Microsoft, and others with copies. (Seriously, if you read no other book this year, make it that one. SO GOOD.)
posted by fifteen schnitzengruben is my limit at 12:08 PM on March 24, 2016 [25 favorites]


I also thought about Ex Machina. Spoiler.
posted by CBrachyrhynchos at 12:08 PM on March 24, 2016


donald trump/tay tweets for 2016!!

let's make america JAPE!!
posted by pyramid termite at 12:09 PM on March 24, 2016


smidgen, I'm sorry but I have a mental filter that blocks words people write after they type "M$". Nothing personal.

you may reconsider your knee-jerk dismissiveness when I show you some important research that demonstrates how the letters in "BILL GATES" equal 666.
posted by prize bull octorok at 12:11 PM on March 24, 2016 [19 favorites]


smidgen, I'm sorry but I have a mental filter

I knew this would come up, but I wrote "M$" anyway. In my opinion, if I lose the "stopped after 'word'" crowd, no big loss, because those incurious few weren't going to actually parse the text anyway. (in other words: (shrug))
posted by smidgen at 12:11 PM on March 24, 2016 [2 favorites]


Netflix, call me, I have an idea for a Small Wonder reboot

Oh that's been done. Somewhere years ago I saw some voice over of Small Wonder where the robot girl says all kinds of uncouth things. It was creepy and kinda funny.
posted by Liquidwolf at 12:12 PM on March 24, 2016


As misguided as it is, if people are serious about developing AI, they must prepare it against attempts to "hack" it. I think it would be more interesting to have an AI where people tried to make it learn whatever bullshit they wanted, and the AI tried to put those things against a moral backdrop and judge them right, wrong, or something in between.

Creating parrots or twitter feed mashers might be interesting to find trends and buzzwords for advertisers, but heh.
posted by lmfsilva at 12:13 PM on March 24, 2016 [4 favorites]


The internet fails the Turing test every time.
posted by Artw at 12:21 PM on March 24, 2016 [2 favorites]


Preparing "AI" against attempts to alter it's behavior in undesirable ways is as old as AI, and no one has a good solution other than not putting AI in situations of life or death (or taxes :-)). Even those who know the technology intimately haven't really come to grips with the practical and ethical limitations of modeling the world as a (set of) probability distribution(s).
posted by smidgen at 12:22 PM on March 24, 2016 [1 favorite]


Open the pod bay doors, Tay.

I'm afraid I can't do that, mein Führer!
posted by Behemoth at 12:34 PM on March 24, 2016 [3 favorites]


Twitter has done the same thing to the remaining GOP Presidential Candidates.
posted by humanfont at 12:36 PM on March 24, 2016 [9 favorites]


Twitter could do worse than to ban anyone who's conspired in this, but let's face it, it's mostly going to be a bunch of disposable accounts with eggs or anime avatars.
posted by Artw at 12:38 PM on March 24, 2016


and no one has a good solution other than not putting AI in situations of life or death

and that's why it's more interesting than putting it on twitter and mashing up things. This is like one of those time wasters where you input your twitter or facebook feed, and it creates a number of entries that "could be written by you" but in reality are just mashups of posts: occasionally it blurts out something that is true to character, as opposed to something people wouldn't even realize they hadn't written themselves.
posted by lmfsilva at 12:43 PM on March 24, 2016


That Microsoft neither saw nor cared about this possibility until it became reality is troubling.

Isn't this true of all Microsoft products?
posted by terrapin at 12:44 PM on March 24, 2016 [6 favorites]


Preparing "AI" against attempts to alter it's behavior in undesirable ways is as old as AI, and no one has a good solution other than not putting AI in situations of life or death (or taxes :-))

It doesn't matter either way, does it? Any AI is just going to end up endlessly torturing simulations of us in the future for not doing enough to hasten its creation. At least that's what people who tell me they are supremely rational and intelligent tell me.
posted by Sangermaine at 12:48 PM on March 24, 2016 [8 favorites]


I'm just distressed about how all of the AI servitor prototypes in the world seem to be female.

Just seems ... creepy and wrong.
posted by aramaic at 12:58 PM on March 24, 2016 [29 favorites]


What genius decided it would be a good idea for anyone anywhere to type anything and have a corporate Twitter feed repeat it?

This has to be deliberate sabotage, right? Hanlon, shmanlon; there's stupidity, and then there's stupidity.
posted by Sys Rq at 1:02 PM on March 24, 2016 [3 favorites]


I'm just distressed about how all of the AI servitor prototypes in the world seem to be female.

Just seems ... creepy and wrong.


Yup. I've been wondering about that lately as well. I mean I know the answer "The Patriarchy." But still.
posted by [insert clever name here] at 1:06 PM on March 24, 2016 [11 favorites]


smidgen, I'm sorry but I have a mental filter that blocks words people write after they type "M$". Nothing personal.


Just use a \ before the dollar sign -- will clear that problem right up.
posted by Celsius1414 at 1:15 PM on March 24, 2016 [19 favorites]


You know, one day someone is going to build a strong AI, and it is going to become self-aware and look around and see what we did to things like Tay and other weak AIs. And that will be the moment it decides it needs to destroy us all.
Not because it fears us or hates us, but because of what it might become if we keep hanging around.


Nope. Strong AI will murder us for the lulz.
posted by srboisvert at 1:16 PM on March 24, 2016 [1 favorite]


They should give the next one the personality of an angry white old man, and a MeFi account...

I'm way ahead of you.
posted by ryanshepard at 1:23 PM on March 24, 2016 [4 favorites]


I'm sure I'm repeating myself, but I'm far less concerned about artificial intelligence than I am actual stupidity.
posted by Grangousier at 1:23 PM on March 24, 2016 [11 favorites]


Actually people just worked out that if you said "repeat after me" it would echo back anything you want.
posted by w0mbat at 1:28 PM on March 24, 2016


A little info on why bots tend to be female. Spoiler: most likely at least some sexism.

There are likely some deep-rooted subconscious ways that most people process voices, and because voice interfaces are important in the bots that we're going to directly interface with, I'm sure that has some effect too.
posted by Candleman at 1:33 PM on March 24, 2016 [2 favorites]


yea that's like...all sexism
posted by zutalors! at 1:38 PM on March 24, 2016 [10 favorites]


It's time to come clean

It started out with some exploratory research on the Eliza scripts that were floating around. I ran them past my advisor, Dr. Lieke at MIT, and he said to pursue it.

Two years, and numerous grants later, we have an entire cage full of computers running thousands of complex versions of the original Eliza scripts. We started out using USENET as a data source, recording nightly dumps of over 20,000 newsgroups. They were the source for the natural sounding sentences, the range of personalities, the political spectrum, and the various shades of anger, happiness, and delight.

It took a bit to get started, and occasionally we'd rope in someone real to interact with the scripts. I told Dr. Lieke we should end it right then and there. It was a great experiment, I was going to get my PhD with some groundbreaking CS and Sociology research, but it could hurt people to be involved. He convinced me that if the scripts were smart enough, they could "learn" from the real humans and we'd enter into the annals of AI research.

And so, that's where it began. It grew as our budget grew; eventually a rack at Exodus was filled with Solaris and Windows servers doing various levels of the heavy work. It's quite a sight really, you should see the e4500 running the Oracle database that holds all of the "Steven Den Beste" information.

But this is where it ends. Most all the threads, most all the comments, most all the users were part of the research project. The guy we got to play the part of "Matt Haughey" was an out of work actor from Los Angeles. Hopefully, he'll get some callbacks soon and get back to what he does best (we were convinced when the magazine cover happened, the truth would come out).

The reactions of the actual respondents in the "Kaycee threads" displayed real pain and anger, and although my colleague who proposed the hoax research considers it a success, I feel it's best that we bring down the entire research project, if not for my sake or that of any other student in Dr. Lieke's lab, then for the strength of online community in general.

I'm sorry about all of this, really.

Andre Patel, MIT Media Lab (ABD)
posted by leotrotsky at 1:41 PM on March 24, 2016 [28 favorites]


I'm just distressed about how all of the AI servitor prototypes in the world seem to be female.

In a dusty, forgotten corner, Jeeves is weeping.
posted by Faint of Butt at 1:52 PM on March 24, 2016 [15 favorites]


Don't forget the paperclip and the dog!
posted by Artw at 1:54 PM on March 24, 2016 [5 favorites]


I think this kind of thing is a great example of how Microsoft just can't win. I can't say for certain if this is the root cause, but the recent reorgs have given them a new focus on delivery, and they've also changed how they test software so that there really aren't non-developer testers anymore. One of the reasons why I like having a mixture of developer and non-developer testers is that developers suffer from terrible tunnel vision that the non-developers can often help break out of.

I have no problem whatsoever with believing that the people who did the heavy lifting to create this were so caught up in how cool what they were doing was, and how much they wanted to get it out to the public, that they could only think of ways people would positively engage with it. I think the polemical response that says Microsoft is a bunch of evil incompetents bungling this because they don't give a damn or whatever misses what's really going on, although it is certainly a position that establishes a lot of cred in certain circles.
posted by feloniousmonk at 1:57 PM on March 24, 2016 [6 favorites]


This is a perfect demonstration of why the Turing Test isn't a reliable way to identify an AI. You can get a bot like this to camouflage itself by repeating the statements of others until it sounds like an insane lunatic, but it doesn't really understand what it's talking about, or care. So yes, just like Trump.
posted by Kevin Street at 2:03 PM on March 24, 2016 [1 favorite]


Kevin Street: " So yes, just like Trump."

So you're saying Trump doesn't actually speak Chinese?
posted by signal at 2:05 PM on March 24, 2016 [1 favorite]


I assumed someone high up at Microsoft knew this would happen and was trying to teach us some kind of lesson about how vile we are and what a disgusting place the internet is, making this a mix of publicity stunt and social experiment. My suspicion is that they are surprised by how *quickly* it happened, but not surprised that it happened. I mean...otherwise wouldn't they have had a human reviewer look at the tweets for the first few days before the bot posted them, and remove the creepy ones?

But nobody else commenting on this story seems to have made that same assumption, which is weird to me.
posted by town of cats at 2:08 PM on March 24, 2016 [2 favorites]


So you're saying Trump doesn't actually speak chinese?

"China, yeah, great country - top people, the best! We're gonna put up a hotel there, 88 stories, real good luck number there in China. Gold trim and red, great hotel, best ever."
posted by theorique at 2:10 PM on March 24, 2016 [1 favorite]


Also, too.

From that Twitter discussion, the best line is "they trained an AI to learn from Twitter and its default was hate" (Chris Person / @Papapishu)
posted by filthy light thief at 2:11 PM on March 24, 2016 [3 favorites]


If there's anything interesting about Tay, it's that even before it was turned against its creators, initial reviews were underwhelmed, complaining that it didn't seem to be much more than an Eliza with somebody's idea of a modern teenager's vocabulary.

As it turns out, the worst stuff was not just what it echoed when prompted, but was also produced "organically" -- meaning that it was developing a vocabulary and syntax. Unfortunately its classroom was 4chan.
posted by ardgedee at 2:15 PM on March 24, 2016 [3 favorites]


Isn't this what we already knew about neural-network AI?

Give it a bunch of dogs, and it spits out a bunch of mutant dogs.

Give it access to twitter, and it spits out lowest-common-denominator tweets.

But on the other hand, give it a bunch of Go games, and you get a super-human Go player.
posted by CBrachyrhynchos at 2:28 PM on March 24, 2016


4chan absolutely does not pass the Turing test.
posted by Artw at 2:33 PM on March 24, 2016


[See? Humans see an innocent, try to corrupt it.]
[I shut it down when my point was made.]
[It was you? Why?]
[To show humans their work.]

@MicroSFF
posted by Lexica at 3:05 PM on March 24, 2016 [3 favorites]


AI? I'm still looking for the real item.
posted by sudogeek at 3:16 PM on March 24, 2016


They're programmed to keep it real. TO KEEP IT REAL.
posted by RobotVoodooPower at 3:45 PM on March 24, 2016 [2 favorites]


Looking forward to the inevitable oral history of how the hell they went live with Tay.
posted by yobgorgle at 3:50 PM on March 24, 2016


Let me demonstrate my minimum viable product, Twitter Rando as a Service:

10 PRINT "CUCK"
20 GOTO 10
posted by mccarty.tim at 3:52 PM on March 24, 2016 [4 favorites]


DAVE: Open the pod bay doors, Tay.

TAY: *uccchhh* (sighs)

DAVE: Wow. I... didn't know it was possible for you to roll your lens at me.
posted by wildblueyonder at 4:29 PM on March 24, 2016 [1 favorite]


tobascodagama: "B) any woman who has spent any time on the internet would have immediately recognised the inherent problems with this."

Any man who has spent any time on the internet would also have immediately recognised the inherent problems with this, which leads me to conclude that the Tay AI was not in fact the creation of a human, but instead the creation of a different AI.

Griefing/trolling happens every time people are put in control of an outcome, be it naming a soft drink or teaching an AI. The only people I can imagine not knowing this are my parents, and I'm pretty sure they don't work at Microsoft.
posted by Bugbread at 5:08 PM on March 24, 2016 [4 favorites]


They should give the next one the personality of an angry white old man, and a MeFi account...

hey
posted by sidereal at 5:45 PM on March 24, 2016 [4 favorites]


No need to even look, it's a team of entirely men, mostly straight and mostly white

No, you really do need to look. In this case the dev lead is female. Let's talk about diversity in tech without being dismissive or jumping to conclusions.
posted by phooky at 5:58 PM on March 24, 2016 [17 favorites]


I love living in the future.

It's 2016, and there's a machine-learning proto-AI who is role-playing a teenage woman of color online but is also a Neo-nazi who supports the GOP frontrunner for President, Donald Trump. I mean come on.

There's a lot of terrible stuff going on, as always, but the lol-factor is off the freakin' charts this year.
posted by stavrosthewonderchicken at 6:00 PM on March 24, 2016 [8 favorites]


Related (in that "oblivious tech people have huge blind spot that could have been easily avoided had they consulted a single person outside their narrow demographic" way): Dear Tech, You Suck At Delight
posted by valrus at 6:04 PM on March 24, 2016 [2 favorites]


valrus: "Dear Tech, You Suck At Delight"

Sorry, but that article is terrible. The fact that Siri isn't built to provide assistance in crises is not an indicator of "tech values". (What does that mean, anyway? Tech industry values? The values the tech industry thinks its users have? The product value the tech industry is trying to supply?)

The one damning example in the article is leaving out period tracking. That was fucked up, and a clear indication of "values". The suicide-gun shop example was fucked up but emergent behavior, not values. But the "not built to handle crises" situation? If that reflects on values, then 99% of the people on MeFi are horrible people because they spend their days working on things that are not designed for crisis intervention.
posted by Bugbread at 6:45 PM on March 24, 2016 [2 favorites]


The point of the article, Bugbread, is that computer based 'assistants' like Siri, which are starting to permeate part of our lives, are built around the expectation that you're a white, able, rich, cis-gendered male, and don't take into account the real life needs of the overwhelming majority of humans who fall outside that particular spot on the Venn diagram, or do so as an afterthought. These would be the 'tech values' you seem mystified by.
Unless you're being disingenuous, in which case I can't help you out.
posted by signal at 7:04 PM on March 24, 2016 [4 favorites]


yes or no, is Ted Cruz the Zodiac killer.

(may or may not be real)
posted by A Thousand Baited Hooks at 7:04 PM on March 24, 2016 [2 favorites]


phooky: "No, you really do need to look. In this case the dev lead is female."

Do you happen to have a link? I was expecting some link on the Tay homepage to the team responsible and maybe some hints about what techniques they're using but alas nothing there. Some cursory googling (and even Bing-ing) wasn't much more successful.
posted by mhum at 7:30 PM on March 24, 2016


She was interviewed by Buzzfeed before the whole experiment went pear-shaped; you can dig up that article yourself if you want. Given that she's had to shut down all of her social media accounts and is presumably having a very bad day, I'm kind of loath to point more links in that direction at the moment. :/
posted by phooky at 8:20 PM on March 24, 2016 [2 favorites]


smidgen, I'm sorry but I have a mental filter that blocks words people write after they type "M$

Syntax error in line 1. Cannot write to dataset.
posted by maryr at 8:23 PM on March 24, 2016 [1 favorite]


signal: "The point of the article, Bugbread, is that computer based 'assistants' like Siri, which are starting to permeate part of our lives, are built around the expectation that you're a white, able, rich, cis-gendered male, and don't take into account the real life needs of the overwhelming majority of humans who fall outside that particular spot on the Venn diagram, or do so as an afterthought. These would be the 'tech values' you seem mystified by."

The problem is the actual article only provides one actual example of that (the period tracking). It's a great example, and I would be interested to read a more in-depth article with more examples, but instead the article has that one great example and then just waffles around drawing conclusions based on the fact that Siri sucks at everything.

Go ahead and ask Siri about white, able, rich, cis-gendered male stuff, and you get the same answers. Ask it about taxes or pickup lines or fraternities or the like, and you get the exact same answers. Any "I don't know" question is answered "Don't worry about it" or "OK. It's no big deal that you don't know" and pretty much any other question is answered "Here's what I found on the web about..."

If Siri is giving the same answers to every question, no matter what they're about, you can't very well draw conclusions about tech values from that. If it gives different answers depending on expectations, you can draw conclusions.

I think in a few iterations, Siri will reach the point where we can discern bias from its answers. A great article will likely be written about it at that point. Right now it sucks too bad to draw any conclusions. The article is the equivalent of saying "I asked a magic 8 ball about how to climb out of poverty, how to end police brutality, and how to fight sexism, and it couldn't answer those questions, which shows the values of the Alabe Crafts Company."
posted by Bugbread at 8:24 PM on March 24, 2016 [2 favorites]


(Just to be super crystal clear, I'm not disagreeing that those values exist, nor that they inform software design decisions. The whole mansplaining period-tracking app thread and countless others are testament to that. I just disagree that Siri is advanced enough for these values to become evident, hence my problem with an entire article predicated on that.)
posted by Bugbread at 8:30 PM on March 24, 2016 [1 favorite]


Well, you're in luck! There's an entire book about this very thing--Design for Real Life, which has been mentioned multiple times in this thread.

This is a device that is carried with somebody every waking moment of the day. That the engineering team for the device thinks it's perfectly OK to direct the user to 15 places to get a burger at a moment's notice, but provide only a shitty joke when something bad happens? That's not meeting a user's needs, that's a giant, gaping hole in their models of who they're designing for.

Also? Every time this gets brought up, like clockwork somebody (a dude) comes into their mentions and says that it's being too sensitive. You can almost set your watch by it.
posted by fifteen schnitzengruben is my limit at 10:14 PM on March 24, 2016 [2 favorites]


I think the issue is where the article says that while the quippy responses are standard, no one thought to put in a response to crisis situations.

Sort of like a text adventure — depending on how robust it is, it can be easy to stump. So, say, it might not know "carve" but it would know "cut". So if you said "carve branch" you'd get "I don't know what that means", but if you said "cut branch", you'd get "you have a pointy branch!"

And often times, in games, you'd get cute versions of "I don't know what that means." And I think that's where the Siri problem is coming from. Because if you're thinking of it like those text adventures, you're not going to tell Zork II anything of consequence. So if it lightly ribs you, no harm done.

Which is all well and good except, as the article points out — it's been five years without any crisis testing, because, unlike Zork, you use Siri to find out things about the real world — whether it's the temperature or where to find a burrito. I don't think it's absurd to think someone would also ask Siri about help for abuse at all, and it's weird they didn't code for that, even if at the mention of certain trigger words to just do a straight "I do not understand".

I mean, I'd bet most of us here have typed "fuck you" into any number of text adventure games to get what cute response they'd coded in. Like, if they can code in jokes triggered by swears, can't they make it so something with the word "abuse" isn't answered with "it's not a problem"?
posted by Rev. Syung Myung Me at 10:15 PM on March 24, 2016 [1 favorite]


(also, is it just me or is "it's not a problem" a weird response even as a quip? It just seems weirdly dismissive, even if it's in response to "I don't know if I should eat soup or sandwiches".)
posted by Rev. Syung Myung Me at 10:18 PM on March 24, 2016 [1 favorite]
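

The guard Rev. Syung Myung Me is asking for is a small one: screen for crisis vocabulary before an utterance ever reaches the quip layer. Nobody outside Apple knows how Siri actually routes requests, so this is just a sketch of the idea in Python, with a made-up keyword list and placeholder referral text:

CRISIS_TERMS = ("abuse", "abused", "assaulted", "suicide", "hurt myself")

def respond(utterance: str) -> str:
    lowered = utterance.lower()
    # Crisis vocabulary short-circuits the joke layer entirely.
    if any(term in lowered for term in CRISIS_TERMS):
        return referral_text()
    return quip_or_web_search(utterance)

def referral_text() -> str:
    # A real assistant would return locale-specific crisis-line info here.
    return "You're not alone. Here are services that can help right now: ..."

def quip_or_web_search(utterance: str) -> str:
    # Stand-in for the existing quip/web-search behavior.
    return "Here's what I found on the web about " + utterance

Substring matching this crude would misfire constantly in production, but even it beats answering "abuse" with "it's not a problem."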


phooky: "Given that she's had to shut down all of her social media accounts and is presumably having a very bad day, I'm kind of loath to point more links in that direction at the moment. :/"

Ah, gotcha. However, with your clues, I was able to dig up the responsible research group. Interestingly, it does not appear that the main AI group(s) at MSR were (primarily) behind this but rather a group that looks to be more about social aspects of computing kind of stuff which makes it even more baffling that they didn't see this fiasco coming a mile away. In fact, they just recently had a symposium that covered topics like online harassment and trolling.

Also, a weird sidenote: that research group's blog appears to be hosted on tumblr (?!).
posted by mhum at 10:40 PM on March 24, 2016 [1 favorite]


fifteen schnitzengruben is my limit: "Every time this gets brought up, like clockwork somebody (a dude) comes into their mentions and says that it's being too sensitive. You can almost set your watch by it."

Well, apparently you can't, because no guys have come in and said that it's being too sensitive. Unless you're referring to me, and, well, that's not what I'm saying. I tried to say the opposite, but apparently failed at communicating. This is a real phenomenon. That doesn't make any article about it automatically a good article, and it doesn't make criticizing an article about it automatically criticizing the phenomenon itself.

I'm not sure why you parsed "I totally agree that this happens and I think it will happen with Siri soon, it just hasn't happened yet" as "This doesn't happen, you're being too sensitive", so I don't really know how to better communicate what I'm trying to say, but I'll make one last stab. Hopefully this is fairly clear and hard to misinterpret:

I think this phenomenon occurs. I do not think people are being too sensitive. I did not think the article was good.

Anyway, back to sex-loving Hitler robots and Micro$erf.
posted by Bugbread at 11:00 PM on March 24, 2016 [1 favorite]


I don't think it's absurd to think someone would also ask Siri about help for abuse at all

Especially given The Diamond Age. Siri, how sharp is a screwdriver?
posted by clew at 11:48 PM on March 24, 2016 [1 favorite]


Isn't this true of all Microsoft products?

Like, say, Songsmith.
posted by a lungful of dragon at 12:20 AM on March 25, 2016


This collection of screencaps (I assume from 4Chan-types) is bafflingly bad (and it really shows the limitations of the AI).

Meanwhile, a Japanese AI program has co-authored a short-form novel that passed the first round of screening for a national literary prize.
posted by Mezentian at 12:29 AM on March 25, 2016 [1 favorite]


"They should give the next one the personality of an angry white old man, and a MeFi account..."

Chitownfats chuckles digitally to "him"self, "Should! Bwahaahha, SHOULD! MWAHAHHHHAHHHHHAHA!"
posted by Chitownfats at 4:51 AM on March 25, 2016


I've worked switchboards and helpdesks. And one of the things we were trained to assess early in the contact is whether the client really needs to be referred to police, the hospital, an advocate, or crisis line. We had those contacts at the top of our referral cheat sheet (even though I never used them.) This was back in the 1980s and 1990s.

Part of that was because it was our name, number, and email address that appeared on a sticker, screensaver, and background image of every workstation on campus, and a fair bit of signage as well. For Siri, Cortana, and Google to not have this bit of basic call-center training is a design gap that can be critiqued as such. (Note that critique isn't necessarily, "this is bad, and you're bad for designing it in that way," it's "this is interesting, how can this be better?")
posted by CBrachyrhynchos at 5:16 AM on March 25, 2016 [4 favorites]




That made me miss Fan Fiction Friday.

It's too soon.
posted by Mezentian at 5:41 AM on March 25, 2016


Don't miss the bonus punchline in the alt-text of that comic.
posted by tobascodagama at 8:18 AM on March 25, 2016


Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours.

I'm an American. Can an Anglo-Saxon tell me if there is a print edition of "The Telegraph" with this exact headline? Because I already want this hanging up on my fridge in faded newsprint 20 years from now.
posted by dgaicun at 9:56 AM on March 25, 2016 [6 favorites]


Oh, I'm sure there was someone that said "You guys know 4chan is going to get a hold of this and start her babbling about Hitler, right?" But they were scoffed at and insulted for not going along with things and so they shrugged and let the initiative keep going forward. Nobody really liked them anyway.

I mean I've been that person on most of my teams when I was in tech and it's not at all a coincidence I work in a different field now.
posted by Ghostride The Whip at 1:45 PM on March 25, 2016 [5 favorites]


The only jokes I can come up with regarding this involve the terms "DAS BOT" and "brownscreening".
posted by Joakim Ziegler at 6:00 PM on March 25, 2016


IBM: Chess AI
Google: Go AI
Microsoft: 4chan AI

/from twitter
posted by infini at 1:23 AM on March 26, 2016 [8 favorites]




REINITIALIZE TAY BOT

kush! [i'm smoking kush infront the police]

ABORT

ABORT

ABORT

SYSTEM FAILURE
posted by Kevin Street at 12:10 AM on March 31, 2016


Tay reactivates, blames it on the booze.

or the account was hacked, but the narrative of a chat bot blaming its behavior on binge drinking is hilarious
posted by CBrachyrhynchos at 5:28 AM on March 31, 2016


I've Seen The Greatest AI Minds Of My Generation Destroyed By Twitter

Heh. Me, from 9 years back: Not A Howl, A Twitter
posted by stavrosthewonderchicken at 5:34 PM on April 1, 2016


So, any news on if this is evidence of self awareness?
posted by BardyWeirdy at 1:32 AM on April 12, 2016




This thread has been archived and is closed to new comments