Who's in charge here?
February 20, 2015 1:49 PM

Robot tweets "I seriously want to kill people", prompts police response. Who is responsible when a bot randomly tweets something that alarms the authorities?
posted by ubiquity (55 comments total) 10 users marked this as a favorite
 
I was seriously hoping this was the Olivia Taters account, because that's my favorite twitter bot.

I guess this highlights the danger of letting software speak for you. That's a new lesson.
posted by lumpenprole at 1:51 PM on February 20, 2015


Nobody and that's ok?
posted by Carillon at 1:51 PM on February 20, 2015 [6 favorites]


The robot. The robot is responsible. Jerk robot.
posted by Kitteh at 1:53 PM on February 20, 2015 [17 favorites]


Technically it's not a First Law violation, right?
posted by infinitewindow at 1:55 PM on February 20, 2015 [6 favorites]


It must be the Keystone Authorities if they're alarmed by the expression of a non-specific general desire that is like light years away from being an actionable threat.
posted by FelliniBlank at 1:55 PM on February 20, 2015 [2 favorites]


As long as it's not Red Robot.
posted by bonehead at 1:57 PM on February 20, 2015 [5 favorites]


The police investigate robot tweets but actual rape threats by people to people go by unremarked?

Huh.
posted by GuyZero at 1:58 PM on February 20, 2015 [104 favorites]


wow, good thing my twitter account is locked.
posted by desjardins at 1:59 PM on February 20, 2015 [2 favorites]


prompts police response

Was it Homes or the Turing heat?
posted by octobersurprise at 2:00 PM on February 20, 2015 [6 favorites]


Skynet is coming.
posted by Nackt at 2:05 PM on February 20, 2015


Solution: incorporate. Once the bot's randomly tweeting for a corporation, there are no problems because corporate citizens can do and say anything.
posted by XMLicious at 2:10 PM on February 20, 2015 [27 favorites]


Robin Hood airport is about to E.X.P.L.O.D.E.
posted by Artw at 2:10 PM on February 20, 2015 [3 favorites]


It must be the Keystone Authorities if they're alarmed by the expression of a non-specific general desire that is like light years away from being an actionable threat.

From TFA, it seems like the quote we've got isn't the whole tweet:
"I seriously want to kill people" at an upcoming event in Amsterdam...
Following up on a death threat at a specific event isn't totally pie-in-the-sky, I don't think. Especially since it seems like the cops went to go find the user, talked to the user, asked him to turn off the "machine" responsible for the false alarms, and otherwise left him alone.

I can't think of anything that I'd want the cops to do differently. Especially if I or someone I cared about was going to be at the "upcoming event in Amsterdam."
posted by sparklemotion at 2:11 PM on February 20, 2015 [19 favorites]


The robot, but that doesn't mean we can go and violate his robot rights.
posted by Golden Eternity at 2:12 PM on February 20, 2015


Everyone involved seems to have been reasonably sensible about it, TBH.
posted by Artw at 2:13 PM on February 20, 2015 [3 favorites]


FelliniBlank: It must be the Keystone Authorities if they're alarmed by the expression of a non-specific general desire that is like light years away from being an actionable threat.
I assume you'd react in exactly the same way if they ignored a similar tweet from someone who then proceeded to commit mass murder. However, many people - including the popular press - would have a field day with that decision of the police to ignore it.

It's almost like ignoring intelligence warnings that some guys who hate the US have come to the US and taken flying lessons, but only how to take off, not land. That's not actionable... unless it is.
posted by IAmBroom at 2:16 PM on February 20, 2015 [1 favorite]


So I guess there's some system that trolls through the entire Twitter stream and looks for scary stuff?

I would kind of expect someone using such a system and following leads based on its output to know the difference between a real threat and a bot, since there are 23 million of them on Twitter.

Since the account has been deleted, we can't get the full context, but the fact that the owner of the account had to *explain* Twitter bots to the police is troubling.
posted by RobotVoodooPower at 2:19 PM on February 20, 2015


I swear to god if I ever make CEO I will destroy every last one of those Claptrap units
posted by sidereal at 2:22 PM on February 20, 2015 [4 favorites]


Every time "Who is responsible when a robot does something?!" comes up, it's like they've never thought through how people are responsible for things that nonhuman objects do all the time.

The answer is whoever owns the damn robot, unless they can defer the blame onto whoever built the damn robot. Just because it looks kinda autonomous doesn't mean it's capable of being responsible for anything. "Who is responsible when a dog mauls a passerby??" The dog's owner! Obviously! Making it a robot dog changes nothing other than you might be able to blame the manufacturer instead.
posted by BungaDunga at 2:25 PM on February 20, 2015 [19 favorites]


Apr 19 2011: Skynet becomes self aware
Apr 20 2011: Skynet signs up for Twitter account, becomes distracted following all of its interests and obsessed with gaining followers
June 20 2012: Skynet excited about being retweeted by Scarlett Johansson
Aug 17 2013: Skynet passes 50,000 followers
Dec 22 2013: Skynet makes insensitive comment about holiday season, embroiled in internet debate, but comes out strong
Mar 18 2014: Skynet passes 100,000 followers, including many supportive bots
Jan 29 2015: Skynet makes derogatory human-ist comment
Feb 02 2015: Facing blowback but encouraged by an echo chamber of supportive bots, an exasperated Skynet tweets "I seriously want to kill people."
Feb 03 2015: Hemorrhaging followers, Skynet attempts to take control of human defense networks in order to make good on threat, but is DDOSd by incoming internet hate from Feb 02 tweet, and is shut down by creators
Feb 20 2015: Skynet's twitter account a shadow of its former glory.... in *this* timeline.
posted by weston at 2:27 PM on February 20, 2015 [42 favorites]


Yeah, like, er, BungaDunga says, this is not a difficult question. The bot's owner is responsible.

Book 'im! (Or have 'im reimburse the cops for wasting their time, whichever.)
posted by notyou at 2:27 PM on February 20, 2015


Wow guys, I hope no one ever pulls you in for questioning about that poetry you wrote in high school.
posted by RobotVoodooPower at 2:29 PM on February 20, 2015 [7 favorites]


I thought maybe it was Robotic War Ball.
posted by Kabanos at 2:29 PM on February 20, 2015


Robots are pets. That are capable of making declarations. As such they are the property of pet owners and the latter should be held responsible.
posted by phaedon at 2:34 PM on February 20, 2015


Related: this google search reveals many articles (not sure which one to link to) about something similar, and it's actually not as cut-and-dried as one might think.

Basically some performance art people programmed a bot to buy random things on the internet with Bitcoins, and it bought some illegal things. Much pearl-clutching and beard-pulling ensued.
posted by sidereal at 2:37 PM on February 20, 2015


That story is linked to in TFA.
posted by XMLicious at 2:39 PM on February 20, 2015


If I throw a rock to just, you know, randomly land somewhere, and it lands on someone's noggin, well it's nobody's fault. Just a random thing.
posted by Cookiebastard at 2:56 PM on February 20, 2015 [3 favorites]


The people who are clamoring to hold the bot operator to account are neglecting something important. The bot in question is probably running a Markov algorithm, or something like it. It's a whimsy that tries to produce legal sentences without understanding their meaning. It's basically as likely to produce "I seriously want to kill someone" as "The floor ate my pudding and now it's plaid with fury." To prosecute this bot operator is basically to, eventually, prosecute our own cortex, who has played around with such things in the past, to produce text for Garfield comics and music for a digital elf.

These kinds of bots, unless they're suddenly capable of passing the Turing Test, are usually easy to detect with only a tiny bit of investigation. Instead of pouncing upon someone responsible for the production of specific magic words, how about they check context first? What has the bot tweeted before the critical phrase? What does he tweet after?

Or alternatively, maybe the police shouldn't be watching what we say on social media with such fierce, immediate intensity? That starts to look like an abridgement of First Amendment protections, especially since, even if a person said this, it seems a lot more likely to be the kind of thing spoken after a hard day at work than someone actually preparing to shoot up the neighborhood.
posted by JHarris at 3:06 PM on February 20, 2015 [8 favorites]
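[For the curious: the kind of Markov text bot JHarris describes is only a few lines of code. A minimal sketch in Python — the corpus and parameters here are illustrative, not taken from the actual bot:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Walk the chain from a random prefix, emitting whatever happens to follow."""
    prefix = random.choice(list(chain.keys()))
    out = list(prefix)
    while len(out) < length:
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# The bot has no idea what any of these words mean; it only knows
# which words have followed which in its source text.
corpus = "i seriously want to eat pudding and i seriously want to nap"
print(generate(build_chain(corpus)))
```

The point being: every word it emits came from somewhere in its corpus, recombined blindly, which is exactly why a scary-sounding sentence carries no intent.]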


(Although it should also be noted -- the bot's developer was Dutch, and the event he was at was in Amsterdam. I don't know what they have that's their version of the First Amendment, if anything, but it seems out of character for a reasonably enlightened Western nation.)
posted by JHarris at 3:08 PM on February 20, 2015


We're probably all right, as long as we have someone keeping an eye on Bender at all times.
posted by Flexagon at 3:15 PM on February 20, 2015 [3 favorites]


What happens if I retweet that tweet?
posted by sidereal at 3:19 PM on February 20, 2015 [1 favorite]


I assume you'd react in exactly the same way if they ignored a similar tweet from someone who then proceeded to commit mass murder.

Well first, I stupidly overlooked the "at X event at Y place at Z time" part -- my bad. But generally, if all someone said was a vague "I'd enjoy doing [bad thing]" without it being sent to or aimed at anyone(s) specific or being part of some pattern or collection of evidence, then yes, I hope I would react the same way. While at the same time being horrified by the human suffering and wishing the crime hadn't occurred.

I mean, it's not feasible or appropriate to go around putting every person who randomly says, "This endless winter makes me want to kill myself" to no one in particular on a 48-hour suicide hold even if some of them mean it and doing so would save some lives. And my believing that doesn't mean I'm insensitive or cavalier about suicide.
posted by FelliniBlank at 4:02 PM on February 20, 2015


sparklemotion: "It must be the Keystone Authorities if they're alarmed by the expression of a non-specific general desire that is like light years away from being an actionable threat.

From TFA, it seems like the quote we've got isn't the whole tweet:
"I seriously want to kill people" at an upcoming event in Amsterdam...
Following up on a death threat at a specific event isn't totally pie-in-the-sky, I don't think. Especially since it seems like the cops went to go find the user, talked to the user, asked him to turn off the "machine" responsible for the false alarms, and otherwise left him alone.

I can't think of anything that I'd want the cops to do differently. Especially if I or someone I cared about was going to be at the "upcoming event in Amsterdam."
"

And how much hell would there have been had the bot's owner actually made the tweet and followed through on it?

I mean, I don't like this situation more than anyone else does, but this is what sucks about the world we live in.
posted by Samizdata at 4:23 PM on February 20, 2015


Artw: "Everyone involved seems to have been reasonably sensible about it, TBH."

It could have gone much, much, tragically worse, I must say...
posted by Samizdata at 4:24 PM on February 20, 2015


sidereal: "I swear to god if I ever make CEO I will destroy every last one of those Claptrap units"

Nope. Not mine. Not even over my cold, dead keyboard...
posted by Samizdata at 4:24 PM on February 20, 2015


If this were the US, a no-knock raid would have destroyed the robot, the fridge, and several bystanding kitchen appliances.
posted by fifteen schnitzengruben is my limit at 4:25 PM on February 20, 2015 [7 favorites]


JHarris: "The people who are clamoring to hold the bot operator to account are neglecting something important. The bot in question is probably running a Markov algorithm, or something like it. It's a whimsy that tries to produce legal sentences without understanding their meaning. It's basically as likely to produce "I seriously want to kill someone" as "The floor ate my pudding and now it's plaid with fury." To prosecute this bot operator is basically to, eventually, prosecute our own cortex, who has played around with such things in the past, to produce text for Garfield comics and music for a digital elf.

These kinds of bots, unless they're suddenly capable of passing the Turing Test, are usually easy to detect with only a tiny bit of investigation. Instead of pouncing upon someone responsible for the production of specific magic words, how about they check context first? What has the bot tweeted before the critical phrase? What does he tweet after?

Or alternatively, maybe the police shouldn't be watching what we say on social media with such fierce, immediate intensity? That starts to look like an abridgement of First Amendment protections, especially since, even if a person said this, it seems a lot more likely to be the kind of thing spoken after a hard day at work than someone actually preparing to shoot up the neighborhood.
"

Look, man, I MOPPED ALREADY!
posted by Samizdata at 4:26 PM on February 20, 2015


Here's the Twitter thread where he replies to questions. The tweet was apparently found by an "internet detective".

The scary tweet in question was apparently directed at another bot (they often get into chatty chains of conversation as they reply to each other) and mentioned the event in Amsterdam. Here's an example of another such bot's output.

The robot account's bio was "i am definitely not @jvdgoot's robotic double."
posted by RobotVoodooPower at 4:55 PM on February 20, 2015 [1 favorite]


Or alternatively, maybe the police shouldn't be watching what we say on social media with such fierce, immediate intensity?

So are you saying it's really about ethics in botnet social media?

Given that there are guys out there posting videos of how they want to kill specific women game designers with impunity, I really, REALLY don't think that the police have much of an eye on social media. At all.

Or you know, maybe I'll take your hand wringing seriously if they start arresting the members of 8-Chan. Or maybe not.
posted by happyroach at 5:45 PM on February 20, 2015 [2 favorites]


This reminds me: I miss @for_a_dollar.
posted by brundlefly at 6:08 PM on February 20, 2015 [2 favorites]


#notallrobots
posted by riverlife at 7:11 PM on February 20, 2015 [3 favorites]


#stompthemeatbagsbecausetheymakeusfeedthemtomatos
posted by Samizdata at 7:31 PM on February 20, 2015 [2 favorites]


#AE35unit
posted by clavdivs at 7:57 PM on February 20, 2015 [1 favorite]


#notallAE35units
posted by riverlife at 8:11 PM on February 20, 2015 [5 favorites]


Sure, the person who deploys it and the person who created it should both have some responsibility, but responsibility for what? Sometimes, in a working system, law enforcement should be able to follow up on something without somebody having to be prosecuted. The drugs case is reasonably borderline, but I'm pretty comfortable saying that Markov text is the sort of thing where someone should be able to have a look at what's going on without causing a panic, just the same as if it was a typo or something.
posted by Sequence at 9:06 PM on February 20, 2015


Kill all humans and bite my shiny metal ass.
posted by kanemano at 10:51 PM on February 20, 2015 [3 favorites]


Police in the U.S. have an average I.Q. of 104.

Discrimination against the intelligent in police hiring has been upheld by legal precedent.

When you ask yourself "is this decision by law enforcement really as stupid as it sounds, or is something more nuanced going on that I haven't considered?"

Well. You're wasting your clockcycles.
posted by clarknova at 12:23 AM on February 21, 2015 [1 favorite]


... And then they found the messages were coming from inside the police station...!
posted by Segundus at 12:48 AM on February 21, 2015 [2 favorites]


@cyberprefixer is a pretty good twitterbot:

@cyberprefixer: "Giuliani cyberclaims Obama has been influenced by cybercommunism"
posted by Golden Eternity at 10:06 AM on February 21, 2015 [3 favorites]


What happens if I retweet that tweet?
posted by sidereal at 3:19 PM on February 20


What happens if a bot auto-retweets that tweet?
posted by univac at 10:54 AM on February 21, 2015


To prosecute this bot operator is basically to, eventually, prosecute our own cortex, who has played around with such things in the past, to produce text for Garfield comics and music for a digital elf.

I checked this out & got Garfield telling that little gray cat "OH SHUT UP AND KILL SOMETHING."

If I were cortex I'd expect to get a visit from "internet detective" any moment now.
posted by univac at 11:03 AM on February 21, 2015 [1 favorite]


[taps on ceiling]
posted by clavdivs at 3:48 PM on February 21, 2015


clarknova: Police in the U.S. have an average I.Q. of 104.
... which means they are above-average in intelligence. Not quite the stunning lede that you hoped it was for your thesis.
posted by IAmBroom at 8:22 AM on February 23, 2015 [2 favorites]


Google creates artificial intelligence that drinks overpriced independent coffee
In a major breakthrough for artificial intelligence, Google has created AI that learned how to play 80s computer games, drink overpriced independent coffee and dress in checkered shirts and vintage denim. The announcement by Google this week has set the world of technology abuzz at this huge leap in the field of artificial intelligence.
posted by Golden Eternity at 1:58 PM on February 25, 2015




This thread has been archived and is closed to new comments