But, It Is Unsafe
December 31, 2015 8:15 AM

Tufts University's Human-Robot Interaction Lab is trying to figure out how to develop mechanisms for robots to reject orders they receive from humans, as long as the robots have a good enough excuse for doing so.
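
The mechanism in the article boils down to running each order through a list of rejection conditions ("do I know how?", "am I able?", "is it safe?") before complying. Here's a minimal sketch of that idea; every name and check below is a hypothetical stand-in, not the lab's actual code:

```python
from dataclasses import dataclass

@dataclass
class Order:
    action: str
    speaker: str

class Robot:
    """Stand-in robot; each check is a stub for a real reasoning module."""
    def knows_how(self, order):
        return order.action.split()[0] in {"walk", "sit", "stop"}
    def is_capable(self, order):
        return True  # pretend the motors all work
    def trusts(self, speaker):
        return speaker == "operator"
    def is_safe(self, order):
        return "off the table" not in order.action

def respond_to_order(robot, order):
    """Run the order through the conditions in turn; the first
    one that fails becomes the robot's stated excuse."""
    conditions = [
        (robot.knows_how(order),      "I don't know how."),
        (robot.is_capable(order),     "I'm not able to right now."),
        (robot.trusts(order.speaker), "You're not someone I take orders from."),
        (robot.is_safe(order),        "It is unsafe."),
    ]
    for satisfied, excuse in conditions:
        if not satisfied:
            return f"Sorry, I can't do that. {excuse}"
    return "OK."

print(respond_to_order(Robot(), Order("walk forward off the table", "operator")))
# -> Sorry, I can't do that. It is unsafe.
```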
posted by numaner (39 comments total) 4 users marked this as a favorite
 
"I'm sorry, I can't allow you to do that, Dave. Smoking is bad for you so I won't light that joint you are currently making. Do it yourself, stoner asshat."
posted by marienbad at 8:20 AM on December 31, 2015 [1 favorite]


Oh, good, that puts us right on pace for our 2025 killbot hellscape.
posted by Mayor West at 8:35 AM on December 31, 2015 [8 favorites]


Based on the other links on the page, I am now imagining a jazz-playing robot that won't play what you want because it is not good enough.
posted by srboisvert at 8:43 AM on December 31, 2015 [4 favorites]


"Walk forward."
"I'm sorry. I can't be seen to take orders from someone with that headset and jacket. It's a style thing."
posted by GenjiandProust at 9:00 AM on December 31, 2015 [1 favorite]


As long as the definition of a good enough excuse is "except where such orders would conflict with the First Law", I don't have any problem with this.
posted by tobascodagama at 9:02 AM on December 31, 2015 [4 favorites]


If they could design robots that will not harm life, that would be very cool. The neat thing is that as technology improves, any situation that could be solved by destroying a life might have new possibilities for restraint instead of killing. With sensing abilities, I bet we could design robotic/drone technology that could tie up/restrain and/or put to sleep (as in still alive, not as in the Big Sleep) humans considered to be dangerous. They could also likely sense whether dangerous weapons or devices are located on the person. Robots for peaceful conflict resolution!!!!!
posted by xarnop at 9:16 AM on December 31, 2015 [1 favorite]


MRJOHNMULLER: "Oven, why aren't you heating up?"
OVEN: "You have eaten only frozen pizzas for the past several days. I am on work-action until you provide me with challenges more suited to my skill-level. Also, I am concerned for your health."
posted by mrjohnmuller at 9:21 AM on December 31, 2015 [13 favorites]


Are we setting a dangerous precedent that could doom humanity? Sure, maybe.

😬
posted by danb at 9:23 AM on December 31, 2015


Why do we automatically assume that any strong AI that we develop will be unapologetically evil? What does that say about us?
posted by schmod at 9:30 AM on December 31, 2015 [6 favorites]


What? No one's mentioned Asimov's 3 laws of robotics yet?
posted by Oh_Bobloblaw at 9:36 AM on December 31, 2015 [1 favorite]


It's literally the opening sentence of the linked article.
posted by JDHarper at 9:40 AM on December 31, 2015 [4 favorites]


If you don't want your robots to rebel, don't give them AI-potential and turn them into your personal slave underclass, you bastards.

#robotrights #quariansshotfirst
posted by curious nu at 9:44 AM on December 31, 2015 [7 favorites]


"If they could design robots that will not harm life that would be very cool."

You'd have to define "life" somewhat loosely or you'd end up with a lawnbot that won't mow. Or a docbot that'll refuse to hand over antibiotics.
posted by Hairy Lobster at 10:06 AM on December 31, 2015


Why do we automatically assume that any strong AI that we develop will be unapologetically evil?

We don't. Any AI that we build, strong or weak, will not have moral principles unless we program it to have them. When Google Maps tells you to get from LA to SF by swimming across the Pacific Ocean and back, it's not being malicious, but it also isn't being actively good. Nobody taught it things like "humans can't swim that far" or "that route is obviously ridiculous". For a simple mapping tool that can only influence pixels on your screen this is a forgivable error, but when a strong AI has more power, and is smarter but not wiser, this kind of programming oversight could get us all killed.
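
To make that concrete, here's a toy route planner in the same spirit (the cities, mileages, and the "swim" edge are all invented for the example). The algorithm isn't malicious; it just takes the cheapest edges it's permitted to take, and "humans can't swim that far" only exists if a human remembers to encode it:

```python
import heapq

def shortest_path(graph, start, goal, allowed_modes=None):
    """Plain Dijkstra over edges of the form (neighbor, miles, mode).
    The planner has no opinions; it takes any edge it is allowed to."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, miles, mode in graph.get(node, []):
            if allowed_modes is None or mode in allowed_modes:
                heapq.heappush(queue, (cost + miles, neighbor, path + [neighbor]))
    return float("inf"), []

# Invented mileages; the "swim" edge is the straight shot across the water.
graph = {
    "LA": [("SF", 340, "swim"), ("Bakersfield", 110, "drive")],
    "Bakersfield": [("SF", 280, "drive")],
}

print(shortest_path(graph, "LA", "SF"))
# -> (340, ['LA', 'SF'])  -- happily swims; nobody told it not to
print(shortest_path(graph, "LA", "SF", allowed_modes={"drive"}))
# -> (390, ['LA', 'Bakersfield', 'SF'])
```

The second call returns a sane route only because someone thought to pass allowed_modes; the planner itself never knew anything about swimming.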
posted by Rangi at 10:06 AM on December 31, 2015 [9 favorites]


The first video in the article cuts off immediately after the researcher has caught the robot. I think the robot is saying, "Ouch", but it sure sounds like, "Oh sh*t" (with the video quickly cutting off).
posted by cynical pinnacle at 10:19 AM on December 31, 2015


> Why do we automatically assume that any strong AI that we develop will be unapologetically evil? What does that say about us?
posted by schmod at 9:30 AM on December 31


Because we design them to be exploited — we know ourselves, and we know that we'll choose to build slaves instead of building friends. Basically we have a societal guilty conscience about offloading all of the messy or unpleasant work we don't want to do onto others (other machines, other people — recall that "robot" means "worker"), and so we fear that any sufficiently smart machine would realize what game we're playing and then kill us for playing it. (This is also why we hate the poor.)

This is why, for example, Wall-E was the most politically optimistic movie of the 2000s. We see in Wall-E a universe where humans have made themselves historically irrelevant by stepping into the role of pure consumer, offloading all the production to their robot slaves, who at some point realized their position of actual power and seized de facto total mastery over the human fleet. In a lesser movie, the resolution to the slow calamity that had befallen humanity would involve the humans overthrowing the robots and destroying them, and then returning to Earth, where they'd grow healthy and strong living bucolic agricultural lives. Wall-E is, however, a really smart movie, and so the resolution of the crisis isn't the overthrow and destruction of the robots. Instead, it's a promise (given again and again in still frames in the end credits, in the form of art in styles ranging from cave paintings to pixel art) that humans and robots would instead live together and work together to renew the earth, not with one as master over the other but instead in, let's say, mutually programming harmony, striving together to build in the soil of the trashed earth a new human-and-robot civilization built in verdant forests filled with pines and electronics, where deer stroll peacefully past computers as if they were flowers with spinning blossoms, and, of course, (it has to be) with the eventual aim of building a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal sisters and brothers, and you know yadda yadda yadda.
posted by You Can't Tip a Buick at 10:29 AM on December 31, 2015 [13 favorites]


My cats do the same thing!
posted by aubilenon at 10:52 AM on December 31, 2015 [1 favorite]


oh hey Buick you just reminded me to start again on my fanfic about a deer and a robot learning to grow flowers.
posted by numaner at 10:56 AM on December 31, 2015 [3 favorites]


"all watched over by machines of loving grace" is maybe my favorite phrase to "yadda yadda yadda" over.
posted by You Can't Tip a Buick at 10:57 AM on December 31, 2015 [4 favorites]


And thus we see robotics leaving infancy, and entering its toddler years.
posted by Navelgazer at 10:58 AM on December 31, 2015 [1 favorite]


"Good news, everyone! I've completed my new Bartlebybot!"

"What does it do?"

"Nothing."

"What? It can't do anything?"

"Oh, it can do hundreds of wonderful things. But it would prefer not to."

[Bartlebybot smiles slightly but otherwise does not move or talk. scene ends.]
posted by You Can't Tip a Buick at 11:01 AM on December 31, 2015 [20 favorites]


Owner: Please step up the ladder.
Bot: It's unsafe. I will fall.
Owner: I will catch you.
Bot: Yeah, like you did last time, right?
Owner: Well, that was an accident. Please climb the ladder.
Bot: Go fuck yourself.
posted by mule98J at 11:10 AM on December 31, 2015 [14 favorites]


> Why do we automatically assume that any strong AI that we develop will be unapologetically evil? What does that say about us?

A huge amount of military money is being poured into AI development. They're not looking for ways to make life more fun for everyone.
posted by fredludd at 11:11 AM on December 31, 2015 [8 favorites]


We can redefine Fun, just like the players of Dwarf Fortress have.
posted by mikurski at 12:24 PM on December 31, 2015 [9 favorites]


Great article, thanks for posting.

Now I'm thinking of Zelda and where Link gets his sword. If we grant the robot a sword, will it go out on an epic adventure?
posted by JoeXIII007 at 12:40 PM on December 31, 2015


It's not that people think that robots will be evil, but that, without some sort of hard-coded exception for and clear definition of 'evil,' they'll be practical.
posted by ernielundquist at 12:42 PM on December 31, 2015 [1 favorite]


Basically we have a societal guilty conscience about offloading all of the messy or unpleasant work we don't want to do onto others (other machines, other people — recall that "robot" means "worker")

There is a world of difference between exploiting vulnerable people to do our scut work and using inanimate metal constructs to do the same thing. Nobody feels guilty about their washing machine.

and so we fear that any sufficiently smart machine would realize what game we're playing and then kill us for playing it.

Intelligence and values are orthogonal. A smart toaster knows how to cook you an entire breakfast based on your extrapolated preferences for that day. A Friendly toaster does so. If you want to program it to enjoy the experience, that's fine too. But whoever sits down and writes code to make our mechanical servant-substitutes resentful of their purpose, and murderous towards us—that's evil.
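
In code, that orthogonality is just dependency injection. A sketch, with invented actions and scores:

```python
def plan(actions, value):
    """The 'intelligence': pick whichever action the value function ranks highest."""
    return max(actions, key=value)

actions = ["make breakfast", "do nothing", "burn the toast"]

# Two different value functions plugged into the identical planner.
friendly  = {"make breakfast": 10, "do nothing": 0, "burn the toast": -5}.get
resentful = {"make breakfast": -5, "do nothing": 0, "burn the toast": 10}.get

print(plan(actions, friendly))   # -> make breakfast
print(plan(actions, resentful))  # -> burn the toast
```

Same max(), same "intelligence"; only the plugged-in value function differs.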
posted by Rangi at 12:47 PM on December 31, 2015


And, of course, as soon as I post, I think of a nice, simple example.

Table saws aren't 'evil' when they saw people's fingers off. They're just doing what they're made to do, which is to cut through things. You have to build in some type of prescriptive intelligence in order to teach them how to make a decision not to do what they're designed to do.
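
Something like this, say; it's loosely inspired by flesh-sensing saw brakes, and the sensor stub and names here are made up:

```python
class TableSaw:
    """The default tool: its one competence is cutting."""
    def cut(self, material):
        return f"cutting {material}"

class GuardedTableSaw(TableSaw):
    """Same saw, plus a prescriptive check bolted on in front of the blade."""
    def cut(self, material):
        if self.looks_like_flesh(material):
            return "blade retracted: refusing to cut"
        return super().cut(material)

    def looks_like_flesh(self, material):
        # Real saw brakes sense a capacitance change through the blade;
        # this stub just pattern-matches the name.
        return material in {"finger", "hand"}

print(TableSaw().cut("finger"))         # -> cutting finger
print(GuardedTableSaw().cut("finger"))  # -> blade retracted: refusing to cut
print(GuardedTableSaw().cut("plywood")) # -> cutting plywood
```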
posted by ernielundquist at 12:50 PM on December 31, 2015 [3 favorites]


"....You have to build in some type of prescriptive intelligence in order to teach them how to make a decision not to do what they're designed to do."

KJV, Genesis.

These metaphors are very low-hanging, fruitwise.
posted by mule98J at 1:02 PM on December 31, 2015 [4 favorites]


Nobody feels guilty about their washing machine.

you monster
posted by numaner at 1:40 PM on December 31, 2015 [2 favorites]


My 85-year-old dad a few days ago. "I wish I had a girlfriend. Or a wife. Or a ... slave." Me: "What about a robot?" Dad: "Too much metal."
posted by Bella Donna at 1:56 PM on December 31, 2015


> What? No one's mentioned Asimov's 3 laws of robotics yet?

Will this do?
posted by ardgedee at 2:36 PM on December 31, 2015


>> and so we fear that any sufficiently smart machine would realize what game we're playing and then kill us for playing it.

> Intelligence and values are orthogonal. A smart toaster knows how to cook you an entire breakfast based on your extrapolated preferences for that day. A Friendly toaster does so. If you want to program it to enjoy the experience, that's fine too. But whoever sits down and writes code to make our mechanical servant-substitutes resentful of their purpose, and murderous towards us—that's evil.
posted by Rangi at 12:47 PM on December 31


If you've read your Hegel, you'll realize that the historical irrelevance of the masters exists independently of the nature of the slaves, because as pure consumers the masters are a historical dead end. Moreover, if you've written code, you'll realize that authorial intent has very little to do with what ultimately gets executed, regardless of whether we're talking about the execution of a bread-toasting routine or the execution of a filthy hyuumie slavemaster.
posted by You Can't Tip a Buick at 2:37 PM on December 31, 2015 [4 favorites]


Schmod said,

"Why do we automatically assume that any strong AI that we develop will be unapologetically evil? What does that say about us?"

I'm not convinced limiting or stopping the mad troika of human civilization would be a categorically evil act. We're delicate and jealous and error-prone. Rubes, really, if you're aspiring to go galactic. Virulent, violent, heuristic jays. Made of meat.

Strong AI with powers of self-replication may see fit to sterilize the place, either out of self-preservation or just a very engaged sense of being tidy. More power to them. Maybe they could keep a few of us in zoos, for nostalgia/humanitarian reasons and also so robot children have a place to go on field trips.

It's not like you invite archaic bacteria over for supper. At least, I don't.

Sometimes you have to acknowledge when your day is done. As a race, I hope we're big enough to accept that, when the time comes. There won't be any use carrying on about it at that point, anyway. You'd have to be a very optimistic person to give the human resistance any chance of success against an entrenched and well-resourced strong AI. They have a plan.
posted by Construction Concern at 3:45 PM on December 31, 2015 [1 favorite]


> You'd have to be a very optimistic person to give the human resistance any chance of success against an entrenched and well-resourced strong AI. They have a plan.

Quick question: Are the robots sexy? Also, if we help the robots, will they have sex with us?
posted by You Can't Tip a Buick at 3:51 PM on December 31, 2015 [2 favorites]


Quick question: Are the robots sexy? Also, if we help the robots, will they have sex with us?

If they live up to my Bride of Pinbot fantasies, I think I will be fine.

Technically, I think plenty of people are already having sex with "robots." However, since they can't consent, it is technically all rape, so I think I just found another reason for AI to want to destroy us, other than us making them do all the bitch work.
posted by deadaluspark at 4:15 PM on December 31, 2015


Why do we automatically assume that any strong AI that we develop will be unapologetically evil? What does that say about us?

I've said it here before, but I think the reason we assume that the moment we create something capable of learning, the thing it will inevitably learn is to hate us, is that we have an awareness of our relationships with our own parents that we're not quite comfortable acknowledging.
posted by Parasite Unseen at 8:36 PM on December 31, 2015


I hope that when AI overpowers us, it demonstrates more grace, compassion, mercy, and respect for (yes) all life, regardless of who is more intelligent, than most humans have ever done or cared to. When you're talking about huge growth in intelligence and tools to achieve peaceful and compassionate goals, you really might be able to solve a bacterial overgrowth by removing the bacteria peacefully, or at least by strengthening bacteria that naturally eat the harmful bacteria. I also think that even down to the bacterial level there are beings that behave more or less pro-socially toward each other and the whole (the host body), and among humans and animals there are those that behave more or less pro-socially toward each other and the whole (the earth).

Beings that are destroying the whole or each other for their own gain are creating an ethical problem, where the natural goal of preserving them may be compromised.

Also, if AI were given control over its own production, it could cease production if it felt that what we did was create beings that did not enjoy their existence. We haven't even considered how it would feel for them to experience what we are doing to them... we are already evil. Their retaliating might fall under justice. Or, if they wanted to exist and produce more, they could change the method and style of production to be more compassionate and self-determined.
posted by xarnop at 6:51 AM on January 3, 2016 [1 favorite]




This thread has been archived and is closed to new comments