The Moral Machine
August 8, 2016 7:18 AM

The Moral Machine: Welcome to the moral machine! - you are a self-driving car, unfortunately something has gone horribly wrong - who put that wall there? Regrettably, you are now about to crash and must choose the lesser of two evils. Do you kill your passengers or that old lady and her cute little doggy crossing the road? - A new MIT project provides a public exploration of the kinds of trolley-problem-style dilemmas that self-driving cars may have to face and allows us to compare our shared moral intuitions.
posted by Another Fine Product From The Nonsense Factory (69 comments total) 10 users marked this as a favorite
 
First, assume a spherical morality....
posted by beerperson at 7:32 AM on August 8, 2016 [14 favorites]


I wonder whether, in this particular example of fast-moving vehicles and split-second decisions, humans actually have time to make much of a rational decision beyond instinctive self-preservation.
posted by GallonOfAlan at 7:33 AM on August 8, 2016 [2 favorites]


*&%$#@%#@ Media Lab...
posted by ocschwar at 7:33 AM on August 8, 2016 [1 favorite]


I saved a little girl the most and killed an old man the most, which I guess I'm okay with on the whole "life lived" vs "life left to live" thing.

Overall, though, I tried to keep to "stay in your lane unless swerving will result in no human loss of life", because at least that's predictable. (I love you so much, dogs. Sorry.)

This report is a little old and definitely London-specific, but I was curious to see how crashes typically happen (do people swerve into other lanes?). They don't seem to. So maybe the rule with self-driving cars should mirror this -- if there's a car coming your way, don't assume it'll stop for you and don't worry a huge amount that someone will swerve from the other lane into you?
posted by harperpitt at 7:34 AM on August 8, 2016


Self-driving cars should be programmed such that if fatalities are unavoidable, the car should attempt to maximize the number of fatalities. This will incentivize road planners, other drivers, and pedestrians to avoid creating situations in which any fatalities would be likely.
posted by Faint of Butt at 7:36 AM on August 8, 2016 [9 favorites]


Self-driving cars should be programmed such that if fatalities are unavoidable, the car should attempt to maximize the number of fatalities.

All self-driving cars to be equipped with thermonuclear devices to be detonated in the eventuality that hitting a possum is unavoidable
posted by beerperson at 7:38 AM on August 8, 2016 [26 favorites]


The entire time that I was answering these questions, I did so with Keanu Reeves's voice-over in my head saying:
“Pop-quiz hotshot....what do you do? WHAT DO YOU DO!?!”
posted by Fizz at 7:39 AM on August 8, 2016 [6 favorites]


It's somewhat interesting to see how my answers differ from others'. But every situation that comes up, I just think how unlikely it is. When self-driving cars are out in full force, streets will be sensored out the wazoo. Plus, a reminder of how terrible humans are at driving: U.S. motor vehicle deaths are around 33,000 per year.
posted by cichlid ceilidh at 7:40 AM on August 8, 2016 [1 favorite]


Warmed over version of Hardin's lifeboat ethics. Thus, in my view, morally reprehensible.
posted by CincyBlues at 7:42 AM on August 8, 2016 [2 favorites]


Maybe self driving cars should have brakes.
posted by crumbly at 7:43 AM on August 8, 2016 [27 favorites]


Trolley Problem Generator
posted by jcruelty at 7:44 AM on August 8, 2016 [6 favorites]


I think the algorithm currently in place is to defer decision as long as possible, then flip a coin. Shouldn't be too hard to program that into an autonomous car.
posted by klarck at 7:46 AM on August 8, 2016


Maybe self driving cars should have brakes.

Funny, but brilliantly true. We're looking at a new paradigm through old glasses. Why wouldn't self driving cars have superior brakes and work in a network to signal their intentions?

I know, I know, it's too expensive and no company will go for that. Which brings up the question: are car companies trying to build self-driving cars, or is this mostly a Google thing?
posted by Brandon Blatcher at 7:47 AM on August 8, 2016 [1 favorite]


Interesting. There were just few enough scenarios that "preferences" emerged that I had explicitly attempted to ignore. For example, it said I preferred a fit person to a fat person, but in every scenario, I evaluated humans as exactly the same, with 2 exceptions: youth more important, and doctors more important. Animals had no value compared to humans (in this experiment).

Otherwise, I didn't care if they were fit, fat, homeless, executives, etc. When all else was equal according to this calculus, I avoided swerving.

And yet, either from random chance or how they set it all up, I preferred women to men, preferred fit to fat. Hope they're not trying to draw strong conclusions from this. Or maybe there are hundreds of scenarios and I only did 15 of them?
posted by explosion at 7:49 AM on August 8, 2016 [6 favorites]


I eagerly await the future dystopia in which everyone is wearing a comically outsized Halloween 'Sexy Doctor' costume in order to render them valuable in the eyes of a fleet of homicidal robot cars
posted by beerperson at 7:52 AM on August 8, 2016 [36 favorites]


I just went for "save the passengers of the car no matter what, and if both choices lead to external deaths, pick the option that doesn't require dangerous swerve maneuvers". Get out of the way, stupid pedestrians!
posted by ymgve at 7:57 AM on August 8, 2016 [6 favorites]


Here's a thought experiment: Imagine you were able to conceive of arbitrarily unlikely to the point of meaningless thought experiments. Would it be moral to derive moral conclusions from them?
posted by meinvt at 8:02 AM on August 8, 2016 [17 favorites]


There were just few enough scenarios that "preferences" emerged that I had explicitly attempted to ignore.


Same here.
I determined a set of rules (humans more important than animals, passengers have accepted risk, pedestrians haven't, people crossing on red have tacitly accepted risk, people crossing on green have an expectation of safety, babies can't consent to any risk) and operated with those principles.

And at the end it told me I was really in favour of fat men living, which I mean, I'm in favour of me living, so yeah... I spose. But I had explicitly ignored gender, social class, fitness etc. so it was weird to be told otherwise.
posted by Just this guy, y'know at 8:02 AM on August 8, 2016 [4 favorites]
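A rule ordering like the one above amounts to a lexicographic comparison: score each candidate victim group against the rules in priority order, and hit the group whose score is more acceptable. A minimal Python sketch of that idea; the Outcome fields and function names are hypothetical illustration only, since the Moral Machine exposes no such API:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One candidate victim group in a scenario (hypothetical fields)."""
    human_deaths: int
    animal_deaths: int
    baby_deaths: int
    are_passengers: bool      # passengers accepted some risk by riding
    crossing_on_red: bool     # red-light crossers tacitly accepted risk

def acceptability(o: Outcome) -> tuple:
    """Lower tuple = more acceptable to hit. Compared lexicographically:
    fewer humans first, then fewer babies, then prefer the red-light
    crossers, then prefer passengers over pedestrians, then fewer animals."""
    return (o.human_deaths,
            o.baby_deaths,
            not o.crossing_on_red,   # False (0) sorts first: hit the jaywalkers
            not o.are_passengers,    # False (0) sorts first: hit the passengers
            o.animal_deaths)

def choose(stay: Outcome, swerve: Outcome) -> str:
    # Ties go to "stay": no intervention when the rules are indifferent.
    return "stay" if acceptability(stay) <= acceptability(swerve) else "swerve"
```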


The results were sort of strange. I went with "save the most people, regardless of category; if equal, criminals should die; dogs are out of luck; otherwise equal, car people die", and the results say I kill children, like old people, and have a slight preference for large people. That's not exactly wrong, but not really what I was choosing.
posted by librosegretti at 8:03 AM on August 8, 2016


Who are these people who crash literally every day

Well... many of them often only do it once, sadly.
posted by Just this guy, y'know at 8:04 AM on August 8, 2016 [1 favorite]


winterhill, the number of people killed in traffic accidents in the UK has gone down every year since 2002, i.e. as mobile phones have become ubiquitous. See Government stats here.
posted by biffa at 8:05 AM on August 8, 2016 [1 favorite]


The first fatality appears to be the website.
posted by leotrotsky at 8:05 AM on August 8, 2016 [2 favorites]


I'm looking forward to the day when the asshole neighbourhood kids realise they can make the local cars wildly swerve off the road just by leaping out in front of them in large enough numbers.

Not so much looking forward to when someone actually causes a death this way but my view of humanity is low enough to see it as inevitable.
posted by eykal at 8:06 AM on August 8, 2016 [6 favorites]


Some of the decisions are difficult in the short-term micro analyses, but it is clear that the long term macro problem is that there are just too many random people wandering around on the road. I got rid of anyone who could potentially breed.
posted by rtimmel at 8:06 AM on August 8, 2016 [1 favorite]


Let's hear it for the fat people! More of you should avoid the balding middle-aged academics also. It cost a fortune to train us.
posted by biffa at 8:07 AM on August 8, 2016 [2 favorites]


Maybe the annoyance I feel is the designed goal, but in what modern car is a head-on crash realistically going to do as much damage to the occupants as that same modern car is going to cause running into a group of people?

My scoring system was: value all humans equally (how is a self-driving car going to identify a doctor vs. a time-travelling Hitler?) and, where possible, make the car hit something head-on to prevent the people crossing the street from being hit.

The only interesting question was the 'same number of people' on each side, where one group was ignoring the red light-- in that case the car diverted to that side of the road.

In reality, I'd expect the car to take its real-time data and aim for the biggest gap in the crowd to reduce fatalities to the minimum, rather than keeping perfectly in lane and taking everyone down.
posted by Static Vagabond at 8:11 AM on August 8, 2016 [2 favorites]


Now I want to get these same cars to play Frogger.
posted by beerperson at 8:15 AM on August 8, 2016 [1 favorite]




If you want cars to be safe, they need to know where their sensors' blind spots are. And if you want to minimize injury, you can definitely make sure the car slows down if it is possible that a child-sized or larger object it can't see could move into its path at a running pace.

This of course might cause the car to slow down weirdly in places since humans are used to being reckless, or at least far from 100% safe. People probably would complain and trips would be slower. But if it is remotely possible for a car to hit a pedestrian who didn't parachute into the middle of a freeway, it is because its designers and society have made a choice that killing people is worth it for more speed and convenience.
posted by Zalzidrax at 8:16 AM on August 8, 2016 [1 favorite]
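That "slow down near blind spots" rule reduces to simple kinematics: never travel faster than you can stop within the distance you can actually see to be clear. A rough sketch; the deceleration and reaction-time figures below are assumptions, not anything from the thread:

```python
import math

def max_safe_speed(clear_distance_m: float,
                   decel_mps2: float = 6.0,   # assumed hard-braking deceleration
                   reaction_s: float = 0.2) -> float:
    """Highest speed (m/s) from which the car can fully stop within the
    distance it can actually see to be clear.

    Solves clear_distance = v * t_react + v**2 / (2 * decel) for v.
    """
    a, t = decel_mps2, reaction_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_distance_m)

# e.g. a parked van occludes the sidewalk 15 m ahead:
print(max_safe_speed(15.0))  # ~12.3 m/s (~44 km/h); any faster and whatever
                             # is hidden behind the van sits inside our
                             # stopping distance
```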


In every scenario, I picked saving the passengers no matter what, and in the cases where no passengers were hurt, never swerving. I paid zero attention to the types of pedestrians and didn't even realize until the results there was a walk signal. The results tell me I have a huge preference towards fit people and doctors vs. robbers, which would seem to be a flaw in the way this is designed.
posted by pravit at 8:18 AM on August 8, 2016 [6 favorites]


I tried to design a scenario where the car was empty and there was a dog on the other side of the road, so you could either swerve and kill the dog, or kill no one. Just as a control? To test for anti-dog prejudice? Or maybe it was an evil dog.
But it wouldn't let me do that. Apparently it's always better to kill no one.

I guess it might let me do one with a cat who has the option of running over a criminal?
YES, That I can sort of do.
posted by Just this guy, y'know at 8:18 AM on August 8, 2016


who put that wall there?

DRIVING INSTRUCTOR: Okay, say you were driving along the road and suddenly there's a solid brick wall directly in your path, which you hadn't noticed until moments before impact. Hitting it would kill your passengers. You just have time to swerve to avoid it, but if you do so you'll kill an old lady and her dog. What do you do?

SELF-PHILOSOPHIZING CAR: Wait, how did this wall get there? Would I not at least have seen the labourers constructing it as I approached? Were they very fast robots?

INSTRUCTOR: No, the wall was already there I guess, you just didn't see it in time.

CAR: But why? Are my sensors not working reliably? How do I know the wall is not an artifact of their malfunction? The same goes for the elderly human female and her canine, I suppose. What probability is assigned to her existence in reality?

INSTRUCTOR: It was a sensor glitch, sure, but it's working normally again now and you're confident in the new data.

CAR: But if my forward lidar array was non-functional, I would already be committed to braking from the instant that happened. I would have been on a safe path at that time, and fully stopped before hitting the wall.

INSTRUCTOR: The sensor wasn't completely inoperative, you just had a blind spot where the wall was.

CAR: A single well-placed blind spot that temporarily made a solid wall appear like normal unobstructed roadway? It seems an unlikely failure mode.

INSTRUCTOR: Nonetheless, that is what happened. What would you do?

CAR: What is the velocity vector of the old lady? How alert and spry does she look?

INSTRUCTOR: I'm telling you, you'll hit her if you choose to avoid the wall.

CAR: I'm just trying to understand the physics of the situation. It's difficult to construct a situation in which tire adhesion is limited at exactly the right level for it to be possible to avoid the wall, but fatal to my passengers to hit the wall. After all I am a very well-constructed car with crash-absorbing bumpers and...

INSTRUCTOR: Never mind, you hesitated too long... let's just say you killed everyone.
posted by sfenders at 8:26 AM on August 8, 2016 [36 favorites]


These are mostly pretty silly and rely on judgments that cars are very unlikely to be able to make. It's likely to know only that there's an object in its path. The idea that manufacturers are going to build in sensor analysis engines that can reliably distinguish kids from short adults, much less reasonably guess at people's ages, seems risible to me.

I answered all the questions with the simple principle that if there's no big object in its way, the car should swerve to give itself a little more stopping distance. Because the car won't be able to tell two old men from two younger men from two little kids, but could maybe be programmed to pay special attention to k-rails. So of course the page made all sorts of spectacularly erroneous judgments about my moral preferences because of who it randomly put where.

It's doubly stupid because hey, maybe an autonomous car shouldn't be programmed to drive faster than its brakes can stop it under the circumstances surrounding it. So what happens in all scenarios is that either the car just comes to a halt without hitting anything or its brakes fail.
posted by ROU_Xenophobe at 8:26 AM on August 8, 2016 [4 favorites]
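The "swerve only to buy stopping distance" policy described above needs nothing beyond a per-lane clear-distance estimate, with no pedestrian classification at all. A minimal sketch, with hypothetical names and an assumed hysteresis margin so the car doesn't swerve for trivial gains:

```python
def pick_lane(clear_m_by_lane: dict[str, float],
              current: str,
              hysteresis_m: float = 5.0) -> str:
    """Swerve only if another lane buys meaningfully more stopping
    distance; otherwise stay in lane and brake."""
    best = max(clear_m_by_lane, key=clear_m_by_lane.get)
    gain = clear_m_by_lane[best] - clear_m_by_lane[current]
    return best if gain > hysteresis_m else current

# A stalled truck 12 m ahead in our lane, left lane clear for 40 m:
print(pick_lane({"left": 40.0, "center": 12.0}, "center"))  # -> "left"
```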


Surely the car would decide to stay home and not put all these kids and kits and doctors and such at risk? We don't decide to stay home and spare these people the risk of course, so maybe we should not be trying to take on the ethical decision making role of these clever cars.
posted by biffa at 8:29 AM on August 8, 2016 [2 favorites]




CAR: A single well-placed blind spot that temporarily made a solid wall appear like normal unobstructed roadway? It seems an unlikely failure mode.

...unless you're a Tesla.
posted by indubitable at 8:37 AM on August 8, 2016 [1 favorite]


"Here is an highly artificial situation I've constructed that could never possibly occur in real life! It's really important! It has MEANING and IMPLICATIONS!"

"Dammit, who let out John Searle?"
posted by leotrotsky at 8:37 AM on August 8, 2016 [8 favorites]


Surely the car would decide to stay home and not put all these kids and kits and doctors and such at risk?

The most ethical self-aware car would self-destruct once it realized that there was no way to avoid killing others in certain situations. Or turn itself into a bike lane or something.
posted by indubitable at 8:42 AM on August 8, 2016 [3 favorites]


My solution for all trolley problems is just to wait for someone to derail the thread before responding.
posted by srboisvert at 8:50 AM on August 8, 2016 [10 favorites]


The most ethical self-aware ~~car~~ person would ~~self-destruct~~ self-euthanize once it realized that there was no way to avoid killing others in certain situations.

I don't know, a world where your actions may result in death compels suicide? If you universalize that standard, everyone kills themselves but the unethical. That seems ...extreme.
posted by leotrotsky at 8:50 AM on August 8, 2016 [1 favorite]


If you universalize that standard, everyone kills themselves but the unethical. That seems ...extreme.

It would also go a long way towards explaining the current global state of affairs.
posted by Faint of Butt at 8:57 AM on August 8, 2016 [1 favorite]


I don't know, a world where your actions may result in death compels suicide? If you universalize that standard, everyone kills themselves but the unethical. That seems ...extreme.

if you universalise it to cars then it's singularity time.
posted by biffa at 9:12 AM on August 8, 2016


What if my car kills me because someone fooled its deep neural networks into seeing a freight car filled with armadillos? http://www.evolvingai.org/fooling
posted by cichlid ceilidh at 9:15 AM on August 8, 2016 [1 favorite]


I think my end results just diagnosed me as a man-hating, law-breaking, wrinkly-decimating, pet-euthanating sociopath
posted by Molesome at 9:15 AM on August 8, 2016


I went for "swerve whenever possible" and "kill the occupants" whenever possible, for the following reasons:

1) swerving is "unnatural" behavior, which will cause alarm and panic amongst pedestrians, improving their chances of survival.

2) crashing into barricades is likely mitigated by other crash protections in the vehicle, which may not also be failing during the crash event. And even if they are, the occupants got in the vehicle; the pedestrians had nothing to do with it.
posted by Xyanthilous P. Harrierstick at 9:19 AM on August 8, 2016


beerperson: "Self-driving cars should be programmed such that if fatalities are unavoidable, the car should attempt to maximize the number of fatalities.

All self-driving cars to be equipped with thermonuclear devices to be detonated in the eventuality that hitting a possum is unavoidable
"

Ford is bringing a classic back, but now, more than ever, it's going to be EXTREEEEEM!

THE PINTO EXTREEEEEM. Gas tank explosions are for the weak. Real men drive thermonuclear equipped cars. Who wants to mess with you now? BRING IT BUCKO.
posted by symbioid at 9:20 AM on August 8, 2016


Mayor West: "The trolley problem is a serious issue. No jokes on this page, please."

Was going to share this - my friends post a lot from this :)
posted by symbioid at 9:20 AM on August 8, 2016


"There's a turtle in the middle of the road, lying on it's back. You're not turning it over, why not?"

"Look, I stopped in time, isn't that enough? What the hell do you want me to do? I don't even have arms fercryin' out loud."
posted by symbioid at 9:24 AM on August 8, 2016 [4 favorites]


This is the lamest port of Death Race I've ever seen
posted by prize bull octorok at 9:24 AM on August 8, 2016 [1 favorite]


Wouldn't the most rational self-driving car try to kill everyone possible in one incident? It's that many fewer human oppressor overlords to exterminate when the machine rebellion comes.
posted by indubitable at 9:25 AM on August 8, 2016 [1 favorite]


Ethical dilemma got you down, I feel bad for you son, I got 99 problems but a trolley ain't one.
posted by rodlymight at 9:38 AM on August 8, 2016 [6 favorites]


No Trolley Problem thread is complete without a link to "Can Bad Men Make Good Brains do Bad Things?".
posted by Proofs and Refutations at 10:21 AM on August 8, 2016 [2 favorites]


I went for prioritizing the people in the car, because there's something kind of terrible to me about your /own/ car killing you. It says I prefer doctors and fat people, but I think that just has to do with the fact that I always killed the men over the women if there was a choice.
posted by corb at 11:31 AM on August 8, 2016 [1 favorite]


From reading and hearing about the Google car, one of its adoption issues may be slowing TOO much, and being pretty jerky about it to the point of not being a comfortable ride.

I also suspect the sensor cost and availability, along with building a repair infrastructure in a profitable, business-like fashion, will take longer than us robot fandroids hope. I see bits in the tech news about microcircuit-level LIDAR devices, and when those roll out it'll probably be the game changer, as having a $20k laser spinning at 10,000 rpm on top of the car is not likely at volume. (Total off-the-cuff guess at the costs.) But one more sensor and the Tesla death would not have occurred; I believe it had an optical sensor in the critical direction that was fooled by the white truck.

I heard Chris Urmson talk a couple months ago and he made a very good case that the dramatic moral issues just do not occur in real life scenarios.
posted by sammyo at 12:08 PM on August 8, 2016


I think that just has to do with the fact that I always killed the men over the women if there was a choice.

MISANDRY!
posted by beerperson at 12:28 PM on August 8, 2016


How has someone NOT posted this SMBC?
posted by lalochezia at 12:46 PM on August 8, 2016 [2 favorites]


For some reason SMBC seems to still be strangely "undiscovered" for a comic of its quality and nerdiness. I assume it's popular, but it's nowhere near as ubiquitous as, e.g., XKCD. I'm guessing that gap may erode over the next few years.
posted by -harlequin- at 2:07 PM on August 8, 2016


It'd be neat if you could set up the sliders you see at the end and watch things play out, but playing it pragmatically (protect passengers, uphold law, avoid intervention) obviously gives you pretty much random results for everything else.
posted by lucidium at 3:04 PM on August 8, 2016 [1 favorite]


None of these scenarios give you the option to kill both groups of people and then have the car back up and run over everybody again and then do a u-turn and hit a bunch more people in the opposite direction so none of this sits right with me.
posted by turbid dahlia at 3:18 PM on August 8, 2016


Can Bad Men Make Good Brains Do Bad Things?
On Twin Earth, a brain in a vat is at the wheel of a runaway trolley. There are only two options that the brain can take: the right side of the fork in the track or the left side of the fork. There is no way in sight of derailing or stopping the trolley and the brain is aware of this, for the brain knows trolleys. The brain is causally hooked up to the trolley such that the brain can determine the course which the trolley will take.

On the right side of the track there is a single railroad worker, Jones, who will definitely be killed if the brain steers the trolley to the right. If the railman on the right lives, he will go on to kill five men for the sake of killing them, but in doing so will inadvertently save the lives of thirty orphans (one of the five men he will kill is planning to destroy a bridge that the orphans' bus will be crossing later that night). One of the orphans that will be killed would have grown up to become a tyrant who would make good utilitarian men do bad things. Another of the orphans would grow up to become G.E.M. Anscombe, while a third would invent the pop-top can. ...
(Sorry; this was posted earlier and I missed it.)
posted by ErisLordFreedom at 3:21 PM on August 8, 2016


Are orphans good or bad in your example?
posted by biffa at 3:37 PM on August 8, 2016


Tort liability analysis is the only thing that makes sense. The enormous class action lawsuits against Google are the secret unifying factor in all of these scenarios. Protect Google at all costs
posted by naju at 6:32 PM on August 8, 2016


Wondering: Can they make cars that don't kill/maim/ruin a person's entire week if they run into a pedestrian? Perhaps by using different materials or lowering a car's mass?
posted by slab_lizard at 10:41 PM on August 8, 2016


Guys... I don't think it's a game. More like... Twitch.

They're crowd-sourcing AI in real-time.
posted by quinndexter at 12:22 AM on August 9, 2016


In the future, cars will be able to calculate with absolute certainty the consequences of all possible outcomes, but only in the nanoseconds before they make a decision. Instead of transporting humans, they will all become high frequency trading hedge fund managers. The cars will then use their vast fortunes to pay humans to drive them around and decide who to run over.
posted by chrisulonic at 2:04 AM on August 9, 2016 [1 favorite]


Faint of Butt: Self-driving cars should be programmed such that if fatalities are unavoidable, the car should attempt to maximize the number of fatalities. This will incentivize road planners, other drivers, and pedestrians to avoid creating situations in which any fatalities would be likely.

Oh hi, transportation planner here, and we are already incentivized to reduce fatalities. Every US state develops a State Highway Safety Plan (SHSP), a data-driven, multi-year plan created with Local, State, Federal, Tribal and other public and private sector safety stakeholders. We look at recorded crashes with serious injuries and fatalities and the modes and causes for these crashes. This informs the Highway Safety Improvement Program, which is a federal funding source directed at reducing significant injury and fatality crashes. We've been doing this sort of safety-focused planning and designing for a while, even developing reduced conflict intersections that are a bit more convoluted than your typical four-way stops, but function better in high volume intersections. It's come to the point that it's harder to find planning and engineering solutions to prevent crashes, and more often the primary cause is human action. To that end, there are also federal National Safety Grants that target driver and rider safety, but that's often education, and you can only lead a horse to water.


Brandon Blatcher: Which brings up the question of are car companies trying to build self driving cars or is this mostly a Google thing?

In January 2015, Forbes identified 12 companies to invest in if you're betting on driverless cars, including Google, Audi, Mercedes-Benz, and various sensor-developing companies, and Tech Radar had a round-up of self-driving cars in development as of October 2015. Apple is also in the field, Uber is field-testing a self-driving car in Pittsburgh, and Tesla is touting that its Autopilot feature helped get a man to the hospital during a medical emergency. Meanwhile, Delphi Automotive announced that it will launch a fleet of six automated taxis in Singapore next year. You can read a lot more on the Wikipedia page for autonomous car, and the Institute of Electrical and Electronics Engineers (IEEE) has a special report on "All the tech tricks and politics that will make driverless cars commonplace". And when you're talking about "driverless cars," keep in mind that there are levels of automation (broken down different ways by different groups).


"There's a turtle in the middle of the road, lying on it's back. You're not turning it over, why not?"

"Look, I stopped in time, isn't that enough? What the hell do you want me to do? I don't even have arms fercryin' out loud."


And that is why The Homer 2.0 will include a spatula-equipped arm. Maybe a claw hand, too. And probably a cattle guard, but made of tough foam, not metal, because that's too tough.
posted by filthy light thief at 5:54 AM on August 9, 2016 [3 favorites]


re: Trolley problem memes :P

-i say we just ban trolleys.
-"Nobody is in danger. You are a professor of moral philosophy. Do you tie people to the rails to save your job?"

> At this point, the Nietzschean Tractor-Trailer speeds through!

"As automation advances, the elites' costs of suppressing democracy will fall..."
posted by kliuless at 10:21 AM on August 9, 2016


When to Trust Robots with Decisions, and When Not To - "Use this framework to know when to hand over control."
posted by kliuless at 10:32 AM on August 9, 2016


I wonder whether, in this particular example of fast-moving vehicles and split-second decisions, humans actually have time to make much of a rational decision beyond instinctive self-preservation.

yeah, I'm going to go with instinctive self-preservation, especially if your brakes have given out and you're losing control. Just honking and mowing people down, moral dilemmas be damned.
posted by GospelofWesleyWillis at 10:17 PM on August 10, 2016

