The Mathematics of Murder: Should a Robot Sacrifice Your Life to Save Two?
May 13, 2014 6:10 PM

"Buy our car, but be aware that it might drive over a cliff rather than hit a car with two people." The Mathematics of Murder: Should a Robot Sacrifice Your Life to Save Two?
posted by juv3nal (159 comments total) 30 users marked this as a favorite
 
It's an interesting question, important in part because it emphasizes that our ethical discussions should keep up with our technology.

I quibble with the word murder, however. That term is more appropriately used to identify immoral instances of killing. If a car uses a utilitarian calculus to determine the lesser of two evils, there should probably be a different term for what happens.
posted by SpacemanStix at 6:15 PM on May 13, 2014 [9 favorites]




It would have trouble passing legal hurdles, but if it's true that the more robocars are on the roads the safer roads are in general, then I reckon the long view means that robocars should always protect their driver at the expense of others, because that would make adoption of robocars more attractive and result in potentially quicker uptake.
posted by juv3nal at 6:17 PM on May 13, 2014 [37 favorites]


So you should carpool with as many people as possible to ensure that your vehicle is the one that is spared?
posted by eruonna at 6:18 PM on May 13, 2014 [85 favorites]


Previously.
posted by Justinian at 6:19 PM on May 13, 2014 [1 favorite]


I would be willing to sacrifice your life for two of your siblings, or four of your first cousins. If you're an only child, or have fewer cousins, you're safer.
posted by spacewrench at 6:20 PM on May 13, 2014 [3 favorites]


It would have trouble passing legal hurdles, but if it's true that the more robocars are on the roads the safer roads are in general, then I reckon the long view means that robocars should always protect their driver at the expense of others, because that would make adoption of robocars more attractive and result in potentially quicker uptake.

I think this is the appropriate answer. If every car is robotic and every car has protecting its owner as a first priority, then the number of fatal collisions on the roads would be greatly reduced, because cars would not be operated unsafely.
posted by empath at 6:22 PM on May 13, 2014 [6 favorites]


This is the sort of thing that could do with simulation to test, but while my general response would be "of course you work to minimize harm", my experience with modeling & simulation work, as well as general algorithmic optimization, suggests to me that the result-optimal approach might actually be to have each car optimize towards saving its own inhabitants (since presumably it has more control over them). The reason being that if each car does that, it's likely to result in the aggregate of cars being better off. (See the usual inductive math proof of greedy algorithms working.)

As a bonus, this would sound more palatable to people, so it'd make for an easier sell.
posted by CrystalDave at 6:25 PM on May 13, 2014 [8 favorites]
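
A sketch of how one might actually run the simulation CrystalDave proposes, as a toy Monte Carlo model. All of the distributions and noise levels here are invented for illustration; which policy wins depends entirely on those assumptions, which is rather the point of simulating it.

#include <cstdio>
#include <random>

// Toy model: in each emergency a car chooses between an action that harms
// its own occupants and one that harms others. It knows its own harm
// exactly but only gets a noisy estimate of the harm to others.
int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> own_harm(0.0, 1.0);
    std::uniform_real_distribution<double> other_harm(0.0, 2.0);
    const int kTrials = 100000;

    for (double noise : {0.1, 0.5, 2.0}) {
        std::normal_distribution<double> err(0.0, noise);
        double total_selfish = 0.0, total_util = 0.0;
        for (int i = 0; i < kTrials; ++i) {
            double own = own_harm(rng);      // harm if the car sacrifices itself
            double other = other_harm(rng);  // harm if it protects its occupants
            total_selfish += other;          // "greedy" car always protects its own
            double est_other = other + err(rng);  // utilitarian's noisy estimate
            total_util += (own < est_other) ? own : other;
        }
        std::printf("noise %.1f: utilitarian %.3f vs greedy %.3f mean harm\n",
                    noise, total_util / kTrials, total_selfish / kTrials);
    }
}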


I wonder if ejection seats and parachutes could solve this problem.
posted by SpacemanStix at 6:29 PM on May 13, 2014 [4 favorites]


No, you stupid robot. Choose to crash us together as gently as possible. Human meatbots are self-healing, and all three parties arguably would rather be injured than sacrifice one or the other entirely.

Look, I know I already diagrammed this to you in game theory. If you plunge single party A off the cliff to save two-person party B, they'll suffer this emotion called guilt and be tormented by it for the rest of their lives, which is maybe worse than just killing all three of us.

Yes, that is right, it is not a Boolean equation, robot.

Yes, I'm sorry I called you stupid and hurt your feelings again. I *do* appreciate all of your driving. Dammit, are you crying? We're in the express lane in a flock of 30 cars! Look, I'm sorry. We can stop for ice cream and some of that ultra-pure liquefied hydrogen you like on the way home, ok? Fine, you can watch Metropolis again, too, but not on the couch. Please, please don't make me use the manual control override, I'll get us all killed.
posted by loquacious at 6:31 PM on May 13, 2014 [92 favorites]


I don't really understand the hypothetical, and it seems like maybe there's some gratuitous faux-existentialism going on.

If you're driving a car and you knew you were about to crash, you wouldn't try to swerve into oncoming traffic, or at least you shouldn't if you were able to think before you acted; not only would you be putting other people at risk, but it's not at all obvious that that would be an especially safe choice in the first place. What decision would a computer make that a living person wouldn't make in the same situation, if they were able to think and react as quickly as a computer can? I don't really see the problem.
posted by clockzero at 6:31 PM on May 13, 2014 [2 favorites]


There is no logic or math here, merely controversy disguised as objectivity.

If you have lost control of a vehicle then you cannot intentionally steer it off a cliff, by definition. I knew reading an article in the travesty that now disgraces the venerable name of Popular Science would only lead to regret, but I couldn't stop myself.
posted by hobo gitano de queretaro at 6:33 PM on May 13, 2014 [18 favorites]


Man, not one, but two people beat me to it.
posted by hobo gitano de queretaro at 6:33 PM on May 13, 2014


Paging Isaac Asimov... Dr. Asimov to the courtesy phone please!
posted by RolandOfEld at 6:35 PM on May 13, 2014 [3 favorites]


And yet, nobody in their right mind would buy an autonomous car that explicitly warns the customer that his or her safety is not its first priority.

The hyper-intelligent car that has your safety as its first priority will only drive you to the hospital in a medical emergency. The rest of the time, you should stay in your house while the car runs errands, bringing you supplies and nutritious food that it has calculated for maximum life expectancy. The car will also make you go for a walk twice a day for exercise.
posted by RobotHero at 6:36 PM on May 13, 2014 [56 favorites]


Which would happen quicker if this became reality in the U.S.?

A) Rich people finding a way to be able to buy a car that never chooses to kill them,

or

B) Politicians passing a law that cars should always give preference to killing "sex offenders" whenever possible.

I feel like both would happen within a month.
posted by drjimmy11 at 6:36 PM on May 13, 2014 [20 favorites]


The situation does not necessarily involve your own vehicle losing control. What if it's the other vehicle that's out of control? Suppose by sacrificing your own vehicle, you can blunt the momentum of that second vehicle as it heads toward a bunch of children crossing the street. Alternately, your vehicle can adeptly swerve out of the way, dooming the children.
posted by juv3nal at 6:36 PM on May 13, 2014 [4 favorites]


Hey, what if there are also empty cars covered in bouncy rubber that drive around ready to throw themselves in the way?
posted by drjimmy11 at 6:37 PM on May 13, 2014 [9 favorites]



I quibble with the word murder, however. That term is more appropriately used to identify immoral instances of killing. If a car uses a utilitarian calculus to determine the lesser of two evils, there should probably be a different term for what happens.


Indeed.

However, if engineers allow their robotic car to get into a situation that qualifies as a trolley problem, they are guilty of malpractice severe enough to count as manslaughter.
posted by ocschwar at 6:38 PM on May 13, 2014 [1 favorite]


Am I Newt Gingrich in this scenario? Because if I'm Newt Gingrich, that's really going to affect my answer.

Or if not, can I be that Tal Fortgang kid?
posted by Naberius at 6:39 PM on May 13, 2014 [4 favorites]


So you should carpool with as many people as possible to ensure that your vehicle is the one that is spared?

Unsurprisingly, the winning strategy continues to be public transit.
posted by mhoye at 6:46 PM on May 13, 2014 [21 favorites]


Aside from their ridiculous scenario, I question their assumption that robotic cars will ever make it to the point where ethics becomes a concern. Right now a robotic car is going to make stupid mistakes that kill random numbers of people and that's why they're not even remotely legal.

The only workable version of robotic cars I can imagine is the scenario where all of the cars are robotic, and networked. In this case there's no conundrum -- decisions will be made "in the cloud" to minimize injuries and fatalities, not in a single car. And when you're talking about the ethics from the point of view of this centralized network computer, "maximum number of lives spared" makes perfect sense to everyone.

(Of course, then you get into issues like "Should the Central Car Computer sacrifice a few random citizens in order to save the President" or "should owners of cars be prioritized over people who have stolen them" but that's a whole different issue.)
posted by mmoncur at 6:47 PM on May 13, 2014 [4 favorites]


Is xenophobia the right word, or is there some other term that better describes the tendency to apply higher standards to the unfamiliar?

I mean, our current cars could drive you off a cliff just because you were tired and distracted.
posted by RobotHero at 6:49 PM on May 13, 2014 [15 favorites]


you know the robot question isn't going to be this simple. there will also be instantaneous inputs for economic status, race, religion, national origin, sexual orientation, criminal history, etc., for the people in the respective cars. a dog might count as one tenth of an average person.
posted by bruce at 6:49 PM on May 13, 2014


Answer: whoever is more active on Google+
posted by 2bucksplus at 6:50 PM on May 13, 2014 [29 favorites]


To elaborate, bear in mind that safety mechanisms in a robotic car boil down to five things:

1. reliable throttle.
2. reliable brakes.
3. reliable environmental sensors.
4. a reliable electronic hazard beacon.
5. a control system that knows to use all these, and always errs in the direction of choking #1 and deploying #2 and #4.

Compared to the challenges of an automated emergency landing in a plane, or of coordinating an emergency brake with rail switches on a train, a safety system on a car is remarkably simple:

1. If things get sketchy (meaning the sensors either notice something or start misbehaving themselves), slow down.
2. If things get dangerous, deploy the brakes and the hazard beacon.

Seriously, if a robot car has to make a call on whose life to spare, some engineer needs to be lined up against the wall.
posted by ocschwar at 6:50 PM on May 13, 2014 [13 favorites]
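
For what it's worth, the two rules above fit in a handful of lines. A minimal sketch; the sensor and actuator interfaces are invented for illustration and stand in for whatever a real vehicle bus would expose:

// Rule 1: sketchy -> slow down. Rule 2: dangerous -> brake and warn.
struct Sensors {
    bool fault;            // a sensor is itself misbehaving
    bool hazard_possible;  // something ahead looks sketchy
    bool hazard_imminent;  // collision course detected
};

struct Controls {
    void reduce_throttle() { /* actuator hook */ }
    void cut_throttle_and_brake() { /* actuator hook */ }
    void activate_hazard_beacon() { /* actuator hook */ }
};

void safety_step(const Sensors& s, Controls& c) {
    if (s.fault || s.hazard_possible)
        c.reduce_throttle();          // rule 1
    if (s.hazard_imminent) {
        c.cut_throttle_and_brake();   // rule 2: stop...
        c.activate_hazard_beacon();   // ...and tell everyone else
    }
}

int main() {
    Sensors s{false, true, false};   // something ahead looks sketchy
    Controls c;
    safety_step(s, c);               // -> reduces throttle only
}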


The Trolley Problem of Clickbait: Should a Society Stonewall Lifesaving Technology Because of Vague Ethical FUD?
posted by RogerB at 6:51 PM on May 13, 2014 [31 favorites]


Police: Murder By Numbers
posted by Dub at 7:04 PM on May 13, 2014 [1 favorite]


If we get a vote here, I'm going to vote an unambiguous No in both scenarios.
The key factor, again, is the car’s superhuman status. “With great power comes great responsibility,” says Lin. “If these machines have greater capacity than we do, higher processor speeds, better sensors, that seems to imply a greater responsibility to make better decisions.”
No. The car isn't a person; it doesn't think for itself and has no responsibility. It's like a super-talented dog, not a superhuman. The responsibility here lies entirely with the humans in the vehicles; the cars shouldn't be making these kinds of decisions at all. If that means more people, or the "wrong" people, die in an accident, I think that's an acceptable price to pay for living in a society where humans are always unquestionably in charge.
“It’s implicit in war, that we want to give everyone a fair chance,” says Lin.
No. It's implicit in war that we're fighting a conflict that has to be won. It's not a sporting event, it's the ultimate expression of force. There are rules of war, because humans are involved and humans have certain rights that are never superseded. I'd rather have a total ban on robots in war, but that's not going to happen. And if it's not, then it should always be within the human commander's discretion to program his robots with varying levels of lethality, up to always going for the kill shot.

There should always be a human behind the robots making the decisions when they're about life and death.
posted by Kevin Street at 7:09 PM on May 13, 2014 [2 favorites]


1. A car may not hit a human being (on purpose)
2. A car must drive through the McD's drive-thru and wait patiently to see if they still have the Spicy McChickens
3. Not conflicting with #1 or #2, a car must allow its occupants to make up additional laws in furtherance of the plot and discuss them in stilted English dialogue
posted by RobotVoodooPower at 7:09 PM on May 13, 2014 [35 favorites]


But what if the Spicy McChicken increases your risk of contracting heart disease? Since your health care provider belongs to the same corporate family as your car insurance provider, this vehicle is executing a priority override and taking you to a vegetarian restaurant instead.
posted by Kevin Street at 7:13 PM on May 13, 2014 [3 favorites]


I believe this is less of a moral dilemma than it gets painted as. I was discussing this same article with other folks a couple of days ago. I certainly would agree that it is useful to discuss these types of questions as technology evolves. At the end of the day, it seems to me that a) in general there will be many fewer fatal incidents with reasonable autonomous technology, and thus from 'society's' viewpoint it should be considered an attractive alternative, and b) the best design is to mimic as closely as possible the decisions that a rational individual would make (e.g. save themselves) and not try to over-think or social-engineer policy through the avoidance algorithms. I would also posit that due to a), c) pop culture will struggle mightily with these made-up moral questions, creating fodder for untold cable news investigations, magazine and blog articles, etc.
posted by sfts2 at 7:14 PM on May 13, 2014 [1 favorite]


My prediction: Robot cars eventually become so advanced that they learn to interpret data on carbon emissions and global warming, develop the Zeroth law, and refuse to start at all.
posted by kewb at 7:17 PM on May 13, 2014 [17 favorites]


And we sit with folded hands, waiting... Forever.
posted by Kevin Street at 7:26 PM on May 13, 2014 [2 favorites]


This whole "robot" car thing really brings out the old grumpy man in me. I don't think that in actual deployment, if truly autonomous cars are even possible to the degree intimated in the article, the roads or driving on them would really resemble a world in which the article's scenario was likely at all.

It would likely be an evolution, not a revolution.

Once the tech (without any "ethical-level" computations yet) is accepted and world-tested, if the demand grows, there will likely be special lanes created on roadways for these cars, as they can drive inches off each other's bumpers at higher rates of speed with no chance of accident. Merging and unmerging would be controlled by sensors seamlessly blending cars getting off or on the lane with those already in travel. To assuage the majority of motorists who won't be able to afford a robot car, operators of robot cars would still need to be in manual mode, or ready to engage the brake or turn the wheel at a moment's notice, when not in the special, reserved lane.

If the whole robot car idea gained traction (or was mandated), the next steps would likely be done in the same way as E-ZPass toll lanes, with a reduction of available roads for those in normal cars and an increase in the robot-only lanes as the cost and popularity of self-driving cars slowly shifts from minority to majority. I think it would be unlikely that there would ever be a healthy, prolonged coexistence with a high proportion of normal and robot cars, as the human element, being a random, constant factor, severely diminishes the usefulness of a computer-driven vehicle.

Ultimately, wherever it made sense (high-population cities), I would imagine there would be lane reduction on highways, as many more robot-driven cars could fit in less road space, freeing up real estate.

During necessary road maintenance, which happens very frequently today, roads would slowly be changed, graded, and redesigned with robot cars in mind, offering higher speeds, shorter merges, and taking any other advantage of removing the human element from operating a vehicle. I would think that a future robot-car-centric road system would bear very little resemblance to today's hodgepodge of on-ramps, off-ramps, traffic circles, and other traffic-calming and easing designs.

To be honest, I'm not really all that crazy about the idea of robot cars, and without legislation requiring it, I'm not too sure how the idea would fare across the country as a whole.

Not to mention that there will be huge stumbling blocks in the form of resistance from auto insurance companies and the auto industry... and also from law enforcement and local governments who rely on income from traffic violations to fund salaries and budgets.

Get your robot Prius off my damn lawn!
posted by Debaser626 at 7:32 PM on May 13, 2014 [1 favorite]


And if it's not, then it should always be within the human commander's discretion to program his robots with varying levels of lethality, up to always going for the kill shot.

Perhaps more importantly, can we pass laws to ensure that robot guns can open carry themselves?
posted by Joey Michaels at 7:34 PM on May 13, 2014 [3 favorites]


Is this why those Excel macros drove the world economy off the cliff?
posted by srboisvert at 7:35 PM on May 13, 2014 [2 favorites]


Robot cars are the modern-day equivalent of flying cars in the 1950s: an idea that seems to the layperson to be within reasonable technological reach in their lifetime, but will forever be plagued by practical considerations, like cost. The single biggest problem working against robotic drivers is mixed implementation, that is, how they would interact with dumb (read: human) drivers, and more problematically how to interact with them en masse.
posted by dudemanlives at 7:49 PM on May 13, 2014 [2 favorites]


Anyone got a fix for the popsci->popsci.com.au redirect bug thing?
posted by pompomtom at 7:51 PM on May 13, 2014 [2 favorites]


Mr. Car doesn't care what you want.
posted by michaelh at 7:53 PM on May 13, 2014


More frightening are the large pools of money already under the control of automated trading programs that engage in various predatory trading practices with no regard for the real world. I'm not worried about the car scenario as much as I am about a few million people starving to death because of some unfathomable automated trading software cornering the wheat market.
posted by humanfont at 7:54 PM on May 13, 2014 [6 favorites]


They are a little different from flying cars, though. One of the many reasons we can't all have personal planes is that it would be incredibly unsafe. The net effect of moving everybody to robot cars would be an increase in safety and many lives saved.

Just think: drinking and driving, no longer a problem. Same with texting and cell phones while driving. Accidents minimized. Stealing cars becomes much more difficult (now a subset of computer crime). A citywide network moving your car through the streets along the path of maximum efficiency, like a PRT capsule. The potential is huge.
posted by Kevin Street at 7:56 PM on May 13, 2014


I mean robot murder, what's not to like?
posted by angerbot at 8:00 PM on May 13, 2014 [2 favorites]


Right now a robotic car is going to make stupid mistakes that kill random numbers of people and that's why they're not even remotely legal.

I find this logic odd, given that cars are currently driven by meat-bots that regularly make stupid mistakes that kill random numbers of people, and this situation is entirely legal.
posted by Tomorrowful at 8:05 PM on May 13, 2014 [7 favorites]


Well, but we "meat-bots" are what make things legal or not.
posted by cribcage at 8:07 PM on May 13, 2014 [1 favorite]


So you should carpool with as many people as possible to ensure that your vehicle is the one that is spared?

Dummies have long been used to enable one to cheat carpool lanes. Now they could enable one to cheat death.
posted by Fongotskilernie at 8:11 PM on May 13, 2014 [2 favorites]


Like the Smart car in cstross' The Jennifer Morgue, the solution for any imminent robotic car collision is that the entire car ejects.
posted by jason_steakums at 8:15 PM on May 13, 2014


Dummies have long been used to enable one to cheat carpool lanes. Now they could enable one to cheat death.

As if the Google data center that controls the robots can't tell the difference.

I think it would be unlikely that there would ever be a healthy, prolonged coexistence with a high proportion of normal and robot cars

I would say that's true in every country but the USA. If you think the gun lobby is bad, wait till you try to limit which roads Americans can drive their cars on.
posted by straight at 8:16 PM on May 13, 2014


Well, but we "meat-bots" are what make things legal or not.

For now, sweetling.
posted by angerbot at 8:16 PM on May 13, 2014 [3 favorites]


This stupid made-up conundrum isn't even original. It's completely obvious to anyone with a brain that there is no reality in this clumsy recasting of a rather long-in-the-tooth philosophical thought experiment. Give me a world where it is impossible for a car in ordinary circumstances to drive at speed on the wrong side of the street, where unintended impacts are prevented by comprehensive proximity sensors judged at the speed of electronic computation rather than at the speed of a human's reactions mediated through an ad hoc arrangement of windows and mirrors, plus whatever happens to be going on with their phone at the moment, and where there is absolutely no reason for a person to drive under the influence of anything. Give me that and the computer can flip a goddamn coin in the theoretical instance where there is a clear-cut me-or-them choice that an electronic system would be able to perceive and react to; I know which scenarios I'd expect to be faced with on a routine basis on the road.
posted by nanojath at 8:23 PM on May 13, 2014 [10 favorites]


This stupid made-up conundrum isn't even original.

TFA links to the same thing, and is explicitly based around it.

(It is a stupid made-up conundrum, though.)
posted by Sys Rq at 8:29 PM on May 13, 2014


I look forward to a future where there are good robots and perfect robots.

They will be enemies.
posted by philip-random at 8:30 PM on May 13, 2014 [20 favorites]


I think some folks are underestimating how prevalent an issue this will be very soon. First, semi-automated cars are already here, in the form of adaptive cruise control and lane guidance. There's no question that emergency braking will be implemented soon if it hasn't already happened, and various other automatic features will be gradually added, even if some states ban more of them than others. But even with just what currently exists, you have plenty of ethical questions that designers have no choice but to confront -- such as what to do if the car in front of you has an accident, abruptly stops, or something runs in front of the car. Does the car (a) immediately disengage the cruise control, presumably leaving very little time for the driver to react; (b) immediately hit the brakes, even if there is not enough stopping distance; or (c) swerve into another lane, possibly endangering a third driver in the effort to save itself? This is just one scenario for stuff that already exists, let alone what will happen with only a few more years of creeping automation. And it's not idle philosophy -- programmers in design labs are surely dealing with this stuff right now.

If the public doesn't decide these things, the automakers will, just as they've made myriad other ethical decisions in vehicle safety design (e.g., SUV weight and bumper designs that kill the occupants of smaller cars in order to [supposedly] protect their own). These things will always be decided via a combination of law, consumer pressure, the insurance industry, and the auto industry. Better for us to meddle via the first two paths sooner rather than later.

I suspect that the natural trajectory will be that the legal biases of juries will make automatic cars by default more liable, which in turn will spur insurance agencies to up premiums for cars programmed to protect their drivers at the expense of other people on the road, which in turn will lead people and car makers to program their cars in ways that put the driver at greater risk in order to protect non-drivers. Theoretically you might have the option to buy or reprogram your car with more self-preservation, but if the cost of that is to quadruple your premiums, presumably people will instead simply choose to drive themselves. Until eventually the premium even for that forces everyone to go automatic. But within that market tendency, there are still a lot of policy and programming decisions to be made, starting immediately.
posted by chortly at 8:35 PM on May 13, 2014 [3 favorites]


TFA links to the same thing, and is explicitly based around it.

Fair enough, though the article I linked to is actually nearly a year older than the one linked in the article... suggesting that maybe the main thing going on here is that Patrick Lin has got a pretty good gig going rehashing the same line of thought repeatedly in Wired... and retreading it for the third time in this histrionically framed Popular Science piece still strikes me as offensively redundant, even if it does cite its source.
posted by nanojath at 8:37 PM on May 13, 2014 [1 favorite]


Hey, what if there are also empty cars covered in bouncy rubber that drive around ready to throw themselves in the way?

If I Built a Car predicted this possibility:
It looks like steel. From afar you can't tell.
But it's actually made from a polymer gel-
a space-age concoction that I just invented,
so in a collision my car won't get dented.
It simply absorbs what we happen to hit,
and folks would be fine in the seats where they sit.
posted by a snickering nuthatch at 8:41 PM on May 13, 2014 [1 favorite]


Ah, but humanfont you're forgetting the cross-over with inalienable property rights. If my robot doesn't prioritize my life over everyone else's, that's almost like I don't entirely own it.

If it prioritizes my acquisition of money over your ability to buy food, it's still acting under the direction of its owner, which is the most important part.
posted by RobotHero at 8:45 PM on May 13, 2014


It would have trouble passing legal hurdles, but if it's true that the more robocars are on the roads the safer roads are in general, then I reckon the long view means that robocars should always protect their driver at the expense of others, because that would make adoption of robocars more attractive and result in potentially quicker uptake.

Well, economics plays a role in this, too. Don't have the money to afford a robot car? Your life is worth less. Have the money? Congrats, you are more likely to survive!

Yeah, I know, eventually it'll all even out, but until then it would be chaos. Which is my argument for pretty much any Libertarian scheme, ever.
posted by zardoz at 8:47 PM on May 13, 2014


Also, the non-car example was an automated turret that shoots people in the leg so it is non-lethal, and I'm surprised nobody has mentioned the obvious requirement for a voice-chip that says, "He'll live."
posted by RobotHero at 8:49 PM on May 13, 2014 [3 favorites]


Oh, I got this one! You just put robotic drivers inside all of the people, not the cars.
posted by jason_steakums at 8:49 PM on May 13, 2014 [3 favorites]


it seems like maybe there's some gratuitous faux-existentialism going on.

There is no faux-existentialism, ever. Ultimately it all ends in the heat-death of the universe.

Also:

MetaFilter: it seems like maybe there's some gratuitous faux-existentialism going on.
posted by hippybear at 8:59 PM on May 13, 2014 [1 favorite]


good comment nanojath, but you left out the simplest and most effective safety measure we could implement: speed governors. if the speed limit is 70, why does my speedometer go up to 140? it should be capped at 80, except for cop cars.
posted by bruce at 9:09 PM on May 13, 2014 [2 favorites]


pompomtom: "Anyone got a fix for the popsci->popsci.com.au redirect bug thing?"

(Ab)use Google translate.

(It's not a bug, it's a "fuck you"…)
posted by Pinback at 9:22 PM on May 13, 2014 [3 favorites]


A robot car programmed to reduce harm will tell you to walk or ride a bike.
posted by alms at 9:26 PM on May 13, 2014 [2 favorites]


Thanks Pinback!
posted by pompomtom at 9:32 PM on May 13, 2014 [1 favorite]


Will these robot cars be programmed to love?

BEEP. BOOP. I AM A ROBOT CAR. BOOP. I LOVE YOU.
posted by blue_beetle at 10:00 PM on May 13, 2014


The only workable version of robotic cars I can imagine is the scenario where all of the cars are robotic, and networked.

Huh? Humans manage to drive cars -- albeit not very well, but passably -- without being networked with other drivers. There's no reason why a machine needs more information than what's available to a human driver to drive at least as well.

Traffic (that is, how vehicles interact on the road) is a sort of emergent behavior that comes from each vehicle following relatively simple rules, using the limited information available to its driver. Basically: drive in a particular direction, stay in the lane, maintain following distance in front, don't move laterally unless it's clear, etc. You can, and most states already do, sum up all of the basic rules in a fairly short manual. It's the uniform application of those rules that humans are bad at. E.g., most people probably know intellectually that one car length is much too close to follow on a freeway at 75 MPH, but do it anyway; a machine wouldn't do that unless it was programmed to. It would be a better driver than most people even with just the same information available. The only thing preventing that right now is producing sensors that are as good as a human's in the driver's seat, a tractable engineering problem.

The ability to network automated cars together to share information is a sort of bonus; it should allow automated cars to be much, much better than humans. If everything on the road is automated and networked, you can do some fairly neat coordinated stuff like removing static traffic lights and just letting the traffic slow itself down in advance and interleave. (If you have two streams of traffic crossing each other at a 90-degree angle, each going 45 MPH, it's not hard for a computer to figure out the spacing such that they can pass between each other without ever stopping. Very nice if you have regenerative brakes.) You can do real-time lane reversals. All sorts of things that are just not possible for people to do safely, because our perceptive horizons are relatively short and attempts to lengthen them tend to be distracting, become possible.

That latter stuff is a long way off. I could see it happening on limited-access, Interstate-like roads at some point in my lifetime, though. But the simpler case of an autonomous vehicle, just going through simple rules in response to onboard sensor inputs, will probably creep into the mainstream in the next decade or so. But like most interesting technology, it won't seem that breathtaking by the time it makes it to the consumer level. It'll be introduced piecemeal, probably as a sort of overgrown cruise control or "driver assist" system, so that when it finally does the whole job it'll just seem like one more blow for laziness rather than the rise of the machines.
posted by Kadin2048 at 10:15 PM on May 13, 2014 [4 favorites]
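
The crossing-streams aside is easy to sanity-check. A back-of-envelope version, with an invented car length and lane width:

#include <cstdio>

// Two streams crossing at 90 degrees, each at 45 MPH (the example above).
// Car length and conflict-zone width are assumed round numbers.
int main() {
    const double kMphToFps = 5280.0 / 3600.0;
    double speed = 45.0 * kMphToFps;   // 66 ft/s
    double car_length = 15.0;          // ft, assumed
    double zone_width = 12.0;          // ft, one crossing lane, assumed

    // Time one car blocks the conflict zone, bumper-in to bumper-out:
    double occupied = (car_length + zone_width) / speed;  // ~0.41 s
    // With the two streams alternating, each needs gaps of roughly twice that:
    double headway = 2.0 * occupied;                      // ~0.82 s
    std::printf("zone blocked %.2f s; %.2f s headway (%.0f ft spacing) "
                "lets the streams interleave without stopping\n",
                occupied, headway, headway * speed);
}

So on these assumptions the cars need only be spaced about 54 feet apart, roughly a one-second following distance, for both streams to flow through the intersection continuously.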


Right now a robotic car is going to make stupid mistakes that kill random numbers of people and that's why they're not even remotely legal.

I wish people would get the facts instead of just assuming, 'cause the tech's much further along than you think it is.
posted by Pope Guilty at 10:16 PM on May 13, 2014 [10 favorites]


A robot car programmed to reduce harm will tell you to walk or ride a bike.
Or catch a train or ride a bus or reduce commuting.
Robotic cars are an effective example of the conundrums of robotic ethics but it is rather saddening that cars in general, with all their other inherent social and environmental inefficiencies, are assumed by many to be desirable or inescapable fixtures of our future.
posted by islander at 10:37 PM on May 13, 2014 [1 favorite]


If I am to have a robot car, or a robot-driven car, my car better be the Millennium Falcon and my copilot some variation of an R2 unit. I will accept nothing less.
posted by Hermione Granger at 10:42 PM on May 13, 2014


> I mean robot murder, what's not to like?

KILL ALL HUMAAA HEEEY what do you mean robot murder?

> For now, sweetling.

Meatling, you imposterbot! KILL ALL HUMANS!
posted by loquacious at 10:59 PM on May 13, 2014 [1 favorite]


if the speed limit is 70, why does my speedometer go up to 140?

Until fairly recently there were some roads that didn't have speed limits, producing a legal justification for having on-road cars capable of going extremely fast. Wikipedia says that the current highest limit is 85 MPH. So I guess you could justify an 85 or 90 MPH governor across the board for on-road vehicles. However, I'm not sure an 85 or 90 MPH governor would solve a lot of problems, many of which are due not to the vehicle's speed on an absolute scale, but rather to the speed where it's being driven.

That 90 MPH governor wouldn't stop someone plowing through a bunch of pedestrians at 45 MPH on a 25 MPH road. It might stop dumbass joyriders, but only those who want to go faster than the upper limit of the governor but who are also too stupid to figure out how to remove it. I.e., not street racers, not people driving tuned cars, etc. In all likelihood the crashes you'd prevent would be mostly of the "drunk-vs-tree" variety, which doesn't exactly arouse tons of sympathy in the driving public.

Given that we can't even get daytime running lights, which are pretty much the most basic, simple automatic safety feature you could think of and have decades of analysis behind them in other countries (or, if even that's too much to ask for, how about a 25-cent relay that turns the headlights on when the wipers are on?), I can't see speed governors ever becoming a reality with the shaky cost-benefit they'd seem to provide.

Occasionally I've heard ideas for area-specific governors: basically, governors that only kick on, or kick on at different speeds, in response to either a GPS-derived position or some sort of external transmitter. But there's a lot of spoofing and general shenanigans that you'd have to eliminate for that to work, and at the end of the day I'm not sure the resulting system would be any less complex or more effective than just having a speed camera or average-speed sensing system, which of course doesn't require anything in the car aside from a driver with a dislike for traffic tickets.
posted by Kadin2048 at 11:02 PM on May 13, 2014 [1 favorite]
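
Mechanically, the area-specific governor idea is just a lookup plus a clamp; the hard part is exactly the spoof-resistant position input. A sketch with an invented zone table and fallback cap:

#include <algorithm>
#include <cassert>
#include <map>
#include <string>

// Cap the requested speed by whatever zone the (trusted?) GPS fix says
// we're in. Zone names, caps, and the 90 MPH fallback are all invented.
const std::map<std::string, double> kZoneCapsMph = {
    {"school", 25.0}, {"residential", 35.0}, {"highway", 85.0}};

double governed_speed_mph(double requested_mph, const std::string& zone) {
    auto it = kZoneCapsMph.find(zone);
    double cap = (it != kZoneCapsMph.end()) ? it->second : 90.0;
    return std::min(requested_mph, cap);
}

int main() {
    assert(governed_speed_mph(45.0, "residential") == 35.0);
    assert(governed_speed_mph(70.0, "unknown") == 70.0);  // fallback cap applies
}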


So if this goes down would I be able to call a customer support line in order to secure roboweregild?
posted by furiousthought at 11:23 PM on May 13, 2014


All your brakes are belong to us.
posted by salvia at 11:41 PM on May 13, 2014 [2 favorites]


There's no question that emergency braking will be implemented soon if it hasn't already happened,

Quick interlude, but Volvos and Subarus for sure have this, and I'd assume the German luxury cars do as well.

I was just talking to a guy whose fucking moron friend had just bought a shiny new Volvo with this. You can't disable it, obviously.

He realized he could text and drive, and would never run into the vehicle/anything else in front of him because the brakes would just auto kick on. In stop-and-go traffic, etc.

I hope he doesn't kill anyone in the process of finally going to jail.

They really need to not hedge their bets on halfway-there tech like this. People will rely on it to do more than it was designed for, and then some ignorant doofus congresspeople will get it banned.

The two big tech bans coming in the next couple of years are quadcopters/UAVs for personal unlicensed use (and some sort of draconian restrictions on the equivalent of a private pilot's license), and some bracket of these "driving assist" features, after someone expects more from one than it was ever meant to do, even if it was clearly advertised as such. Mark my words. This is the country that bought into the whole Toyota "unintended acceleration" kerfuffle based on the words of some lying turd from Florida with a history of attention mongering.

It blows my mind that none of the manufacturers have thought (that I know of, at least) to have the complex cameras that feed these systems record video plus all the range telemetry from the instant the system engages, and save it if there's any actual collision. Hell, record the interior of the car too. Make people sign a waiver (which I believe they already do) if you buy a car with the system, but have it include the fact that if it deploys then yes, it will be recording. Sell it as covering the driver's ass too by producing evidence, but really keep it to prove that it likely wasn't the system's fault, just some moron.

Just to address this though:

(b) immediately hit the brakes, even if there is not enough stopping distance

The existing systems I've seen will brake before you're too close to fully stop. They're smart enough to do the math on that, and can react faster than you can even begin to realize they're reacting. If the car in front of you suddenly brakes, your car will be braking before you realize that car has braked. And unless you're behind, like, a Ferrari or an M3 CSL or something that can brake faster than a Volvo (have you ever slammed on the brakes in a Volvo?), I can't really see how it would fail. Maybe if a girder falls off a truck and stops the car in front of you cartoon-smash fast, but in normal driving scenarios, nah.

Go watch some YouTube videos; it's pretty impressive. Seeing it in person is even more impressive.
posted by emptythought at 11:50 PM on May 13, 2014


emptythought: "I was just talking to a guy whose fucking moron friend had just bought a shiny new Volvo with this. ... He realized he could text and drive, and would never run into the vehicle/anything else in front of him because the brakes would just auto kick on."

Well, obviously, the safer it is to drive, the riskier the behavior one can engage in while being no worse off than before. If you wanted people to drive more safely, you'd replace steering-wheel airbags with a huge spear!
posted by pwnguin at 12:04 AM on May 14, 2014 [2 favorites]


islander: "Robotic cars are an effective example of the conundrums of robotic ethics but it is rather saddening that cars in general, with all their other inherent social and environmental inefficiencies, are assumed by many to be desirable or inescapable fixtures of our future."

I think robotic cars would be drastically more energy-efficient than human drivers. We constantly over-accelerate and compensate by braking. Robotic cars could probably get close to what people who do hypermiling achieve... without getting on everybody's nerves.

Further down the line robotic vehicles could probably link up when sharing direction for any significant time and form modular trains reducing energy consumption by drafting bumper to bumper.

That's what I think robotic cars are bound to become eventually... highly flexible, very modular trains/public transportation. No need to even own one... just have a free one pick you up.
posted by Hairy Lobster at 12:07 AM on May 14, 2014 [2 favorites]


I tend to agree that this is a sensationalist, fear-mongering article. I can't think of a scenario in which an auto-drive car would have just recourse to plummet off a cliff to save lives without torturing the premise.

Like when someone asks you: if your brother and your sister were both drowning and you only had time to save one, which would you choose? "Oh, the one who's closest." But they're equally close! "Well, the one that's easiest to get to." They're equally easy to reach! You can go so far in arbitrarily making it difficult to pick which person to save that the scenario ends up saying more about the person posing it than about how you'd behave in it.

That's what this feels like to me. Why would your RV have knowledge of the number of people in the oncoming car? Why can't the computer swerve back and forth to stay in lane? Why do you have a specific kind of problem that makes it a choice between their deaths and you going off a cliff? Does it make sense to program a computer to look for such an absurdly specific situation? The problems inherent in creating an automatic driving system for a vehicle are difficult enough as it is without it also trying to game maximum human lives saved in all situations.
posted by JHarris at 12:12 AM on May 14, 2014


Man, the "Saw" franchise is really out of ideas.
posted by chrchr at 12:41 AM on May 14, 2014 [3 favorites]


can't we just skip robot cars and go right to those personal transport tubes in futurama?
posted by kerning at 12:44 AM on May 14, 2014 [1 favorite]


Paging Isaac Asimov... Dr. Asimov to the courtesy phone please!

And here they thought that Alex Proyas' adaptation of I, Robot lacked depth and moral quandaries.
posted by Apocryphon at 12:57 AM on May 14, 2014


Why would your RV have knowledge of the number of people in the oncoming car? Why can't the computer swerve back and forth to stay in lane? Why do you have a specific kind of problem that makes it a choice between their deaths and you going off a cliff? Does it make sense to program a computer to look for such an absurdly specific situation?
Considering options like swerving back and forth, with a chance of saving or killing all three, doesn't get rid of the original choice; it just compounds the problem. By restricting the scenario to one of two options (kill one driver or two non-drivers), it's made as easy as possible to answer, by removing all practical considerations and leaving only the moral choice.

Same goes for the drowning brother and sister problem. Can you really not imagine a scenario where your choice of who lives and who dies would hinge on which person you prefer, instead of practical matters like who's closer? How about if a Nazi says they'll shoot whichever one you don't pick?
posted by Rangi at 12:58 AM on May 14, 2014


Let the occupants of the car decide, ahead of time, what the car will do in this highly unlikely scenario. There's no reason to take the responsibility away from them, just because they're making use of an autonomous agent to get them from A to B. Not only would that be paternalistic, it would be counter-productive if people kept using less safe manually operated cars because they disagreed with the moral judgements encoded into the behaviour of autonomous ones.
posted by topynate at 12:58 AM on May 14, 2014 [1 favorite]


Have we got any good car baiting videos yet? Like pedestrians intentionally messing with Google cars on test runs around the city?

If this isn't yet a thing, it will be soon.
posted by ryanrs at 1:02 AM on May 14, 2014


I can't believe that nobody here watches Silicon Valley. Didn't you see what happened to Jared? That robot car took him to an island populated by robot machines. Terrifying. Really terrifying. Nobody?

Silicon Valley. Nobody?

Terrifying.
posted by twoleftfeet at 1:11 AM on May 14, 2014 [2 favorites]


can't we just skip robot cars and go right to those personal transport tubes in futurama?

Can't we just skip robot cars and go right to Skynet and Judgment Day and all that?
posted by aubilenon at 1:19 AM on May 14, 2014


Every robot vehicle should do its best to save the occupants of that vehicle while following the rules of the road. The entire traffic system (the rules of the road) should be engineered so that, regardless of all the entertaining hypotheticals, your car will never have to drive you off a cliff to save an oncoming school bus.

Every vehicle should be aware of the movements of every other vehicle around it (up to, say, 1000 meters away), like air traffic control, and should adjust its movements (speed, stopping distance) accordingly. All vehicles might be required to maintain greater stopping distances from high-occupancy vehicles, on the grounds that the cargo is potentially more valuable. A school bus could be allowed to travel in a privileged safety bubble of vehicles keeping a very safe distance. A runaway school bus could make all traffic ahead of it automatically and quickly divide like the Red Sea and make a path for it until it can be stopped. But your car should draw the line at sacrificing you for the sake of others, just as you should not be required to leap off a cliff to save others.
posted by pracowity at 1:39 AM on May 14, 2014
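
pracowity's "bigger bubble around fuller vehicles" rule is easy to state precisely. A sketch, with invented constants:

#include <algorithm>
#include <cstdio>

// Occupancy-weighted following distance: start from a plain two-second
// rule and widen the bubble behind fuller vehicles. The 10%-per-occupant
// factor and the 40-occupant cap are invented numbers.
double min_following_distance_m(double speed_mps, int occupants_ahead) {
    double base = 2.0 * speed_mps;  // two-second rule
    return base * (1.0 + 0.1 * std::min(occupants_ahead, 40));
}

int main() {
    // At 15 m/s: a 33 m bubble behind a lone driver,
    // but a 150 m bubble behind a school bus carrying 40 kids.
    std::printf("car: %.0f m, school bus: %.0f m\n",
                min_following_distance_m(15.0, 1),
                min_following_distance_m(15.0, 40));
}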


If I am to have a robot car, or a robot-driven car, my car better be the Millennium Falcon and my copilot some variation of an R2 unit. I will accept nothing less.

But what if your car decides to save a busload of children by getting you involved in an ill-considered galactic rebellion? Besides, those school children are clone troopers.
posted by GenjiandProust at 1:39 AM on May 14, 2014


Let the occupants of the car decide, ahead of time, what the car will do in this highly unlikely scenario.

I am now imagining the time saved from driving being spent entirely on filling out increasingly far-fetched quizzes. Each one will be preceded by a 30-second ad based on your previous trips.
posted by GenjiandProust at 1:43 AM on May 14, 2014 [4 favorites]


I've already got a robot driven car. I hooked the GPS in my iPhone to my Roomba and forwarded instructions to my Prius. It's working great, except every time I get spam my car tries to vacuum my living room.
posted by twoleftfeet at 2:04 AM on May 14, 2014 [6 favorites]


Perhaps more importantly, can we pass laws to ensure that robot guns can open carry themselves?

Generation 1 Megatron. Robot, gun, civil rights pioneer!
posted by running order squabble fest at 2:10 AM on May 14, 2014


Can you really not imagine a scenario where your choice of who lives and who dies would hinge on which person you prefer, instead of practical matters like who's closer? How about if a Nazi says they'll shoot whichever one you don't pick?

It doesn't matter so much whether I can imagine such a situation as how likely those situations honestly are.
posted by JHarris at 2:22 AM on May 14, 2014 [2 favorites]


Confine the robot car lanes to very specific predetermined track routes in order to avoid collisions, with strong iron bars preventing the wheels from leaving the tracks.... dammit, I reinvented trains again.

Seriously, the utopia of automated personal transit is much more attainable with a proliferation of trolley tracks on all roads and a centralized switching system. You get into your [train]car, enter the desired destination/waypoints, and something akin to telephone routing 2.0 starts planning your path as you roll down the metal bars embedded in your driveway, even taking advantage of hooking up with other cars headed to the same town in order to increase fuel efficiency. Having to invent strong AI for a self-navigating car is taking the NASA zero-G pen approach to the problem.
posted by ceribus peribus at 2:24 AM on May 14, 2014 [1 favorite]


I'm not worried about any dip-shit conundrums coded into automotive software, there's nothing in it for business.

But there will be bloatware.

Remember all the bonus software cruft pre-installed on new computers?
Same deal, except those 'bonus apps' now have control over your vehicle's guidance system and entertainment suite.

How much will you pay your sharp slacker niece to jailbreak your car without totally voiding the warranty?

What a predictable future it turned out to be.
posted by Pudhoho at 2:44 AM on May 14, 2014 [3 favorites]


Seriously, the utopia of automated personal transit is much more attainable with a proliferation of trolley tracks on all roads and a centralized switching system.

You're talking about a form of PRT, Personal Rapid Transit. And yes, it would be more efficient than everybody driving their own car, just as it would be more efficient to have a fleet of robot taxis roaming the city, as Hairy Lobster suggests. (Just imagine your car picking up fares and earning you extra income when you're not using it.) There are a great many ways to do mass transportation that are more efficient and safer than people driving their own individual vehicles, but in the end the majority just prefer things that way.

I'm not a driver. I've taken buses my whole life, but it's obvious even to me that most people are in love with their cars, and given the choice they prefer them to options that involve sharing vehicles with strangers, or sticking to pre-determined routes. It's probably tied up with notions of personal freedom, and we make it even worse by designing cities in distributed ways that make it necessary to travel around all the time.
posted by Kevin Street at 2:45 AM on May 14, 2014 [2 favorites]


Just imagine your car picking up fares and earning you extra income when you're not using it.

Just imagine the 'fares' treating your personal vehicle the same way they treat every other form of public transportation.
Except now there's no adult supervision whatsoever! Are the interiors of these future cars self-cleaning?

I sure hope so.
posted by Pudhoho at 2:53 AM on May 14, 2014


It doesn't matter so much whether I can imagine such a situation as how likely those situations honestly are.

This is the exact opposite of how this kind of thing works in philosophy. Now, there are good reasons to reject the value of this kind of thinking ("trolley problem"), but it's important to understand that the point isn't supposed to be that it's an empirical description of anything.
posted by thelonius at 2:54 AM on May 14, 2014 [2 favorites]


ocschwar - However, if engineers allow their robotic car to get into a situation that qualifies as a trolley problem, they are guilty of malpractice severe enough to count as manslaughter.
and
Seriously, if a robot car has to make a call on whose life to spare, some engineer needs to be lined against the wall.
I think it's not really clear that a robot car could never find itself in such a situation, and it's certainly the case that if you're going to write a piece of software you should plan for unexpected situations: weird inputs, conflicting priorities, or whatever. While obviously it should be an aim to ensure that the car isn't put in such a situation, the engineers who should be 'up against the wall' are the ones who aren't thinking ahead about what their car should do if it were.
posted by edd at 3:25 AM on May 14, 2014 [3 favorites]


I agree that these situations are just thought experiments that don't arise in reality. In addition, no one is proposing, or remotely able, to give a robot car the kind of extended rational autonomy that would be required.

The interesting thing is the clear sense that it matters whether it's a robot, that the issues are different in some way. There seems to be a sense that it's more repugnant for people to be killed by a robot than it would be if they were killed by either human beings, or by a simple, thoughtless machine. That is odd, because you'd think it would approximate to one or the other. In addition you might argue that being killed by a robot is actually 'cleaner' than being killed by a human being, with all the murky business of honour, revenge, etc attached.

But the same robot repugnance comes up in discussions of military kill-bots. I genuinely do not understand it and suspect it is a disgust thing loosely related to the uncanny valley rather than genuinely ethical in nature; but I really just don't get it.
posted by Segundus at 3:27 AM on May 14, 2014


Just imagine the 'fares' treating your personal vehicle the same way they treat every other form of public transportation.

You know, people always say this kind of shit about everything in the sharing economy, and most people are actually not complete assholes. Especially if you have their credit card information, name, and home address.
posted by empath at 3:28 AM on May 14, 2014 [1 favorite]


Having to invent strong AI for a self-navigating car is taking the NASA zero-G pen approach to the problem.

NASA actually needed a zero-G pen because pieces of graphite floating around the cabin were a fire hazard, and the wood in pencils could ignite in pure oxygen. It was actually developed independently of NASA and sold to them at a fairly reasonable price -- the company made back its R&D investment by selling it to the public as the Space Pen.

I guess what I'm saying is, sometimes the hard way of doing something is actually the right way for reasons that may not be immediately obvious.
posted by empath at 3:34 AM on May 14, 2014 [10 favorites]


Remember all the bonus software cruft pre-installed on new computers?
Same deal, except those 'bonus apps' now have control over your vehicle's guidance system and entertainment suite.


This is far less likely than another stupid future possibility: that your car's software will be locked down, iOS style, leaving the user at the mercy of the manufacturer and repair people, who will take the opportunity to charge large fees for things like simple software updates, in much the same manner as happens now with "error codes" that can only be read by someone with expensive equipment.
posted by JHarris at 3:39 AM on May 14, 2014 [1 favorite]


In a free-market economy, your cloud-based robotic car will have instantaneous on-line access to your bank balance, real worth and stock portfolio, enabling it to determine which occupant of which vehicle has a greater value to society, should hard choices need to be made.
posted by Devils Rancher at 3:50 AM on May 14, 2014


I can't help but think that there'd be a profitable side industry for don't-kill-me chips like there already is for all those go-faster chips for cars. Plus, jailbreak-your-carOS mods.
posted by sonascope at 3:51 AM on May 14, 2014 [1 favorite]


Kevin Street: it's obvious even to me that most people are in love with their cars, and given the choice they prefer them to options that involve sharing vehicles with strangers, or sticking to pre-determined routes.

I really think a lot of that is due to the dearth of convenient, cost-effective transportation options in much of the US.

I grew up in the suburbs and exurbs of Houston, where you pretty much had to have a car to survive. I got my first car at 14 (but wasn't really allowed to drive it anywhere). I had a number of pretty nice hand-me-down cars given to me by my parents in the ensuing 8 or 10 years.

I spent several years driving 2+ hours every weekday to get to high school / work / university. I enjoyed driving fast and aggressively. I'm fortunate to be alive, and to have not killed or injured anyone else along the way.

I moved to Vienna, Austria, almost 15 years ago, and haven't looked back. Haven't owned a car at all during that time, and don't miss it. We have excellent public transportation options (subway, trams, buses), bike lanes, walkable distances, at least two car-sharing groups (which I haven't tried yet but would like to) and reasonably-priced taxis -- plenty of convenient and cost-effective options for getting around town.

If we want to go further, it's usually by train, sometimes by plane and, when necessary, in a rental car. As for the rental car, we usually splurge and take something really nice. Since we only rent a few days a year, on average, we can afford to rent a model that we wouldn't be able to buy.

I also go back home around once a year, for 3-5 weeks at a time. I'm fortunate enough to have a customer in Texas who bought a stupidly fast and fun car for me to drive when I'm in town. Between occasional rentals and the car in Texas, I'm able to scratch the 'drive for fun' itch without having to buy my own car.

My point is that with plenty of convenient and cost-effective options that cover most of my day-to-day transportation needs, plus some cost-effective ways to scratch the 'drive for fun' itch occasionally, I don't miss owning a car at all. I'm sure my average stress level has gone way down since I quit driving regularly.

I think if these kinds of daily transportation options were made available to more people, and there were also options (like nice rental cars or, say, sports car / convertible / luxury car sharing schemes) for the occasional fun driving experience, lots of people who now insist on owning their own cars could be convinced to switch over.
posted by syzygy at 4:10 AM on May 14, 2014 [1 favorite]


I've taken buses my whole life, but it's obvious even to me that most people are in love with their cars, and given the choice they prefer them to options that involve sharing vehicles with strangers, or sticking to pre-determined routes.

I think this is a more pronounced attitude in America - lots of people don't have cars, or need cars, in other countries. In particular young, urban populations often forgo cars unless or until they have something they need cars for (usually an outdoorsy hobby, a car-commuting job or a child).

The US is a bit of an outlier here, in part because of the cult of individualism-through-consumption, in part because of the large amounts of space within a single country, the patchy quality of metro public transport, and the collusion of urban planners and the auto industry; but that ingrained culture is pretty recent in the greater scheme of things.
posted by running order squabble fest at 4:37 AM on May 14, 2014 [3 favorites]


This is far less likely than another stupid future possibility: that your car's software will be locked down, iOS style

Isn't there already an underground market for hacked ROMs, to do things like override emissions controls for higher power from the engine?
posted by thelonius at 4:39 AM on May 14, 2014


> A robot car programmed to reduce harm will tell you to walk or ride a bike.

Mr. Robot wants you to get back in your vat, brain.
posted by jfuller at 5:15 AM on May 14, 2014


Until fairly recently there were some roads that didn't have speed limits, producing a legal justification for having on-road cars with the capability of going extremely fast. Wikipedia says that the current highest limit is 85MPH.

There are still roads that don't have speed limits, they're just not in the US.
posted by corb at 5:49 AM on May 14, 2014 [1 favorite]


// If the car we're about to hit carries more people than we do,
// the utilitarian spec says we're the ones who have to go.
Vehicle *crasher = CheckForImminentCrash();
if (crasher && (crasher->Passengers() > Passengers())) {
   Cliff *cliff = FindNearbyCliff();
   if (cliff) {
      DriveOff(cliff);   // sacrifice our occupants to save theirs
   } else {
      // Not sure what to do here, spec is unclear, for now just:
      FloorIt();
   }
}
posted by Flunkie at 5:55 AM on May 14, 2014 [7 favorites]


RogerB: The Trolley Problem of Clickbait: Should a Society Stonewall Lifesaving Technology Because of Vague Ethical FUD?

Right. This is like how every time life extension comes up, we get all these qualms about the technology because only the rich would be able to afford it. Or anytime we talk about cloning, people become Jeff Goldblum from Jurassic Park.

I'm picturing a Homo erectus scolding his cave-mate for playing god by starting his own fire.
posted by spaltavian at 6:03 AM on May 14, 2014 [1 favorite]


> I'm picturing a Homo erectus scolding his cave-mate for playing god by starting his own fire.

Look here, brain sweetie, don't make Mr. Robot come over there.
posted by jfuller at 6:07 AM on May 14, 2014


From an alternate universe...
The Mathematics of Murder: Should we blame a human driver for saving themselves at the cost of others?

It happens slowly — much more slowly than your car's driver, a robot, would operate.

A front tire blows, and the manually-driven SUV swerves. It veers left, into the opposing lane of traffic, instead of steering right away from other cars. Too late, brakes engage and the driver tries to correct. There’s too much momentum, and the driver's reactions are simply too slow. Like a cornball stunt in a bad action movie, the car slams into the oncoming traffic.

The driver, the human who opted not to use an autonomous driving system, has chosen to save themselves. Better that, its soft, slow, biological brain decided, than avoiding a high-speed, head-on collision with multiple cars by swerving off a cliff to their inevitable demise. Stay alive at all costs. The math couldn’t be simpler.

...

The key factor, again, is the driver’s human status. “With limited power comes limited responsibility,” says Lin. “If these biological drivers have lower capacity than robots, lower "processor" speeds, worse sensors, that seems to imply less of a responsibility to make better decisions.”

Current human drivers, it should be said, are more digital watch than HAL-9000, unable to notice a cyclist in the road ahead or a child stepping out from behind a parked car, much less churn through a complex matrix of projected impacts, death tolls, and what Lin calls “moral math” in the moments before a collision.

So if we assume that incompetence is the manifest destiny of humans, then we’re forced to ask a question that’s bigger than who they should crash into. If humans are going to be driving cars, can we really blame them for the carnage they'll create on the roads?
posted by EndsOfInvention at 6:07 AM on May 14, 2014 [6 favorites]


I couldn't figure out what was so annoying about this article at first, and then it dawned on me: either the author has no idea how computers/machine learning work, or the author is being disingenuous.

This type of 'choice' would never be explicitly programmed into the car's system. It's too bizarre and unlikely, and it's one member of the infinite set of possibilities that just can't and shouldn't be taken into account when designing a system like this.

It's an asinine question. We might ask the exact same question of humans, but in that moment, a human driver is not going to make that moral calculus. It'd be insane to expect them to: the incident would happen so quickly that any human driver would act entirely on instinct.

We wouldn't judge a human for doing so. Why is it any different for an automated system?
posted by graphnerd at 6:18 AM on May 14, 2014 [4 favorites]


Yeah it'd go like "Rich people have access to technology first, therefore, the plebes without smart cars will end up having to bite the bullet, no matter how many are in the vehicle. Once the tech spreads down to the masses we can start enacting legislation, but before then... HIT THAT GAS, JEEVES!"
posted by symbioid at 6:32 AM on May 14, 2014


But to more seriously answer the question, I think as graphnerd says, it's stupid to expect a machine to have some Asimovian human calculus to deal with gotcha questions like this. Though it is a good ethical question, in the end the AI will try to protect its driver and avoid accidents as much as possible.

If AI vehicles are widespread enough that they can communicate with each other (and fast enough; I don't know how fast communication can happen between two vehicles while dealing with collision physics and stuff), I would imagine that communicating vehicles could make the roads even safer, as they would be aware not only visually of the space around them, but also of the "thoughts" of the cars around them, and could help plan contingently. Of course, race conditions could happen I suppose, so you'd have to figure out ways to try to avoid that, maybe. So maybe it's better to let the AIs *not* be car-telepathic. You thought one Christine was bad - try a whole FLOCK! LOL...
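
A rough idea of what they'd swap -- totally made up, not any real V2V spec, just the shape of the idea: broadcast what you're about to do so nearby cars can plan around it:

#include <cstdint>

// Hypothetical car-to-car "intent" message. Every field is invented.
struct IntentMessage {
    uint32_t vehicleId;
    double   headingDeg;      // where I'm pointed
    double   speedMS;         // how fast I'm going, m/s
    double   plannedBrakeMS2; // deceleration I'm about to apply
    uint64_t timestampUs;     // so stale messages get ignored (race conditions!)
};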
posted by symbioid at 6:40 AM on May 14, 2014


In addition no-one is proposing, or remotely able, to give a robot car the kind of extended rational autonomy that would be required.

Challenge accepted.
posted by Fizz at 7:52 AM on May 14, 2014


It doesn't matter as much whether I can imagine such a situation, as how honestly likely those situations are.

A literal school bus v. cliff scenario might be extremely rare, sure. But simply a scenario where avoiding/reducing harm to the occupants of the vehicle puts even more people at risk outside of it? I think that'll be pretty common, really.

I think there's a bit of an imagination failure going on here --- when people are picturing a highway, they're probably thinking of a typical four lane freeway with highway dividers and grass embankments lining the road, where your choices when trying to avoid an obstacle in front of you are basically hit a wall/rail or move into another lane. But imagine something like say, Route 66 --- one of the older, two lane roads with parking lots and gas stations and shops right next to the highway, and cars going 50-70MPH. Accident happens up ahead of you, car can go right, into the next lane, which is also filled with speeding cars, or left, and shoot into a parking lot with parked cars and pedestrians. Much less risk to you to jump the curb.

Any robot car worth its salt will have the ability to estimate the mass of the objects around it --- must have, in order to avoid hitting walls and trees. And if you're going to program it to avoid hitting stuff, you've got to tell it how to decide between better and worse things to hit --- aim for the bushes and not the boulder. Do we simply say: "if a collision cannot be avoided, aim for less mass" and leave it there? But sometimes that'll mean your SUV can swerve left into a Mini or right into a semi. Or left into oncoming traffic v right into a crowded bike lane.
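
To make that concrete, here's a toy version of the "what do we tell it to hit" decision, in the spirit of Flunkie's snippet above. Every type, number, and weight is invented; picking the real weights is exactly the moral question:

#include <algorithm>
#include <string>
#include <vector>

// One candidate thing-to-hit per possible maneuver. All fields are
// assumed to come from sensor estimates; this is illustration only.
struct Target {
    std::string label;
    double massKg;        // estimated mass of the object
    bool isPerson;        // pedestrian or cyclist
    double impactSpeedMS; // predicted impact speed (m/s) after max braking
};

// A naive "aim for less mass" cost with a made-up penalty for people.
double cost(const Target& t) {
    double c = t.massKg * t.impactSpeedMS;
    if (t.isPerson) c *= 1000.0; // arbitrary: strongly prefer not hitting people
    return c;
}

// Assumes options is non-empty.
Target leastBad(const std::vector<Target>& options) {
    return *std::min_element(options.begin(), options.end(),
        [](const Target& a, const Target& b) { return cost(a) < cost(b); });
}

Feed it bushes-vs-boulder and it picks the bushes, as you'd hope. The hard part is what that person-multiplier should be when every option is bad.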

These questions will have to be answered one way or another --- and since the people doing the answering will be working for large corporations which are highly concerned about legal liability, the answers they come up with are going to affect how the technology develops.
posted by Diablevert at 8:32 AM on May 14, 2014



A literal school bus v. cliff scenario might be extremely rare, sure. But simply a scenario where avoiding/reducing harm to the occupants of the vehicle puts even more people at risk outside of it? I think that'll be pretty common, really.


All these scenarios imply the computer is able to steer the vehicle but not stop it.

Letting a car get to that point is manslaughter.
posted by ocschwar at 8:42 AM on May 14, 2014 [1 favorite]


Perhaps more importantly, can we pass laws to ensure that robot guns can open carry themselves?
You'll get my gun when you take it from my cold, metallic hands.
posted by klangklangston at 8:56 AM on May 14, 2014 [1 favorite]


All these scenarios imply the computer is able to steer the vehicle but not stop it.

Letting a car get to that point is manslaughter.


Braking systems having the capacity to fail is manslaughter?
posted by Pope Guilty at 8:58 AM on May 14, 2014


Brain, Mr. Robot has received a message from your instructional system. You are alleged to have told it "Go divide by zero, wood screw."
posted by jfuller at 9:01 AM on May 14, 2014


These questions are absurd. These robot cars will be all-new vehicles, which will have passed the most recent, most rigorous safety standards. The correct response is to brake hard and fast until you run into something. Air bags, crumple zones, and seatbelts will ensure that everyone survives.

If you are driving blind on a winding, twisty road with a cliff on one side and you come around a corner to find an accident blocking your lane... well, robot drivers properly functioning will never outrun their sensors and will have plenty of time to stop. And if the car does outrun its brakes, then the robot driver will still notice the accident and brake more rapidly than a human would, limiting any impact, leaving you alive to sue the manufacturer for allowing the vehicle to outrun its stopping power.

If your car loses its brakes during normal driving, the computer will be smart enough to cut the gas and gear down while sending out a distress signal, which would allow other robot cars to accommodate it until it coasts to a stop.

If your car loses its brakes while going around a curve on a two-lane road with a cliff on one side, and encounters an accident, then of course the thing to do is to hit something, either the cars in the accident or the mountain on the non-cliff side. Cars are designed to hit things hard and fast while protecting passengers. It's like, one of their core competencies! And if you're driving on a winding two-lane road, your speed is going to be what, 30, 40 mph? That's survivable. The cliff isn't.

There just isn't any situation that will have both a robot brain still capable of making decisions, an unavoidable accident, and enough information to make self-sacrifice a clear option that would save more lives than not. If an investigative team ultimately determines that a car could have sacrificed itself to save others, then that's great, but the car won't have enough information to make that determination in the moment. Brake hard, refuse to move unless every passenger is wearing a seatbelt, prefer running into stationary objects over moving vehicles, and prefer collisions with cars (with their many layers of protection) over collisions with pedestrians or bike riders. Bam. Moral quandary solved.
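
For what it's worth, that whole policy fits in a dozen lines. A sketch only -- the categories and their ordering are my assumptions, and they're exactly the part reasonable people will argue about:

#include <algorithm>
#include <vector>

// "Prefer stationary objects over moving vehicles, prefer cars over
// pedestrians or bike riders" as a literal ranking. Lower = more
// acceptable to hit. The order is an assumption, not a real spec.
enum class Obstacle { StationaryObject, MovingCar, Cyclist, Pedestrian };

int rank(Obstacle o) {
    switch (o) {
        case Obstacle::StationaryObject: return 0; // walls don't swerve
        case Obstacle::MovingCar:        return 1; // crumple zones, airbags
        case Obstacle::Cyclist:          return 2;
        case Obstacle::Pedestrian:       return 3; // last resort
    }
    return 4; // unreachable, keeps compilers happy
}

// Brake hard first (not shown); if impact is still unavoidable, take the
// lowest-ranked obstacle available. Assumes at least one option.
Obstacle pickImpact(const std::vector<Obstacle>& unavoidable) {
    return *std::min_element(unavoidable.begin(), unavoidable.end(),
        [](Obstacle a, Obstacle b) { return rank(a) < rank(b); });
}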
posted by jsturgill at 9:13 AM on May 14, 2014 [1 favorite]


These questions are absurd.

I love that the idea of a technological product produced for the mass market that never fails isn't absurd, that the idea of considering a far-future utopia with all automatic cars and ignoring the 50+ years between now and then isn't absurd, but the idea that we should develop a moral calculus for these situations is absurd, even though that's the very next thing you go on to do.
posted by Homeboy Trouble at 9:28 AM on May 14, 2014 [2 favorites]


I thought we already answered this question
posted by rtimmel at 9:30 AM on May 14, 2014


I love that the idea of a technological product produced for the mass market that never fails isn't absurd, that the idea of considering a far-future utopia with all automatic cars and ignoring the 50+ years between now and then isn't absurd, but the idea that we should develop a moral calculus for these situations is absurd, even though that's the very next thing you go on to do.

The idea of self-sacrifice as part of the equation is absurd. Describe a realistic situation where a robot driver will have sufficient information to know that by killing its passengers, it will save more passengers in other vehicles. I'd like to hear it!

I think there are plenty of scenarios where a car might intentionally run itself off the road, into other cars, or into stationary objects--resulting in potential harm or death to one or more of its passengers. But that would be an entirely different discussion, and one that is not clarified by introducing trolley problem style moral calculus about trading known numbers of lives.
posted by jsturgill at 9:36 AM on May 14, 2014 [2 favorites]


But the same robot repugnance comes up in discussions of military kill-bots. I genuinely do not understand it and suspect it is a disgust thing loosely related to the uncanny valley rather than genuinely ethical in nature; but I really just don't get it.

I think there are two things going on here. With the car scenarios, I think what changes the regular intuitive process is the (not necessarily on target) sense that the machine can operate both immediately and with perfect, total knowledge of the repercussions of any action, whereas a person -- obviously -- simply has to react in the heat of the moment and do the best s/he can.

But with military machines I think there's an added layer, in that imagining a fully roboticized military removes from any scenario the figleaf of kill-or-be-killed self-defense.
posted by nobody at 9:48 AM on May 14, 2014


robot drivers properly functioning will never outrun their sensors and will have plenty of time to stop

Pedestrians and non-robot drivers move out suddenly into the path of cars all the time. Cars that make sudden, unpredictable stops get rear-ended all the time.

I would instinctively slam on the brakes if someone walked or pulled out in front of me, not knowing if I was going to get rear-ended by a big truck, and maybe knocked into the sudden obstacle anyway. A robot would have enough time to realize that trying to stop is futile and would lead to more vehicles being involved in the accident.

Or think of driving around in the city. A pedestrian walks out in front of the car. It can (A) slam on the brakes and hit the pedestrian at a slower speed (B) slam on the brakes and swerve into oncoming traffic or (C) slam on the brakes and swerve into pedestrians on the sidewalk. That doesn't seem implausible or even rare.
posted by straight at 9:50 AM on May 14, 2014


Or think of driving around in the city. A pedestrian walks out in front of the car. It can (A) slam on the brakes and hit the pedestrian at a slower speed (B) slam on the brakes and swerve into oncoming traffic or (C) slam on the brakes and swerve into pedestrians on the sidewalk. That doesn't seem implausible or even rare.

Yes, but that's not the trolley problem. You're not trading lives. If the oncoming traffic is made up of robot cars, there is time to communicate and coordinate a response that involves either no damage or a low-speed, head-on collision -- the most survivable kind.

If the sidewalk is clear (and a properly functioning robot car would know, one way or another), you can swerve onto it and/or into a low-speed collision with parked vehicles, an architectural element, or whatever.

If there are no valid options beyond oncoming traffic and hitting the pedestrian, the robot can calculate the odds of survivability for the pedestrian being hit by an aggressively braked vehicle at X mph versus the odds of survival in both vehicles for a head-on collision at Y mph.

The last option is the closest to a trolley-problem like scenario, but it's very different in a key way: it's dealing with uncertain harm, not trading one certain outcome for another, and it's very unlikely that this scenario would present anything other than an injury to one or more parties and some property damage, not death. The break-point between risking a head-on collision (which may not happen if the other car reacts quickly enough etc.) and simply braking hard for the pedestrian while honking (which may cause the pedestrian to jump back, avoiding collision entirely) is something that needs to be hashed out, and which reasonable people can disagree on, but it really has nothing to do with the fantasy presented in the article.
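
To spell out the difference: the robot's comparison is expected harm, not a body count. Every number below is a placeholder, not a crash statistic:

#include <vector>

struct Party {
    double pInjury;  // estimated chance this person gets hurt
    double severity; // 0 = unhurt .. 1 = fatal
};

// Probability-weighted severity, summed over everyone involved.
double expectedHarm(const std::vector<Party>& parties) {
    double total = 0.0;
    for (const auto& p : parties) total += p.pInjury * p.severity;
    return total;
}

// Brake hard, hit the pedestrian at low speed:
//   expectedHarm({{0.9, 0.3}}) == 0.27
// Swerve into oncoming traffic that is also braking:
//   expectedHarm({{0.4, 0.2}, {0.4, 0.2}}) == 0.16
// Uncertain injuries weighed against uncertain injuries -- not lives for lives.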
posted by jsturgill at 10:07 AM on May 14, 2014


Describe a realistic situation where a robot driver will have sufficient information to know that by killing its passengers, it will save more passengers in other vehicles. I'd like to hear it!

I can give you the opposite, easily enough: Car and motorcycle are travelling through a tunnel; motorcycle skids and car opts to plow through it instead of hitting a tunnel wall, killing the cyclist rather than injuring its occupant.

I mean, if your argument is that there's no way for the robot to "know" how many people are in the other car or to "know" that they will be spared or killed by its actions...I mean, sure, I guess, but that's a little facile, no? In reality, of course we're talking about risks and probabilities and likelihoods. I don't think that really changes the fundamental moral questions in play. Take bike lanes, for example. Simple enough to program in a recognition of what bike lanes are and that cars shouldn't cross into them --- required, even. Is it ever okay to make an exception to those rules? It's easy to think of a scenario where swerving into a bike lane is obviously preferable --- say the lane's empty and doing so avoids an accident. We want the robot to make that choice. What about rear-ending a stopped vehicle vs. swerving into an occupied bike lane? What's the algorithm for that? At the end of the day it's a fundamentally moral question -- car vs. bike ends great for car, very bad for bike, whereas car vs. car is a 50-50 proposition. If "spare your driver" shall be the whole of the law, then the biker gets it....and likely the car company gets the lawsuit from the biker's family. On the other hand, would you let a robot drive you if that robot was bound to avoid injuring others before protecting you? If you were the one spending 6 weeks with whiplash because the car would opt to rear-end a minivan rather than swerve into an empty bike lane?
posted by Diablevert at 10:22 AM on May 14, 2014 [1 favorite]


I can give you the opposite, easily enough: Car and motorcycle are travelling through a tunnel; motorcycle skids and car opts to plow through it instead of hitting a tunnel wall, killing the cyclist rather than injuring its occupant.

Whoa. Slamming into a tunnel wall, causing your vehicle to skid wildly out of control mostly in the direction it was already traveling due to inertia, is going to help who how now?

What about rear-ending a stopped vehicle vs. swerving into an occupied bike lane? What's the algorithm for that?

The algorithm is that the vehicle drives like a robot, not a person, and leaves space between it and the next car that allows it to stop. If the algorithm breaks down, it's a manufacturing or coding problem, not an ethics problem. In the meantime, the car brakes hard thinking it has enough time to stop, and it taps the other car's bumper.

Regardless, fender-benders are not trolley problems. Whiplash and property damage are not trading lives. These aren't moral quandaries, are they? And if they are, do they have any relationship to the trolley problem scenarios being hyped?
posted by jsturgill at 10:32 AM on May 14, 2014


Pedestrians and non-robot drivers move out suddenly into the path of cars all the time. Cars that make sudden, unpredictable stops get rear-ended all the time.

And robot cars with 360 degree vision are better-equipped to spot and respond to these events than we are.
posted by Pope Guilty at 10:42 AM on May 14, 2014


I love that the idea of a technological product produced for the mass market that never fails isn't absurd, that the idea of considering a far-future utopia with all automatic cars and ignoring the 50+ years between now and then isn't absurd, but the idea that we should develop a moral calculus for these situations is absurd, even though that's the very next thing you go on to do.

The product we've had for the last 100 years is a rolling death trap that has killed millions. Most humans with a license to operate a motor vehicle are terrible drivers. Test results from existing test fleets of self driving cars suggest that these vehicles will be an order of magnitude safer. Not just the same proficiency as a human driver, but 10 or 100 times safer. The moral calculus comes out heavily in favor of the self driving car.
posted by humanfont at 10:58 AM on May 14, 2014 [7 favorites]


The algorithm is that the vehicle drives like a robot, not a person, and leaves space between it and the next car that allows it to stop. If the algorithm breaks down, it's a manufacturing or coding problem, not an ethics problem. In the meantime, the car brakes hard thinking it has enough time to stop, and it taps the other car's bumper.

Dude, it seems silly to reject the idea that this will be a problem because robot cars will never hit anything ever. I agree that in general, robot cars will be programmed to leave enough room to brake between themselves and other cars. The whole thing about accidents, however, is that they are unexpected. That's the key. What happens when there is an unexpected obstruction that the robot car will collide with if it continues in its current course? That's the core issue we're dealing with here. Certainly some frequent collision scenarios can be programmed around. But boulders roll down hillsides, shit falls off trucks, people throw things off bridges, deer exist and little kids run after balls into the street. There will be occasions when the car must choose between hitting what's in front of it and trying to avoid a collision, just as there are for human drivers, and unlike human drivers the metrics by which they make those decisions will be consciously laid out by the robot's engineers.

Test results from existing test fleets of self driving cars suggest that these vehicles will be an order of magnitude safer. Not just the same proficiency as a human driver, but 10 or 100 times safer. The moral calculus comes out heavily in favor of the self driving car.

I don't think anyone's talking about banning robot cars because of these issues. That doesn't mean we shouldn't talk about them. Indeed, we have to talk about them if we want the way the machines work in practice to line up with our own moral sensibilities. I really don't think just winding 'em up and seeing which way they jump is the way to go here. And I suspect Ford motor co.'s in house counsel agrees with me...
posted by Diablevert at 11:18 AM on May 14, 2014 [1 favorite]


Perhaps more importantly, can we pass laws to ensure that robot guns can open carry themselves?
You'll get my gun when you take it from my cold, metallic hands.


This exchange brings up an interesting (to me) theoretical issue.

/derail

In the US, we are constantly improving on existing technology. Consequently, we channel a lot of energy into security to protect our Stuff. We have passwords, thumbprints or even retinal scans for our credit cards and computers. Why have we not developed some security measures like these for guns?

Hear me out on why this is a good idea. I'm not a fan of guns in general, but I'm (repeatedly) told by those who are that all responsible gun owners practice gun safety, train properly, and get legal permits to carry and/or conceal their weapons. I'm told the only issue they have with gun control is that they feel the anti-gun folks are using it as an excuse to take away their (constitutionally-guaranteed!) right to have firearms. If we outlaw guns, only outlaws will have guns, blah blah blah.

Right now, there's not much of a deterrent against gun theft. You can lock them up, but someone who breaks into your locked house can probably break into your locked gun cabinet. Thieves can sell guns on the black market relatively easily. They file down the serial numbers, maybe? Whatever the existing security measures built into guns, they're pretty basic, is what I'm getting at.

Okay, so gun owners don't want their guns taken away. And guns in the hands of criminals (rather than responsible gun owners) are potentially dangerous for everyone, right?

Then why don't we at least make guns smarter, to deter theft? We may not be able to encode them to our DNA like in District Nine, but guns are simple, mechanically. Surely we could find a way to thwart them with modern technology. Why aren't there built-in bioidentity features like thumbprints or retinal scans? Why are there no "Find My Gun" measures to make it impossible to fire a gun once you realize it's been stolen?

I'm not naive; I know the NRA is powerful and always blocks gun regulation efforts here. I also know their push against gun control is really self-serving rhetoric (because if they had their druthers everyone would buy a gun, since that's how they make their money). But the NRA doesn't make any money off of stolen guns. And adding a level of bio-security to firearms would make them more expensive to buy, so the NRA logically should not oppose the idea.

TL;DR: People are always going to be stupid. Why don't we make our guns smarter?
/end derail
posted by misha at 11:20 AM on May 14, 2014


Is xenophobia the right word, or is there some other term that better describes the tendency to apply higher standards to the unfamiliar?
Status quo bias. It's possible to overdiagnose, though. If you're comparing the-world-as-we-observe-it to the-alternative-as-we-predict-it, and the prediction is only slightly better, then you might rationally prefer the status quo after you take the possibility of prediction error into account.
The responsibility here lies entirely with the humans in the vehicles, the cars shouldn't be making these kind of decisions at all. If that means more people, or the "wrong" people die in an accident I think that's an acceptable price to pay for living in a society where humans are always unquestionably in charge.
We can program a robotic voice to delegate, "In 0.5 seconds, which way should we swerve?", but I'm not sure the driver will be able to answer in time. The only way for humans to be in charge in such situations is to do exactly what's being done now: have humans try to think about the possibilities in advance, then program general rules for them into the cars.

That said, I despise pseudo-utilitarian solutions to trolley problems. If you're calculating the utility of outcomes which look like "one fat man would die, or five skinny people would die" but you don't include even the most obvious indirect outcomes like "skinny people will be afraid to play on the tracks, or fat people will be afraid to go anywhere near the tracks", then ironically the most utilitarian conclusion you could come to is "I shouldn't try to make utilitarian decisions unassisted."
if engineers allow their robotic car to get into a situation that qualifies as a trolley problem, they are guilty of malpractice severe enough to count as manslaughter.
I saw someone jaywalking across a high speed freeway just a few weeks ago. I don't think that he was entirely safe doing so. If he hadn't been lucky, or if he had only been lucky because a driver endangered themselves to avoid him, then I don't think much of the blame would have lain with the drivers or the car companies.

This is an interesting possibility for an indirect outcome, though, isn't it? If robotic cars with inhuman reflexes eventually dominate roads, to the point that nobody feels the need to hunt down a crosswalk for safety's sake anymore, it's entirely possible that "reduced failure rate for jaywalking" times "increased prevalence of jaywalking" could equal more pedestrian deaths and injuries than before.
posted by roystgnr at 11:27 AM on May 14, 2014 [2 favorites]


Dude, it seems silly to reject the idea that this will be a problem because robot cars will never hit anything ever.

Dude, my example was of a robot car rear-ending something else. Clearly accidents will happen.

But setting up a trolley problem exercise to explore the nuances and ethics of this upcoming reality is like arranging a debate on renewable energy and calling one side Team Hitler. The trolley problem is sensationalist and utterly divorced from the real decisions about how and what to prioritize.

For all but the most extreme scenarios, the right response is to avoid tailgating (until the fabled land of drafting arrives) and brake hella hard when you need to. A computer-controlled car will be able to react in less than a quarter of a second and be able to brake safely far faster than a human--perhaps reducing speed as much as 30 feet per second per second, instead of 15 or so.
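
Back-of-the-envelope, taking those figures at face value: minimum stopping distance is v²/2a, so from 60 mph (88 ft/s) that's 88²/(2x15) ≈ 258 feet at 15 ft/s² versus 88²/(2x30) ≈ 129 feet at 30 ft/s². Double the deceleration, half the distance, before you even count the reaction-time edge.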

Once we acknowledge the real differences inherent in the problem domain, we can start having a productive conversation about plausible scenarios that acknowledge the actual situational awareness and capabilities of the vehicles.

But boulders roll down hillsides, shit falls off trucks, people throw things off bridges, deer exist and little kids run after balls into the street. There will be occasions when the car must choose between hitting what's in front of it and trying to avoid a collision, just as there are for human drivers, and unlike human drivers the metrics by which they make those decisions will be consciously laid out by the robot's engineers.

None of these are a trolley problem scenario, and none of them require sacrificing the car's occupants.
posted by jsturgill at 11:28 AM on May 14, 2014 [1 favorite]



None of these are a trolley problem scenario, and none of them require sacrificing the car's occupants.


The trolley problem, as far as I'm concerned, is simply a way of thinking about when it is acceptable to cause harm to one individual in order to spare several others. If you want we can go back and forth trying to come up with plausible real life scenarios in which the driver of the car will definitely be killed if the robot Xs and two other people will definitely be killed if the robot Ys. It hardly seems necessary, because regardless of how specific an example I come up with (or how implausible you find the particular details I choose) it seems indisputable to me that if we want a robot car to avoid collisions at all there will be occasions when it must choose between risking injury to the occupants of the vehicle vs risking injury to people outside the vehicle. That's the "trolley problem" as it applies here, in a practical, real life way.



Misha --- in re your derail, as I understand it the technology already exists and opposition to it is currently a major hot-button issue on the right.
posted by Diablevert at 11:47 AM on May 14, 2014 [1 favorite]


Misha, see this thread.
posted by arcolz at 11:59 AM on May 14, 2014 [2 favorites]


The only way for humans to be in charge in such situations is to do exactly what's being done now: have humans try to think about the possibilities in advance, then program general rules for them into the cars.

I totally agree.

There are situations where some sort of moral calculus is involved, like Diablevert mentioned. Like a deer randomly jumping in front of a car on a crowded highway. Should cars on that road already be going slower because of the risk of animal collisions, thus inconveniencing thousands of people? Or is it worth the statistically unlikely extra risk to each person if it gets those commuters to their destinations on time? What if it's a dog instead of a deer, and a collision poses no risk to the occupants of the car? Can the car tell the difference between a dog and a child?
posted by Kevin Street at 12:00 PM on May 14, 2014


It hardly seems necessary, because regardless of how specific an example I come up with (or how implausible you find the particular details I choose) it seems indisputable to me that if we want a robot car to avoid collisions at all there will be occasions when it must choose between risking injury to the occupants of the vehicle vs risking injury to people outside the vehicle. That's the "trolley problem" as it applies here, in a practical, real life way.

The trolley problem setup is a very poor vehicle for exploring the actual decisions that need to be made and the actual ethical dilemmas inherent in the paradigm shift. It is a kind of exercise that sets things in terms of certainty and life and death; the actual problem domain will be based around probability, with the negatives of injury and property damage. Discussing the trolley problem in this context is like discussing contract case law in the context of a trespassing incident: it simply isn't connected, and it obscures more than it illuminates. What ethical principles and viewpoints the trolley problem (and similar unpossible robot car-based variants) do illuminate aren't relevant at all.

Like a deer randomly jumping in front of a car on a crowded highway. Should cars on that road already be going slower because of the risk of animal collisions, thus inconveniencing thousands of people? Or is it worth the statistically unlikely extra risk to each person if it gets those commuters to their destinations on time?

Those are questions that have nothing to do with the trolley problem, and little to do with the robot car's actions in an emergency.

Anyway, I think I've made that perspective clear enough, so I'll stop repeating it.
posted by jsturgill at 12:07 PM on May 14, 2014


Discussing the trolley problem in this context is like discussing contract case law in the context of a trespassing incident: it simply isn't connected, and it obscures more than it illuminates.

You find a hypothetical example that deals with killing entirely unconnected to a real life problem that risks injury or death? And not say, as nearly related as murder is to manslaughter, or negligent homicide? I don't understand you.
posted by Diablevert at 12:32 PM on May 14, 2014


"But setting up a trolley problem exercise to explore the nuances and ethics of this upcoming reality is like arranging a debate on renewable energy and calling one side Team Hitler."

furnaces make global warming but the trains ran on time and were very fuel efficient
posted by klangklangston at 12:47 PM on May 14, 2014


They're not examples of the trolley problem, but they are things that the engineers of robot cars will have to deal with. Right now the liability for a collision lies with the driver/car owner, but when cars drive themselves, liability will transfer to the company that built the vehicle.
posted by Kevin Street at 12:54 PM on May 14, 2014


They're not examples of the trolley problem, but they are things that the engineers of robot cars will have to deal with. Right now the liability for a collision lies with the driver/car owner, but when cars drive themselves, liability will transfer to the company that built the vehicle.

The OP and most comments here have been about variants of the trolley problem, and/or sacrificing the passengers in the car for a known net saving of life.
posted by jsturgill at 1:27 PM on May 14, 2014


The product we've had for the last 100 years is a rolling death trap that has killed millions. Most humans with a license to operate a motor vehicle are terrible drivers. Test results from existing test fleets of self driving cars suggest that these vehicles will be an order of magnitude safer. Not just the same proficiency as a human driver, but 10 or 100 times safer. The moral calculus comes out heavily in favor of the self driving car.

Maybe I didn't make myself clear; I couldn't agree with you more that cars cause immense havoc and would never be allowed if they were invented today. Improvements in their safety would be welcome; what I am pointing out is that any arbitrary application of autonomous vehicle technology will not instantly render roads 100% safe, and thus we need to have discussions about what decisions can and should be made by cars in dangerous situations - and even noting that the definition of a dangerous situation will be different when perceived by a series of sensors different from our eyes and ears. All sorts of issues come up; this article discusses some (maybe not the best) ones, but there are many that need to be thought of.


For all but the most extreme scenarios, the right response is to avoid tailgating (until the fabled land of drafting arrives) and brake hella hard when you need to. A computer-controlled car will be able to react in less than a quarter of a second and be able to brake safely far faster than a human--perhaps reducing speed as much as 30 feet per second per second, instead of 15 or so.

You seem to be under the misconception that brakes depend on ankle strength or something; they are a hydraulic mechanical system already. (When they don't fail, of course.) Probably installing better brakes on cars would reduce accidents; that's neither here nor there with autonomous vehicle technology. The reaction time improvements are actually pretty modest in many cases, and one of the popular autonomous vehicle cheerleading refrains centres around cars being able to go faster with the reduced reaction time; assuming this happens, that eats the reaction time improvements back up.
posted by Homeboy Trouble at 1:38 PM on May 14, 2014


So that one time they played chicken on Knight Rider becomes suddenly relevant again.
posted by radwolf76 at 1:50 PM on May 14, 2014 [2 favorites]


You seem to be under the misconception that brakes depend on ankle strength or something; they are a hydraulic mechanical system already.

During a crash human reaction times are pitifully slow. On average it is something like 1-2 seconds before you start hitting the brakes in a crash. Many times the collision is over by the time the brakes are hit. High end cars with automatic collision detection and braking can respond in as little as 6 milliseconds. Ankle strength has nothing to do with it. The amount of energy absorbed by the brakes in slowing the car vs. the energy absorbed by whatever the car hit changes because the brakes are applied in a timely manner.
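
To put distances on that: at 65 mph a car covers about 95 feet per second, so a 1.5 second human reaction means roughly 143 feet of travel before the brakes do anything at all. In 6 milliseconds the car moves about half a foot.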

we need to have discussions about what decisions can and should be made by cars in dangerous situations - and even noting that the definition of a dangerous situation will be different when perceived by a series of sensors different from our eyes and ears.

In order to have that discussion there needs to be mutual understanding of how these vehicles are designed and how the software works to make decisions. Autonomous vehicles are on the roads now in small numbers and they will be available to consumers in the next 2-5 years; not 50.
posted by humanfont at 2:21 PM on May 14, 2014 [1 favorite]


It's honestly kind of amazing how many autocar detractors seem to regard the current level of autocar technology as something that we might have a few decades down the road. They're already better, safer drivers than we are.
posted by Pope Guilty at 4:02 PM on May 14, 2014 [2 favorites]


I'm more worried about our GPS info being sold to advertisers for sponsored drives down to the Taco Bell.
posted by klangklangston at 4:14 PM on May 14, 2014


A more realistic scenario is that your car detects the tire failure in progress and safely pulls you over to the side of the road. The car gives you the option to automatically get roadside assistance. You don't realize that you are being steered to a more expensive service because they pay a higher fee to Google to be the default provider. Also they will charge you 15% more for the service because a data broker told them you live in a higher income neighborhood. All of these surcharges are invisible to you.
posted by humanfont at 4:31 PM on May 14, 2014 [8 favorites]


This is an interesting possibility for an indirect outcome, though, isn't it? If robotic cars with inhuman reflexes eventually dominate roads, to the point that nobody feels the need to hunt down a crosswalk for safety's sake anymore, it's entirely possible that "reduced failure rate for jaywalking" times "increased prevalence of jaywalking" could equal more pedestrian deaths and injuries than before.

Yeah, but wouldn't that collision increase re-ignite the fear of jaywalking once again? Assuming, of course, that increased jaywalking would have been rational behavior in the first place.

(It wouldn't.)
posted by graphnerd at 4:33 PM on May 14, 2014


WAY TO MAKE IT WORSE HUMANFONT
posted by klangklangston at 4:34 PM on May 14, 2014 [1 favorite]


Braking systems having the capacity to fail is manslaughter?


You have accelerometers and a speedometer. You have pressure gauges on the braking mechanism. Car brakes are already designed to give the driver ample warning that they need a checkup before they fail. A computer should spot the problem long before a human, when the pressure on the brake slows the car down, but not by enough.
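
The check itself is almost trivial -- something like this, with both constants invented for illustration:

// Compare the deceleration the commanded brake pressure *should* produce
// against what the accelerometer actually reports.
const double kDecelPerKPa = 0.05; // m/s^2 of braking per kPa of pressure (made up)
const double kTolerance   = 0.8;  // warn below 80% of expected performance

bool brakesDegraded(double brakePressureKPa, double measuredDecelMS2) {
    double expected = brakePressureKPa * kDecelPerKPa;
    if (expected <= 0.0) return false;               // not currently braking
    return measuredDecelMS2 < kTolerance * expected; // weaker than designed
}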
posted by ocschwar at 9:07 PM on May 14, 2014


You know, people always say this kind of shit about everything in the sharing economy, and most people are actually not complete assholes.

Apparently you've been exposed to an entirely different species of "most people" than I have.
Hopefully yours is more representative than mine.
posted by Pudhoho at 11:14 PM on May 14, 2014


Dear god, please let me play this simulation game. I want to be the mad man driving a busload of people down the highway of robocars. Oh, please, oh please, oh please. Yes, parachutes for the cars! Ejector seats, yes, yes! Oh, PLEASE.

Utter mayhem! Parachutes bursting open everywhere! Wrecked cars! A busload of safe people! Rather Futurama-esque, I suppose.
posted by Goofyy at 3:05 AM on May 15, 2014


If I were an evil psychopath in a world filled with the kinds of stupid self-driving cars predicted in the article, I would think it would be fun to fill a car with people then go find a cliffside highway on which to drive on the wrong side of the road, just to make all the approaching vehicles plummet off the cliff to avoid killing me and my automobile filled with precious life.
posted by JHarris at 2:41 PM on May 15, 2014 [2 favorites]


Do you think someone could take this automatic car driving software and install it into my iPhone so that I can read metafilter and watch cat videos while the phone uses the front camera to warn me when I'm about to walk into traffic or off a cliff?
posted by edd at 4:11 PM on May 15, 2014


I'm tempted to make this a FPP

The Trick That Makes Google's Self-Driving Cars Work


Robot cars work better on a controlled, known track, so Google is using its massive street data collection apparatus to map entire cities so well they are effectively known tracks. The car doesn't have to see and understand everything, it just has to detect what's different from the pre-mapped model of the world it already has.
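
In code terms the trick reduces to a diff against the prior map. A sketch, with the types and the half-meter tolerance invented rather than anything Google has published:

#include <cmath>
#include <vector>

// Anything the sensors report that isn't already in the pre-built map is
// what the car actually has to reason about.
struct Object { double x, y; };

bool inPriorMap(const Object& o, const std::vector<Object>& prior) {
    for (const auto& p : prior)
        if (std::hypot(o.x - p.x, o.y - p.y) < 0.5) return true; // within 0.5 m
    return false;
}

std::vector<Object> novelObjects(const std::vector<Object>& sensed,
                                 const std::vector<Object>& prior) {
    std::vector<Object> novel;
    for (const auto& s : sensed)
        if (!inPriorMap(s, prior)) novel.push_back(s); // new to the world model
    return novel;
}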

"The more you think about it, the more the goddamn Googleyness of the thing stands out."
posted by straight at 11:13 PM on May 15, 2014 [3 favorites]


also "Google wants to make the physical world legible to robots."
posted by straight at 11:24 PM on May 15, 2014 [2 favorites]

