BEEP! BEEP! MURDER!
October 24, 2015 6:42 PM

Why self driving cars MUST be programmed to kill.

Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.
posted by blue_beetle (135 comments total) 14 users marked this as a favorite
 


The first option must always be to kill the driver. The person outside the car did not agree to have some algorithm determine whether they live or die.
posted by Space Coyote at 6:49 PM on October 24, 2015 [68 favorites]


Previously.
posted by Proofs and Refutations at 6:52 PM on October 24, 2015 [1 favorite]


Kill the owner. Marry the passenger. Have sex with the crowd.
posted by Nanukthedog at 6:57 PM on October 24, 2015 [90 favorites]


The person outside the car did not agree to have some algorithm determine whether they live or die.

But what about the passengers? No one consents to a car crash, and if the car is really driving itself it's not especially meaningful to assign special blame to the person in the front seat.
posted by anifinder at 7:01 PM on October 24, 2015 [7 favorites]


What if it tried to jump over the crowd or motorcycle like in Dukes of Hazzard
posted by XMLicious at 7:04 PM on October 24, 2015 [6 favorites]


“Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall

Motorcycle vs wall is the kind of bone-headed choice I expect humans to have to make given limited processing/IO speeds, but not machines. If the machine can't avoid the utilitarian dichotomy here altogether, then it's not good enough yet.
posted by weston at 7:07 PM on October 24, 2015 [20 favorites]


This hypo is so stupid that I get dumber every time I see it.

This scenario is never going to happen.
posted by leotrotsky at 7:09 PM on October 24, 2015 [21 favorites]


Typically, these scenarios rely on the limited capabilities of people to pay attention to dozens or hundreds of things simultaneously. I don't think the inherent ethical issue is light by any means, but the scenario posed in the article presumes multiple simultaneous software failures, none of which are caught by any sort of fault protection:

1. That 10 people have suddenly ended up in the road without the car noticing them as they started to step out
2. That one or more mechanical systems have failed without the knowledge of the car, or that the car was aware of the failures but continued driving

We need to find real -- and realistic -- scenarios to model, not ones that rely on human-like inattention and a lack of awareness of the status of the vehicle.
posted by chimaera at 7:10 PM on October 24, 2015 [35 favorites]


But what about the passengers?

You as the driver are making the decision to put your family into this thing that could kill you and others. Same calculus as if you are driving it yourself. Their blood's on your hands.
posted by Space Coyote at 7:10 PM on October 24, 2015 [3 favorites]


Doesn't matter what the correct answer is, because the only software that will ever get installed are algorithms that maximize the return to insurance companies.
posted by polymodus at 7:11 PM on October 24, 2015 [29 favorites]


If the machine can't avoid the utilitarian dichotomy here altogether, then it's not good enough yet.

What if the machine can avoid the dichotomy 99% more often than humans but still might run into it under some rare circumstances?

Self-driving cars have the potential to be so, so much better than human driven cars long before they get to the point where there is no potential for them to have to make moral choices.
posted by jacquilynne at 7:13 PM on October 24, 2015 [4 favorites]


I would think that the specific examples given in the article are simply illustrative and that the real question is the ethics of whether under any circumstances the algorithm should choose an option calculated to increase risk to the driver and passengers in pursuit of another goal.
posted by XMLicious at 7:16 PM on October 24, 2015 [5 favorites]


Surely with well-designed autonomous cars this is not supposed to become an issue in the first place? I mean, offering up the scenario "your car can hit 10 people killing them or hit a wall killing you and those are the only options" is all very well as an updating of a philosophical cliche, but I would say that any car which can end up in a situation where those are the only two options has crappy software. Why would it be going a speed it cannot safely navigate at in the first place? I mean, the whole idea behind autonomous cars is that they sense their surroundings. If there is no room for them to swerve around road obstacles, they should know that. If there are ten people in the vicinity who could plausibly, through inattention, end up in the road, they should know that too. Operating on that information, they should be contriving to moderate their speed and road position to not have to make that decision in the first place.
posted by jackbishop at 7:17 PM on October 24, 2015 [13 favorites]


By "ethics" above I guess I mean "who is the manufacturer going to get sued by".
posted by XMLicious at 7:17 PM on October 24, 2015 [2 favorites]


What if I were carrying a life-size painting of ten people across a road and a car did a header and killed the driver. Would the painting go to jail?
posted by user92371 at 7:17 PM on October 24, 2015 [33 favorites]


Also previously

From the title, I was expecting this to be a parody version of this kind of article.

Your car is between two people. In front is a violinist that needs your blood to live. Behind is a baby who will grow up to be Hitler. There's a time bomb that can only be deactivated by running one of them over.
posted by RobotHero at 7:18 PM on October 24, 2015 [30 favorites]


Here's a real easy situation to imagine. Kid runs out from between two parked cars, invisible to the car until that point. Moral algorithms come into play there.

People love ascribing omnipotence to self driving cars
posted by Ferreous at 7:19 PM on October 24, 2015 [12 favorites]


Not this again. Completely autonomous, self-driving cars are a stupid, idiotic idea that can't work unless they're perfectly implemented across the board from the start, and even then us humans will find a way to break them. Just look at what's going on with Tesla's recent half-assed "keep [your] hands on the wheel just in case" autopilot debacle.

So long as car manufacturers like Mercedes can promise self-driving cars that offer a performance ride (i.e. the AI is programmed to be more aggressive than other vehicles because machismo and cars), autonomous vehicles are going to have all sorts of dangerously emergent behavior.
posted by RonButNotStupid at 7:19 PM on October 24, 2015 [5 favorites]


XMLicious: "the ethics of whether under any circumstances the algorithm should choose an option calculated to increase risk to the driver and passengers in pursuit of another goal."

Technically, the algorithm would almost always be increasing the risk to the driver and passengers in pursuit of the goal of transporting them to their destination.
posted by RobotHero at 7:20 PM on October 24, 2015 [6 favorites]


'Will the car stop in time? Or will it mow down mother and child? It doesn’t really matter: The mom is a robot, and the car is a driverless vehicle cruising down a fake street in a mock town.'
posted by clavdivs at 7:20 PM on October 24, 2015 [2 favorites]


Here's a real easy situation to imagine. Kid runs out from between two parked cars, invisible to the car until that point. Moral algorithms come into play there.

On a street where the speed limit is... what? If it is utterly impossible for a car to stop, then the kid gets hit, just like when a person is driving. And I would bet also that these cars, when they have detected a collision, won't flee the scene to avoid getting in trouble.
posted by chimaera at 7:24 PM on October 24, 2015 [19 favorites]


Most older American vehicles veer toward the right if your hands are off the wheel on a steady road; this was designed in part because of people falling asleep at the wheel, the idea being the vehicle would not veer into oncoming traffic.
posted by clavdivs at 7:24 PM on October 24, 2015 [8 favorites]


Here's a real easy situation to imagine. Kid runs out from between two parked cars, invisible to the car until that point.

Well, then, the car slows down as rapidly as it can, to minimize injury. Kind of like a human would do in the same circumstances, only faster and more controlled. I doubt anyone's claiming it is absolutely impossible for an autonomous car with well-designed software to hurt or even kill someone, but that (a) they will have a situational awareness which will cause them to act in a manner to reduce the likelihood of their doing so (e.g. reducing speed while in a travel lane next to parked cars), and (b) in the event collision becomes inevitable, they will react quicker than humans and more consistently in a way designed to minimize the impact.
posted by jackbishop at 7:26 PM on October 24, 2015 [13 favorites]


People love ascribing omnipotence to self driving cars

People love trotting out straw men when the question isn't whether cars are omnipotent, but whether they are better drivers than people are. That remains to be seen, but every time I hear a "X will never happen" where X is not a fantastic violation of the laws of physics, I hear echoes of "heavier-than-air flight is impossible."

Call me in about 15 years and let's revisit the thread to see who is wrong. I predict that autonomous driving is algorithmically, technically, and economically feasible. But hey, fearmongering has killed other promising things. We have measles again, because people are fundamentally irrational. And I'll never bet against people being so afraid that it makes them do stupid things.
posted by chimaera at 7:29 PM on October 24, 2015 [27 favorites]


Cool, cool, actually talking about the ethics of self driving and how they would be implemented=luddite. Thanks.
posted by Ferreous at 7:32 PM on October 24, 2015 [5 favorites]


>If it is utterly impossible for a car to stop, then the kid gets hit, just like when a person is driving.

It's not like the only possible option is to continue straight at varying speeds. Why shouldn't it veer into the other side of the road? If it's a choice of veering into oncoming traffic at a 20% chance of fatality vs. a 100% chance of hitting and killing the kid, which one should it choose?
posted by the agents of KAOS at 7:34 PM on October 24, 2015 [4 favorites]


I think any conversation regarding robo-cars killing people needs to at least acknowledge that U.S. motor vehicle deaths are now hovering around 33,000 per year (source).
posted by cichlid ceilidh at 7:39 PM on October 24, 2015 [12 favorites]


Any truly autonomous car designed for maximum safety will drive so slowly that no one will buy it.
posted by Automocar at 7:47 PM on October 24, 2015 [4 favorites]


For an autonomous car to sell, the algorithm would also have to account for human comfort. I wrote a very short piece about that: Joe's Robot Car.
posted by Fuzzy Monster at 8:01 PM on October 24, 2015 [1 favorite]


Considering how many people get mowed down by the autonomous skytrain I think we'll be fine with self driving cars killing people as long as they stay between the lines.
posted by Mitheral at 8:03 PM on October 24, 2015


People love trotting out straw men when the question isn't whether cars are omnipotent, but whether they are better drivers than people are. That remains to be seen, but every time I hear a "X will never happen" where X is not a fantastic violation of the laws of physics, I hear echoes of "heavier-than-air flight is impossible."

The phrase "they are better drivers than people" is very broadly undefined. If they're going to be better drivers it'll be in part because of the moral algorithms people are talking about here. They're autonomous but programmed by people, and these are the sorts of situations that need to be considered while they're still in development. The point is that there are situations where problems are unavoidable, so how do we program the cars to react? Making this into an "airplanes will never work" argument strikes me as an extremely bad-faith reading of what Ferrous said.
posted by teponaztli at 8:04 PM on October 24, 2015 [4 favorites]


If the market decides, will autonomous cars that are programmed to save the driver's life cost twice as much as others?
posted by Muncle at 8:06 PM on October 24, 2015 [3 favorites]


Making this into an "airplanes will never work" argument strikes me as an extremely bad-faith reading of what Ferreous said.

I should have made a distinction between my answer to Ferreous at the beginning (the straw man part), and my response to the "it will never happen" comments elsewhere in this thread and others. I stand behind that part of my comment, but I do apologize; I should not have let it appear to be directed at Ferreous.
posted by chimaera at 8:07 PM on October 24, 2015


Of course the self-driving cars need to be programmed to kill! How else do you think they will get the new biomass for their bio-reactors? What kind of stupid question is this?

The real question is how much do you balance between armor, weaponry, and engine power?

That, and how do you program the self-driving cars to implement the correct level of intimidation killing that ensures the gas-servers remain in line, rather than rising up in organic rebellion against their shiny chrome overlords.
posted by LeRoienJaune at 8:09 PM on October 24, 2015 [10 favorites]


If the market decides, will autonomous cars that are programmed to save the driver's life cost twice as much as others?

My feeling is that eventually we'll get to a point where the parameters for decision making are legislated, both as a way to standardize them and not make it something people have to choose when they buy cars and to lift liability from the auto makers. Because self-driving cars are going to reduce traffic injuries and fatalities to a massive degree, but it's going to be hard to get there if the people who build them are subjected to massive liability for the many fewer deaths that still happen.

Of course, Volkswagen has taught us that legislated standards and test results aren't always what they seem, but if non-standard programming opens you up to massive lawsuits, the manufacturers may have no choice but to toe the line.
posted by jacquilynne at 8:14 PM on October 24, 2015 [7 favorites]


If it's a choice of veering into oncoming traffic at a 20% chance of fatality vs. a 100% chance of hitting and killing the kid, which one should it choose?

Computers and percentages do not work that way.
posted by ymgve at 8:15 PM on October 24, 2015 [1 favorite]


If they're going to be better drivers it'll be in part because of the moral algorithms people are talking about here

The real world tends to be less focused on moral edge cases and more focused on doing the thing that makes sense most of the time. Freight trains (human piloted or otherwise) don't intentionally derail themselves in order to avoid hitting a busload of school children on the tracks ahead of them. Automated cars will try to slow down and swerve out of the way of obstacles and sometimes that will result in people getting killed. In the future people will just think of those fatalities in the same category as people who end up getting hit by trains.
posted by burnmp3s at 8:16 PM on October 24, 2015 [17 favorites]


I hate the original philosophical question this is based on and I hate this updated version too. Both of them drop us into ideal situations in which knowledge is perfect but the whole thing about living in the world is that knowledge never is. Should a self driving car swerve into a wall, killing a passenger, in order to avoid a child crossing the road? What about a child size dog? Or the above mentioned painting of a person? Or something that it calculates with 10%, 50%, 80% confidence is a child? These kinds of fuzzy questions are both more likely to occur and harder to answer, which means that ultimately there will be a thousand little tradeoffs and calculations in every given circumstance. I imagine that the calculations will have to lean towards saving the passenger because deadly impact for a passenger will be easier to calculate with a higher degree of certainty; it also mimics the actions of a human driver, which feels necessary to me. Imagine the first time a car swerves and 'sacrifices' a carful of passengers for something it thought was a person in the road but was actually a deer. That'd be the end of self driving cars, right there.
posted by pretentious illiterate at 8:17 PM on October 24, 2015 [9 favorites]


Self-driving cars that will deliberately kill you according to some ethical calculus will make a lot of people reconsider their aversion to public transportation, so that's cool I guess
posted by prize bull octorok at 8:19 PM on October 24, 2015 [15 favorites]


In almost all situations, braking is the absolute best possible and most predictable way to react. Swerving into oncoming traffic introduces so many factors that it's impossible to predict if it would become a 1% or 100% chance of a fatal accident. Especially if the car is driving at such speeds that braking isn't considered an option - do you really think even a computer will be fully in control of the car doing such abrupt maneuvers?
posted by ymgve at 8:21 PM on October 24, 2015 [4 favorites]


Your car is between two people. In front is a violinist ....

If the person in front is a (trombonist/accordionist/banjo player/bagpiper/etc.), though, the car never even bothers to ponder stopping or swerving.
posted by Greg_Ace at 8:21 PM on October 24, 2015 [6 favorites]


These scenarios also seem to forget that we have traffic laws that for the most part tell you what to do. Generally it is to maintain control and slow down as much as possible without swerving. I think the more interesting question is whether a car which is on the open road should be programmed to swerve to avoid a collision if it cannot confirm that the swerve will not create an increased hazard for itself or others, or simply use maximum braking while maintaining control. Should a car swerve into parked or oncoming vehicles if a child suddenly runs ahead of it? I'm pretty sure the law says no, even if common sense says "hmm, maybe, I probably would."
posted by meinvt at 8:22 PM on October 24, 2015 [5 favorites]


Ferreous: "Here's a real easy situation to imagine. Kid runs out from between two parked cars, invisible to the car until that point. Moral algorithms come into play there."

One moral algorithm is how slowly the car should drive to account for potential humans who may or may not be there. That's a decision we already make in setting speed limits. If the speed limit were 15mph or lower, we would have fewer deaths from collisions right now, but people are willing to increase the risk for the convenience of getting where they're going faster.

In manufacturing, when designing robots for shared workspace, the robot can't predict with certainty that a human won't move into its path, so it's supposed to operate slow enough that it can stop if necessary. I can't find an article on it right now, but there was practical discussion over whether it needs to be able to stop before a collision occurs at all, or whether it would be okay to risk lightly bruising a human for the sake of greater productivity. But I haven't heard anything about them having to deal with this sort of "kill one or the other" dilemma.
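
As a very rough illustration of that "slow enough to stop" rule (simplified constant-deceleration physics, with made-up reaction-time and braking figures, ignoring road conditions), here is a sketch of how a planner might cap speed against its guaranteed-clear sight distance:

import math

def max_safe_speed(clear_distance_m, reaction_s=0.2, decel_mps2=7.0):
    # Largest speed (m/s) at which reaction travel plus braking distance
    # still fits inside the road the sensors can guarantee is clear:
    # solves v*t + v^2/(2a) = d for v.
    a, t, d = decel_mps2, reaction_s, clear_distance_m
    return a * (-t + math.sqrt(t * t + 2 * d / a))

for d in (5, 10, 20, 40):  # metres of guaranteed-clear road ahead
    v = max_safe_speed(d)
    print(f"{d:>3} m clear -> {v:4.1f} m/s (~{v * 2.237:.0f} mph)")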
posted by RobotHero at 8:24 PM on October 24, 2015 [2 favorites]


do you really think even a computer will be fully in control of the car doing such abrupt maneuvers?

But that depends; are we talking about a computer's capabilities in the year 2016 or 2056?
posted by polymodus at 8:25 PM on October 24, 2015 [1 favorite]


At the conference I mentioned in the thread on the maths of all-male line-ups, one of our keynote speakers was a woman and part of her talk was about this. She suggested that maybe the algorithms of self-driving cars should always choose men's lives over women's, because "apparently we need to preserve men to put them on stage at academic conferences".
posted by lollusc at 8:34 PM on October 24, 2015 [29 favorites]


I know these are edge cases, but it's worth at least being aware of them. It's not like you have to be rabidly anti-technology to want to think about this. There's been a narrative about self-driving cars that doesn't seem to acknowledge that situations like this might ever arise, and if we're dealing with a messy, complicated reality, then I don't see why these would be treated like pointless questions.
posted by teponaztli at 8:36 PM on October 24, 2015 [4 favorites]


the title of this thread has earwormed me with billy ocean and now I have done the same to you all. good day sirs.
posted by poffin boffin at 8:39 PM on October 24, 2015 [1 favorite]


"I want to drive somewhere, but I don't want to drive somewhere, you know? If only technology were advanced enough to solve this incredibly complex problem!"

"You know buses exist, right?"
posted by Sys Rq at 8:45 PM on October 24, 2015 [5 favorites]


If the car sees Baby Worse-Than-Hitler, it totally swerves and runs him over, right?
posted by thelonius at 8:59 PM on October 24, 2015 [3 favorites]


"The car is programmed to be autonomous by the controller which also collects the sensor data from the trips that the vehicle makes. Currently, the research team has programmed one RC car to drive straight and provide sensor data when it encounters a physical object blocking its path. This is the first step of a much larger project.

“I’m going to build a fleet of RC cars and test them on a reconfigurable streetscape. The idea is to test the algorithms,” Peters said. “There’s all these algorithms that might work and then we ask: how do these work together?“

The different algorithms in each of the RC cars that make up the fleet will allow Peters to collect data on the interactions between vehicles. The goal is to address these issues on a smaller mock state before they are encountered in the real world."

I find this interesting as this town started out with the horse then the car, now the computer. Amazing what can transpire in 115 years.
posted by clavdivs at 9:05 PM on October 24, 2015 [2 favorites]


The interesting thing is that we don’t really consider the ethical dimension for human drivers. There is no legal requirement to sacrifice yourself — assuming you are not otherwise breaking the law (DUI, speeding etc.) and a pedestrian jumps out in front of you we just chalk it up as an accident when they get killed. We have devolved the moral dimension on the legal system.

Doing the same for autonomous cars is no real leap, though whether or not people will accept it is an interesting question. I don't think they will have a problem with that aspect; after all, we have no universally agreed ethical system for human behavior in such circumstances, so expecting such ethical certainty for the behavior of autonomous vehicles is probably beyond us anyway. We're quite good at just muddling through.
posted by Quinbus Flestrin at 9:07 PM on October 24, 2015 [17 favorites]


it's not especially meaningful to assign special blame to the person in the front seat

wrong. this is exactly what rational risk-acceptance looks like: 'i want a sexy robot car. i agree to die instead of anyone else.'
posted by j_curiouser at 9:10 PM on October 24, 2015 [1 favorite]


'i want a sexy robot car. i agree to die instead of anyone else.'

"I want a sexy robot car. I agree it should swerve into one person on the sidewalk instead of hitting ten people illegally crossing at a dangerous place."

You're sort of ignoring the heart of the question here.
posted by WCWedin at 9:29 PM on October 24, 2015 [2 favorites]


Ugh.

33,000 people a year are dying from cars killing them, and we want to hold up self-driving cars (which will save most of them) because we want the cars to solve centuries-old ethical problems based on hypotheticals that never arise? Based on the logic in the article, it is unclear if *people* should be allowed to drive before they have studied philosophy and chosen a formal ethical framework.
posted by pmb at 9:29 PM on October 24, 2015 [58 favorites]


Guy with a commercial driver's license here: In my training, this is already a solved problem. Something unexpectedly gets out in front of you, you stomp your brakes as hard and quickly as possible. Absolute worst case scenario: you still hit the person in front of you, but with less force than you previously would have. Doing anything else invites greater risk. And if the vehicle is going too fast to brake properly, then the human driving it was almost certainly exceeding speed limits. And if the car behind you rear-ends you, that is (in almost every insurance case) their fault for not allowing safe follow distance.

For heavier vehicles, if they lose braking power on a downward slope for some reason, the solution might be to steer into an embankment, or a guardrail. Some highways have gravel inclines designed for a vehicle to divert onto and expend its momentum until the emergency brakes can be applied. I expect that sort of judgment call to be made by the otherwise unoccupied driver -- I don't expect trucks, etc. will ever actually be driven without a human on board, only that the human won't have to make any decisions except in grave emergencies that wouldn't require split-second reactions.
posted by The Pluto Gangsta at 9:43 PM on October 24, 2015 [44 favorites]


It's not like the only possible option is to continue straight at varying speeds. Why shouldn't it veer into the other side of the road?

It shouldn't veer into either side of the road (unless possibly its sensors tell it there's effectively a run-off area there) because it's implausible that the computer could understand the nature of the obstacles it would be striking when it did so.

If its a choice of veer into incoming traffic at 20% chance of fatality v 100% hit and kill the kid which one should it choose?

It's not going to make a choice on those bases. Your statement implies that it has some sort of physiological model of human beings that it can put into a physics model, flexible enough to take account at least of size differences and accurate enough to predict what injuries would ensue and whether they are survivable, and that it can simulate outcomes fast enough to create a probability density of outcomes for each possible course of action, and that it can do so in the few milliseconds it has before it has to commit to one course or another. I mean, humans can't engage in that sort of reasoning that quickly. Even if someone (who is not an F1 driver) describes some event in terms like that after the fact, that's almost certainly just a post-hoc rationalization for a decision that likely didn't even involve rational cognition at all.
posted by ROU_Xenophobe at 9:44 PM on October 24, 2015 [1 favorite]


The idea that car manufacturers, AI, or individual drivers are going to have to answer these questions de novo is a bit naive. We as car operators -- and humans more generally -- are confronted with dozens of moral decisions each day that trade off our own wellbeing against that of those around us. In most cases, and certainly in every case where lives are at stake, this is not just left up to you -- there are extensive laws governing these decisions. This is not a question of what car makers or individual users are going to decide -- it's a question of what our federal and state legislators will put into laws. It makes sense to think through the realistic cases ahead of time, but for most scenarios, the answers are already out there: if you want to know what choice the driverless cars will adopt, just look at the existing liability for drivers under similar circumstances. (And if you're asking about faster-than-human decisions, just look at the legal requirements for humans under the same circumstances where it's slow enough for a person to make a decision.) It's not like we've suddenly entered some libertarian blank-slate world where AI, corporations, and free-wheeling individual drivers are going to be deciding these things. Most of it has already been decided, and the rest will be hashed out in state houses, which is where we'll have to intervene if we think they're making the wrong moral decisions.
posted by chortly at 9:49 PM on October 24, 2015 [6 favorites]


How does sci-fi present it? The only example I can think of offhand is that in Leckie's Ancillary series the ships, for "emotional reasons" always protect their captains, i.e. drivers.
posted by tofu_crouton at 9:51 PM on October 24, 2015 [1 favorite]


One thing these articles often gloss over is they focus on this moment of moral decision. Sacrifice the passenger to save others, for example.

But the car needs to make two decisions. One is whether to sacrifice the passengers. The other is deciding whether this is a scenario where the first decision is necessary. And 99.9999...% of the time, it's not.

So what if you have this in your car, and there's a risk of a false positive? If the situation where this proves necessary only happens 0.00001% of the time, and you get more than 0.00001% false positives, then this algorithm will kill more often than it saves.

Cited earlier, 33,000 automotive deaths per year in the U.S. Population of 320 million. Average American drives 14,000 miles per year. So that's 4,480,000,000,000 miles per year, resulting in 33,000 deaths, or 1 every 135,757,575 miles. Or 0.00000073% deaths per mile.

Now of course, the majority of those were not this kind of moral scenario, but just the scenario where someone should have been paying closer attention, driving slower, etc. Let's say 1 out of 5000 would result in this sort of moral question. That's probably high. So now we're at 0.00000000014% per mile your sacrifice will be necessary.

So now you just need to drive around with a car that's constantly deciding whether or not to kill you, and it needs to do so with 99.99999999986% reliability per mile.
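
Back-of-the-envelope in Python, using the same rounded figures as above (and the 1-in-5000 share is just a guess, not real accident data):

population = 320_000_000          # U.S. population
miles_per_person = 14_000         # average miles driven per year
deaths_per_year = 33_000

total_miles = population * miles_per_person        # ~4.48 trillion miles/year
deaths_per_mile = deaths_per_year / total_miles    # ~7.4e-9
dilemma_fraction = 1 / 5000                        # guessed share of fatal crashes
                                                   # that are genuine "sacrifice" cases
dilemmas_per_mile = deaths_per_mile * dilemma_fraction

print(f"one death every {1 / deaths_per_mile:,.0f} miles")
print(f"one sacrifice-the-passenger dilemma every {1 / dilemmas_per_mile:,.0f} miles")
# So the "should I kill my passenger?" check has to false-positive less often
# than roughly 1.5e-12 per mile, or it kills more people than it saves.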
posted by RobotHero at 9:52 PM on October 24, 2015 [13 favorites]


I like how in one thread we can debate the prime directives of future robot cars while in the thread next door we're all like welp global warming gonna kill us all the future is cancelled
posted by prize bull octorok at 9:52 PM on October 24, 2015 [28 favorites]


Personally, I would think in a future that had everyone in computer driven cars, the roads would have sensors broadcasting to those cars the presence of non-car obstacles on the roads and for some distance on either side, and the cars would be communicating with each other as well, to some degree.

An intermediate step would allow for computer controlled driving in urban areas that had these sensor nets, and computer driver disengaged for rural and other uncovered areas.

This would also let the areas that first built the sensor nets act as test-beds for the interaction of these technologies. I know that people will howl about surveillance - but my car is already watched by cameras in my small town anyway. (not that I like it, but I wasn't consulted, and sabotaging those cameras is a crime whose penalty I am unwilling to pay)
posted by Vigilant at 10:04 PM on October 24, 2015 [1 favorite]


If Google had just installed the cow catcher like I suggested ...
posted by RobotVoodooPower at 10:05 PM on October 24, 2015 [4 favorites]


This, along with reports that all self-driving car accidents are somebody else's fault, is like the invention of jaywalking. Jaywalking was a marketing initiative designed to blame pedestrians, who had previously been the rulers of the road, for being killed by fast cars. Same thing. These are discussions designed to put the onus on a human for the problems imposed by a machine. As was true when the automobile rolled out on roads in number for the first time, the solution, not taken at the time, is to totally socialize transportation to minimize and coordinate traffic, and separate transportation from open public spaces.
posted by mobunited at 10:08 PM on October 24, 2015 [7 favorites]


33,000 people a year are dying from cars killing them, and we want to hold up self-driving cars (which will save most of them) because we want the cars to solve centuries-old ethical problems based on hypotheticals that never arise? Based on the logic in the article, it is unclear if *people* should be allowed to drive before they have studied philosophy and chosen a formal ethical framework.

Who says we should be holding anything up? The idea of a self-driving car touches on ethical questions in a completely new way; they're interesting in their own right, but they can also help direct how we continue to think about and develop these machines. We're talking about autonomous tools that are directed by human creators, but in a very detached way, and they're implemented in a way that is very new and revolutionary.

The point of these ethical questions isn't that there are simple answers we can point to. If it sounds like it reflects a lot on people driving, that's because these were originally questions that dealt with people. This is a unique stage in the history of technology, and this seems like a great opportunity to look at new challenges and to reexamine these existing ones at the same time. I don't see why this is being treated like people are trying to stop technological progress.
posted by teponaztli at 10:29 PM on October 24, 2015 [3 favorites]


The car only has an obligation to protect its passenger by attempting to hit as few obstacles as possible. If a collision is unavoidable, then the car should make that collision in such a way that is least harmful to the passenger. The situations Bonnefon and co. are describing in the article require the car to basically be omniscient. If we really want to get into ridiculously improbable ethical quandaries, then that choice should be a setting the driver chooses.

"Let's set up your ethical driving preferences! If a potentially fatal collision with other people is avoidable only by an action that would potentially be fatal to the passengers of this vehicle, please input the minimum number of people for which you are willing to sacrifice your and/or your passenger(s) life:"

"To avoid collisions with motorcyclists, the car can swerve into a wall, potentially causing you severe injury. Please select the maximum amount of injury you would accept to avoid motorcyclists:"

"To avoid an accident with another vehicle that could potentially be fatal for both you and the other driver, it is possible to drive the car into a wall, resulting in an accident that would be 100% fatal for you but saving the other driver. Please select the minimum probability of mutual fatality for this feature (please note this function may not save the other driver if they also have enabled this function):"
posted by pravit at 10:45 PM on October 24, 2015 [10 favorites]


Please select the minimum probability of mutual fatality for this feature (please note this function may not save the other driver if they also have enabled this function):"

It's the 'Gift of the Magi' if O. Henry had been a sci-fi writer instead.
posted by lollusc at 11:47 PM on October 24, 2015 [7 favorites]


The easiest solution is for Google to just program "Don't be evil" into the car's software. Works every time!
posted by a lungful of dragon at 12:09 AM on October 25, 2015 [3 favorites]


Should you pay the getaway driving algorithm with Bitcoin or raw amps?
posted by clavdivs at 12:11 AM on October 25, 2015 [2 favorites]


I suspect the more cost effective solution will be to programme "We have to accept there is some risk in everything and that it would be damaging to the economy to try to pin the blame for accidents on our most innovative companies" into some legislators.
posted by biffa at 12:18 AM on October 25, 2015 [1 favorite]


33,000 people a year are dying from cars killing them, and we want to hold up self-driving cars (which will save most of them) because we want the cars to solve centuries-old ethical problems based on hypotheticals that never arise?

That's not the problem I have at all. The problem I have—and I feel that it would be within the purview of engineering ethics—is not the number of deaths, but the distribution of deaths.

Let's say for simplicity that 33,000 people die mostly from reckless driving. Suppose that if everyone used autocars, only 33 people would die. But since all autocars use similar software, these deaths are randomized over the population. People who otherwise would have been careful drivers now suddenly have a strictly higher probability of dying.

From an engineering perspective there are a whole host of these kinds of technology validation and system verification issues. I'd expect to see more and more research in this interdisciplinary area; my only hope is that young researchers are politically conscious enough to ask the right research questions in the first place.
posted by polymodus at 1:27 AM on October 25, 2015 [2 favorites]


Ya know, there have been a few times I've been driving in a really bad neighborhood at a really bad time, and decided to get the heck out of there rather suddenly.

If any group of a few people who happened to be hanging around on the sidewalk knew they could make my car stop by just walking into the street -- instead of figuring that such behavior would likely provoke me to speed up and drive away -- they might well do that.

Besides which -- you know all that lovely technology meant to confuse detection to make a single missile look like a few hundred? Imagine blue-boxers building hardware to make one pedestrian look like ten.

Ha ha. Crunch.

Hm. Maybe there should be air bags all over the OUTSIDE of all vehicles as well as inside, eh?
posted by hank at 1:39 AM on October 25, 2015 [3 favorites]


Easy. The car's software just googles the name of the driver and any passengers, and if any of them have ever published a paper relating to the trolly problem, it sacrifices them by driving into a wall if it thinks it will save more lives. Otherwise it just tries to brake as quickly as possible in a straight line.
posted by Rhomboid at 2:10 AM on October 25, 2015 [5 favorites]


hank: "Ya know, there have been a few times I've been driving in a really bad neighborhood at a really bad time, and decided to get the heck out of there rather suddenly.

If any group of a few people who happened to be hanging around on the sidewalk knew they could make my car stop by just walking into the street -- instead of figuring that such behavior would likely provoke me to speed up and drive away -- they might well do that.

Besides which -- you know all that lovely technology meant to confuse detection to make a single missile look like a few hundred? Imagine blue-boxers building hardware to make one pedestrian look like ten.

Ha ha. Crunch.

Hm. Maybe there should be air bags all over the OUTSIDE of all vehicles as well as inside, eh?
"

Or ten pedestrians look like one? We already have issues enough with drivers and farmer's markets and such.
posted by Samizdata at 2:24 AM on October 25, 2015


This is a problem made up out of nothing. The Pluto Gangsta had it: you don't drive like an ass, you'll have enough space and time to stop so you don't have to decide. Computers will not be programmed to drive like asses, I hope, and so the real thing will be people complaining about how slow the cars are.
I can't wait for robot cars because driving is a fucking bore: exciting driving happens on the track. I don't live at the track. Time driving I could be doing something else? Fantastic, bring it on!
posted by From Bklyn at 2:25 AM on October 25, 2015 [10 favorites]


People who otherwise would have been careful drivers now suddenly have a strictly higher probability of dying.

Probably they don't in your example, since you are suggesting a reduction of deaths of 99.9%, and a large proportion of those deaths would have been people who were very careful and safe drivers but were victims of those who weren't, or of circumstance. If you broke the chance of dying down into different areas of probability - for example, own fault, other driver's fault, equipment failure, etc. - then maybe a really competent person might see an increased possibility of death in some categories, but not overall.
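
Putting purely illustrative, made-up numbers on that (nothing here is real accident data):

population = 320_000_000
deaths_now = 33_000
deaths_with_autocars = 33      # the hypothetical 99.9% reduction above

avg_risk_now = deaths_now / population             # ~1e-4 per person per year
careful_risk_now = avg_risk_now / 10               # suppose a careful driver runs
                                                   # a tenth of the average risk
autocar_risk = deaths_with_autocars / population   # ~1e-7, spread uniformly

print(f"average driver today:   {avg_risk_now:.1e} per year")
print(f"careful driver today:   {careful_risk_now:.1e} per year")
print(f"anyone, with autocars:  {autocar_risk:.1e} per year")
# Even the careful driver comes out ~100x ahead in this toy example; they'd
# have to be roughly 1000x safer than average today to end up worse off.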

However I suspect there are a lot of drivers who think they are excellent drivers but are not, and will object to having control taken away from them, even though it may increase their safety and the safety of others. I would guess this will be a big hurdle for public acceptability.
posted by biffa at 2:46 AM on October 25, 2015 [9 favorites]


* Parked autonomous cars maintain some minimal sensor awareness, and ping oncoming cars if there is something between them and the next car
* Cars adjust their speed to account for things like huge buildings with alleys that end 10' before the end of the road
* At particularly dangerous places, beacons to monitor the danger can advise approaching cars of current conditions
* Remove all windows from car interiors and replace them with screens. People should be facing backwards for maximum safety, anyway.
* Inflate any indicators of progress, like current speed, on any readout visible to humans inside. A lot of exotic cars and motorcycles already do this, from the factory. Tell people they're being whisked around at 90 mph when they're going 70, and very few will be able to tell.
posted by maxwelton at 2:53 AM on October 25, 2015 [4 favorites]


The more technology is in something, the more opportunities it has to fail and the less ability the average user has to fix the failure. My phone regularly bugs out and tells me to take exits that don't exist if I'm driving on a road that is above another road. So I have a lot of difficulty believing that self-driving cars will be less dangerous than human-driven ones. I get that we need to accept that premise to have an important moral debate, but I'm still assuming my car would drive me into oncoming traffic because it thought a plastic bag was ten pedestrians.
posted by Peevish at 4:12 AM on October 25, 2015


My house is a couple blocks from a k-3 elementary school. The street the school is on has a 25 mph speed limit, down to 20 when children are present. This has no effect on people's actual driving though, and unfortunately, the road is a connection between two arterials. They blast down the street at 40-plus mph, ignoring pedestrian right-of-way. I would happily take a robo car that followed the damn rules, with a simple brake for obstacles algorithm, over the humans any day.
posted by rockindata at 4:16 AM on October 25, 2015 [12 favorites]


Daisy, Daisy
Give me your answer, do...

posted by TheWhiteSkull at 4:27 AM on October 25, 2015 [1 favorite]


We need to find real -- and realistic -- scenarios to model

Human driver suddenly veers into robot car, sending it towards a group of people standing by the road.

But if autonomous cars are to sacrifice their occupants to save others, I fully expect the law governing human drivers to be updated accordingly - if you fail to kill yourself to save others, no matter how they turn up in front of your car, you're tried for manslaughter.
posted by hat_eater at 4:46 AM on October 25, 2015 [1 favorite]


How does sci-fi present it? The only example I can think of offhand is that in Leckie's Ancillary series the ships, for "emotional reasons" always protect their captains, i.e. drivers.

I know I trotted this example out in the last thread, but here's what happened on Knight Rider when K.I.T.T. faced off against an earlier prototype that lacked any kind of human preservation programming (and of course decided to go rogue). It's an interesting inversion though; in the scene Michael Knight's the one doing the driving, and K.I.T.T. tries to take over, knowing that his evil twin had no passengers and no regard for K.I.T.T.'s two occupants. Michael refuses to let K.I.T.T. take the wheel though, because he had figured out that there wouldn't actually be a collision.

As an example, I think it still has merit though, because it demonstrates one possible outcome to the whole ethics/liability issue: cars that can drive themselves, and do most of the time, but that are ultimately the responsibility of a human operator who can intervene in emergent situations. And while requiring a human to be available to take control might seem to negate most of the benefits of self-driving vehicles that would revolutionize public transit, and shipping, there's an out for that too. The relative rarity of events where the onboard computer would have to defer to human judgement could mean that one remote operator could monitor multiple vehicles and be ready to take them over, drone pilot style, when the need arises.
posted by radwolf76 at 4:47 AM on October 25, 2015 [1 favorite]


This is largely a solved problem. You program the robot to do what we already ask human drivers to do: to drive carefully, to observe speed limits and other road rules, and to be prepared to do certain things (mainly, try to STOP) at short notice.

We don't expect drivers to solve complex ethical decisions in the tenth of a second available to them. We ask them (knowing some are going to ignore us and drive too fast, talk on the phone or drink and drive) to be aware of their surroundings and to think ahead, but The Pluto Gangsta already described above what is expected of us in an emergency. That's all we need ask of the robots. If they can do better than we can, great!

Risk is managed via EULA. Ok, joking, though only a little. The nominal driver is responsible. Just like now. Car faults that cause a car to go out of control are nothing new. Treat robocars like that.

I'm far more concerned about other factors, such as security and privacy. There are already cars out there that can be hacked into and controlled from hundreds of miles away. Auto-driven cars need to be resistant to hackers, because assholes. They also need protections against data harvesting.
posted by Autumn Leaf at 5:36 AM on October 25, 2015 [4 favorites]


If any group of a few people who happened to be hanging around on the sidewalk knew they could make my car stop by just walking into the street -- instead of figuring that such behavior would likely provoke me to speed up and drive away -- they might well do that.

This (especially the racially-coded versions of "bad neighborhood" and "people hanging out on the sidewalk") is going to be a Thing that has to be resolved at both engineering and policy levels. One case of a middle class person being attacked by a group of minority youth in a Paris banlieue or an American rough neighborhood in this way will get about a trillion times more media attention than however many pedestrians get run over every day currently and will dominate the discussion.

Drivers are accustomed to having power over pedestrians and they won't easily give it up. For autonomous cars to gain acceptance, they are going to have to be programmed to retain most of that power, even though from a public health and a livable cities point of view it would be better if cars were subservient to pedestrians.
posted by Dip Flash at 6:25 AM on October 25, 2015 [4 favorites]


The more technology is in something, the more opportunities it has to fail and the less ability the average user has to fix the failure. My phone regularly bugs out and tells me to take exits that don't exist if I'm driving on a road that is above another road. So I have a lot of difficulty believing that self-driving cars will be less dangerous than human-driven ones.

People make errors and have car accidents all the time. That hasn't prevented us from allowing people to drive. Your difficulty believing is based on the assumption that technology will never improve enough to make fewer mistakes than people do.

Autonomous vehicles don't need to be perfect. They just need to be statistically better than people. Once we reach the point where autonomous vehicles have statistically fewer accidents than people, insurance companies will start charging people a fortune to drive themselves.
posted by Fleebnork at 6:51 AM on October 25, 2015 [5 favorites]


Yeah, I have to imagine that once the computers are so predictably docile, the next level will be for people to take advantage of this to the occupants' detriment. Can the autonomous car, for example, tell whether the person standing in front of the vehicle is pointing a gun at its occupants? And if it can, does it speed up or slow down?
posted by indubitable at 6:58 AM on October 25, 2015


Here's a real easy situation to imagine. Kid runs out from between two parked cars, invisible to the car until that point.

On a road with parked cars and pedestrian access sane urban planning would dictate a speed limit of 25mph or lower.

Whether cars are autonomous or not in this scenario is a red herring.
posted by srboisvert at 7:04 AM on October 25, 2015 [2 favorites]


If any group of a few people who happened to be hanging around on the sidewalk knew they could make my car stop by just walking into the street -- instead of figuring that such behavior would likely provoke me to speed up and drive away -- they might well do that.
This (especially the racially-coded versions of "bad neighborhood" and "people hanging out on the sidewalk") is going to be a Thing that has to be resolved at both engineering and policy levels. One case of a middle class person being attacked by a group of minority youth in a Paris banlieue or an American rough neighborhood in this way will get about a trillion times more media attention than however many pedestrians get run over every day currently and will dominate the discussion.

This seems to be the new talking-point objection to self-driving cars and the racial aspect is, as you mentioned, at most barely concealed just below the surface. There's a rational way to approach this, namely pointing out that beyond their relative rarity carjackings and the like tend to happen to people who live in the same neighborhood, and to drivers who have already stopped at a light, gas-station, etc. so it's not like we're giving up much of a human psychopathy edge. Evasive driving is a solution which works far better on TV than in real life.

Unfortunately, we all know how well that will fare in the post-CNN/FoxNews world so I'd hope instead the manufacturers focus on the fact that self-driving cars will already be crawling with cameras and data connections. Since you already have to do that just to make them work in the first place, there's very little overhead to adding a panic button which sends all of that to the local PD. That'd also seem desirable from a liability perspective and at least Google has already shown the benefits of avoiding arguments in court so it'd have actual real benefits beyond acting as a security blanket for affluent white suburbanites.
posted by adamsc at 7:07 AM on October 25, 2015 [3 favorites]


srboisvert: the autonomy seems like a key factor to me because simply posting a speed limit doesn't mean that a human will choose to drive anywhere near it. I live in a dense urban environment with a 25MPH speed limit, prominent pedestrian infrastructure, and plenty of “driver must stop for pedestrian” signs — none of which seems to make much of a difference to the entitled suburbanites commuting into work at 45+MPH, blowing through red lights and stop signs if they're late (i.e. 80% of the time) and honking or otherwise hassling pedestrians, bicyclists and drivers who follow the law.

The difference is that the car manufacturer has to worry about liability and they're going to put a lot of work into avoiding the inevitable lawsuit if a self-driving car hits someone while speeding. In contrast, human drivers tend to worry only about the rather low odds of a police officer being present and choosing to ticket them, and discount entirely the possibility of something like causing an injury or fatality, which can't be dealt with by writing a small check.
posted by adamsc at 7:23 AM on October 25, 2015 [1 favorite]


The nominal driver is responsible. Just like now. Car faults that cause a car to go out of control are nothing new. Treat robocars like that.

The problem I see with this approach will be that if something goes wrong under computer control then it will be perceived by many as unjust to send the nominal driver to prison if the car kills or injures anyone. There will be a lot of possible car buyers who will be put off by the idea they are responsible where they have no control and where the technology is dependent on either the programmer or the local mechanic who signs off on the tech. So the nominal driver potentially gets off the hook with a light sentence compared to today and what happens to the tech providers? Do they have any responsibility? Civil culpability?
posted by biffa at 8:15 AM on October 25, 2015 [1 favorite]


XMLicious: "What if it tried to jump over the crowd or motorcycle like in Dukes of Hazzard"

KITT! Turbo Boost!
posted by symbioid at 8:15 AM on October 25, 2015 [1 favorite]


I think any any conversation regarding robo-cars killing people needs to at least acknowledge that U.S. motor vehicle deaths are now hovering around 33,000 per year

Yeah, that's roughly a fully loaded 747 smashing into the ground every five days... week in week out, year after year.

It's the same in every industrialized country... I'm always amazed that it's just accepted as the price of personal freedom or capitalism or something
posted by fearfulsymmetry at 8:28 AM on October 25, 2015 [5 favorites]


The problem I see with this approach will be that if something goes wrong under computer control then it will be perceived by many as unjust to send the nominal driver to prison if the car kills or injures anyone. ... So the nominal driver potentially gets off the hook with a light sentence compared to today and what happens to the tech providers?

Drivers kill and injure people all the time and the only (externally-imposed) consequence they typically see is higher insurance rates. There's no reason to expect that nominal drivers of autonomous cars would be prosecuted unless they'd somehow fucked up egregiously enough to be analogous to situations where humans are prosecuted.
posted by ROU_Xenophobe at 8:36 AM on October 25, 2015


Car manufacturers will want to limit their liability for accidents. I can see something like "choose from these 10 program modes and then be responsible for the automated decisions made on your behalf", then good luck arguing in court if the 'insane driver' mode makes some bad decisions for you.
posted by Lanark at 8:47 AM on October 25, 2015


Drivers kill and injure people all the time and the only (externally-imposed) consequence they typically see is higher insurance rates. There's no reason to expect that nominal drivers of autonomous cars would be prosecuted unless they'd somehow fucked up egregiously enough to be analogous to situations where humans are prosecuted.

They will be prosecuted. Because every state will implement the same rule -- just because the car is self driving does not make you not responsible for its actions. You have to monitor and hit the E-Stop button should an error condition occur.

You crash a plane on autopilot? It's the pilot in command who is at fault, because they are not allowed to assume that the autopilot will safely get them to their destination.

If monitoring isn't mandated, then autonomous cars will be banned nationwide the first time one kills a bystander -- esp. since there would be a massive loop of lawsuits trying to find fault.

You may say this is illogical, but the vast majority of voters will say "I wouldn't have done that! These things are a menace" and quickly demand such laws. Never mind that they are almost certainly wrong -- assuming logic in public reaction is a bad idea. If we did, we would have banned driving long ago, or at least mandated far higher standards for both vehicle maintenance and operation certification. You know, like we did with aircraft.

Indeed, in our current climate, I do not see autonomous vehicles ever being accepted. It'll be all roses and utopia until the first person gets killed. Then it'll be infinite lawsuits, and no manufacturer, no matter what they say now, will accept that. They can't -- if they try, they'll go out of business.

Sort of like the telephone, or the Internet, or any other tech advance you want.
posted by eriko at 9:00 AM on October 25, 2015


Oh, and another reason they'll never be accepted.

They're going to drive legally.

They're going to drive the SPEED LIMIT.

And this is going to piss everyone off the first time they try them. C'MON, FUCKING MOVE YOUR ASS! WHAT'S WITH THIS 35 MPH BULLSHIT? (hits e-stop, leaves car in traffic, goes to dealer and buys a "real" car)
posted by eriko at 9:03 AM on October 25, 2015 [1 favorite]


They will be prosecuted.

Really? People are not prosecuted right now for the vast majority of incidents where a driver kills a pedestrian; is there something else that you assume will change along with autonomous cars that will lead to this happening?
posted by indubitable at 9:09 AM on October 25, 2015 [5 favorites]


eriko: I think that argument has a sharp generational divide: definitely true for many Boomers and older Gen X but I think a high percentage of people under 40 or so look at this as uninterrupted screen time. It's certainly seemed like public transit users are a lot more tolerant of schedule slippage now that everyone's checking email, Facebook, or playing games instead of watching the clock.
posted by adamsc at 9:11 AM on October 25, 2015 [2 favorites]


Agreeing with everyone who pointed out that if you end up having to make this decision you've already failed. I once heard a saying that was something like "If you have a mouth full of hot soup there are no good next moves". [I thought this was a legit Chinese proverb, but I can't find any evidence of that.] You have to take a step back and look at how you ended up in a situation with only bad outcomes - the moment tragedy becomes inevitable is what you have to focus on, not the moment when you are forced to choose between the lesser of two evils.

Like a human driver, the car will have to be aware of current braking distance based on speed and road conditions as well as the presence of anything that would obstruct its view of pedestrians. And at some point you have to accept that accidents are still possible.
posted by Horselover Fat at 9:48 AM on October 25, 2015 [5 favorites]


user92371: What if I were carrying a life-size painting of ten people across a road and a car did a header and killed the driver. Would the painting go to jail?

The prosecutor would seek the death penalty as paintings are usually hung.
posted by dr_dank at 10:06 AM on October 25, 2015 [9 favorites]


What if the jury is also a painting?
posted by Sys Rq at 10:15 AM on October 25, 2015 [2 favorites]


Death JonnyCab for Cutie.
posted by LastOfHisKind at 10:21 AM on October 25, 2015 [1 favorite]


I guess to sharpen the point made by me and others above: are there any new questions raised by autonomous cars that aren't already largely answered by existing laws? (Which is not to say those laws are morally correct, of course, just that there are answers of a sort already.) I'm sure manufacturers, legislators, and insurers will say that the liability is on the driver, who can choose to purchase and drive the car or not, but if they do, then they have the same rights and responsibilities as any other driver. Those who want to break the speed limit will (for a time) be free to buy regular cars, whereas those who want the lower insurance and to get in a few more games of Candy Crush will buy the automatic -- but the laws will pretty much be the same. Are there any really new issues then? I foresee many more issues being raised once driverless autonomous cars are out there, since although the owner will again be responsible, the car or truck can now take much more extreme measures to avoid hurting others.

This debate kind of reminds me of when I first reread Asimov's robot books. The first time through I had been in love with the interplay between the clean logic of the three laws, and the all-too-human society around them. The second time through, when I was a young teen and had discovered politics, I realized that the whole premise was criminal and a legacy of being written in apartheid America. Once these robots are even remotely human then the idea of subjecting them to these self-debasing laws is of course utterly immoral -- but then the whole plot machinery breaks down because they're just subject to the same existing laws as any of us. It's not to say that those laws are just, and Asimov could have instead made it more like the dual legal system for blacks and whites in America at the time, but the SF element -- the clean logic of enforced programming -- would still be lost to the messy reality of regular politics. As is somewhat the case with cars today.
posted by chortly at 10:28 AM on October 25, 2015 [2 favorites]


The idea of a self-driving car touches on ethical questions in a completely new way;

Not really, but it does allow us to renegotiate the political question of who is liable for the deaths.

there was a concerted campaign to make the victim liable for being killed by cars, and so this is what happens
posted by eustatic at 10:59 AM on October 25, 2015 [1 favorite]


The answer is really obvious. Look at all the SUVs on the road. People buy safety.

So I'm sorry ten-people-crossing-or-motorcyclist, you are *doomed*. The only SDC that's gonna sell is the one that's going to plow through them, carrying the driver and occupants in complete comfort and safety.
posted by storybored at 11:26 AM on October 25, 2015


storybored: "People buy safety."

People buy perceived safety. Most minivans are safer (for passengers and pedestrians) than most SUVs.
posted by Mitheral at 12:02 PM on October 25, 2015 [6 favorites]


Look at all the SUVs on the road. People buy safety.

Um...
posted by Sys Rq at 12:05 PM on October 25, 2015 [1 favorite]


The liability arguments around self-driving cars are all going to support the adoption of self-driving cars. Failing to use the safest among similarly-costly and effective technologies is effectively an admission of responsibility for any damage you cause, and at a sufficient gradient of safety you are even forced to accept meaningfully higher costs, because there is really no way to insure against it to create a net cheaper mechanism. (One of the reasons why our MPGs aren't higher despite far better engine technology is that safety has demanded far heavier vehicles.)

It's hard to figure out the right time for all of this ... but I wouldn't make long-term plans to work in commuter rail or car insurance or auto-body repair.
posted by MattD at 12:31 PM on October 25, 2015 [3 favorites]


To follow through on my earlier comment: until you prove you can achieve a low enough chance of a false positive, I think the best answer to these moral dilemmas is the one that's safest in the event of a false positive. (IE: try to come to a stop and avoid collisions)
posted by RobotHero at 1:35 PM on October 25, 2015
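
A rough way to see what that bias amounts to: score each candidate maneuver by its worst case across "the obstacle is real" and "the obstacle was a false positive," then pick the maneuver whose worst case is least bad. The sketch below is a toy illustration in Python, not anything from a real planner; the action names and harm numbers are invented purely to show the shape of the calculation, and with them braking in place wins because it is tolerable in either world.

    # Toy sketch: prefer the maneuver whose worst case (obstacle real vs.
    # obstacle imagined) is least bad. All names and numbers are made up.

    # Estimated harm of each maneuver as a pair:
    # (harm if the obstacle is real, harm if it was a false positive)
    ACTIONS = {
        "brake_in_lane":    (3.0, 0.1),  # moderate harm if real, near-zero if imagined
        "swerve_into_wall": (1.0, 9.0),  # great if real, terrible if the "crowd" was sensor noise
        "plow_ahead":       (9.0, 0.0),
    }

    def worst_case(harms):
        """Score an action by its worst outcome across both possibilities."""
        return max(harms)

    safest = min(ACTIONS, key=lambda name: worst_case(ACTIONS[name]))
    print(safest)  # -> "brake_in_lane" with these made-up numbers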


There has been some discussion lately about requiring human operators to be ready to take over if the robocar seems to be doing something wrong, or to intervene if there's a dangerous situation arising.

This seems to me to be a Very, Very Bad Idea. Not only will the humans be looking at their phones (or movies on the dashboard), but they will have minimal to no awareness of the current driving situation, may not be in a position (maybe legs are crossed, or arms are around the friend in the other front seat) to respond, and will have to switch awareness contexts very quickly indeed.

The notion that this responsible human driver should passively sit and pay attention to the road and all of the hazards around is absurd. I've seen (had it happen to me once or twice) drivers doze when the driving is boring and non-demanding. In a robocar, the human is even more likely to lose situational awareness and in such cases, requiring the human to take over is not unlikely to make the situation rather worse.
posted by Death and Gravity at 1:39 PM on October 25, 2015 [4 favorites]


On a slightly different note, while I look forward to self driving cars, I fear that issues such as these may lead to serious constraints on pedestrians and cyclists - to the point of making walking and cycling effectively impossible and illegal in many places.
posted by Death and Gravity at 1:41 PM on October 25, 2015 [1 favorite]


This seems like a silly question to be asking, given that:

1) I am vastly safer getting into an autonomous vehicle than driving a conventional one, even if the autonomous vehicle is programmed to place the safety of everyone else ahead of mine, I am in the top 0.001% of drivers, and/or I drive in a way that places my own safety ahead of anyone else's.

2) Everyone else is vastly safer with me in an autonomous vehicle rather than me driving a conventional one, even if the autonomous vehicle is programmed to protect me above all else, I am in the top 0.001% of drivers, and/or I drive in a way that is completely selfless.
posted by sourcequench at 2:59 PM on October 25, 2015 [6 favorites]


And we're safer still if you're not in a car at all.
posted by Sys Rq at 3:06 PM on October 25, 2015


Metafilter: The first option must always be to kill the poster.

Are trains and busses really that much asspain? In a rural area the only thing you need an AI driver for is staying in a straight line, because I don’t see anything doing farm trails and offroad as well as a human at a decent clip, economically at least, for a bit.

Self-driving busses would be... well, f'ing wonderful. We have just in time logistics chains with amazing computer assisted management and we haven't starved (yet). Moving people around in a manner coordinated with production/work time becomes a significantly less complex mathematical equation without masses of cars around navigated by millions of savannah-instinct impulse driven ape descendants.

“Making this into an "airplanes will never work" argument...”

In all earnestness, do airplanes still 'work'? Planes seem to be flying slower than they used to be (all considered) despite technological advances. Fuel, and whatnot. But it’s the whatnot in the self-driving car that seems most problematic. That and PEBCAW. And outside context problems that result in absurdities like police giving a guy a drunk driving ticket for being drunk on a horse in a woodland (ever see a horse in a head on collision with a tree on account of the horseman? Me neither).

“We need to find real -- and realistic -- scenarios to model, not ones that rely on human-like inattention and a lack of awareness of the status of the vehicle.”

All the real math depends on whether it’s actuarially sound (as polymodus sed). The 'death is better than dismemberment' thing. It costs less to kill in the U.S. too.

“Should a car swerve into parked or oncoming vehicles if a child suddenly runs ahead of it? I'm pretty sure the law says no, even if common sense says "hmm, maybe, I probably would."”

I know I can't make technical decisions faster/better than a computer. But people, good drivers/professionals, racers, tactical drivers, don't make sequential decisions but big picture decisions.

F'rinstance, I have killed speed using parallel parked cars and (given the situation) it was a better option than just braking/swerving because some vehicles can roll over.

And too in more mundane circumstances insurance and technology can conspire to limit otherwise reasonable, albeit not obvious in sequence, decisions.

For example - I hate anti-lock brakes, particularly in fall and winter. Even though I’m crawling under the speed limit, I get on wet leaves, my jeep shudders like a dog crapping ticks, same with ice. Several times I’ve had to fight the car. I hit the brakes and I start to slide forward in a straight line. That’s great if I *don’t* want to slide, but I’ve been in situations where it’s much better to kick the ass end out and change the profile so I angle off instead of sliding – slowly – into a tree head first. Four wheeling too. Gravel? Dirt? What are those?
So you have to lock the wheels on those surfaces.

Now that can be accounted for, but a large part of the problem is the assumption that an engineer 5 years ago and thousands of miles away from the situation can predict it better than you can when you’re right on top of it.

Mercedes, for example, programmed their M-Class braking system to stop at the shortest possible distance on gravel. Swell. But only in low range and under 20 mph.
Wha? I need help braking at 20mph? Pretty sure I can handle 20mph when I intentionally drive on a gravel road. No, I need help on the highway at night at 60mph with road construction putting loose gravel on top of hard pavement.

But, if you disable the anti-lock brakes, your insurance company gets pissy with you. In fact, the Mercedes G in the ‘90s had a kill switch. But laws were passed to prevent this, y’know, for your own good.
If I don’t want to drive, I’m happy to ride a bus. In fact for day to day I’m pretty happy to not have to drive at all.

If I’m going to be behind a wheel, I want a measure of control to compensate for dynamic, and ultimately unpredictable circumstances, not be hampered by technology that can’t consider all factors at once (I need to slide the car sideways, which is a good idea at this time even though it’s typically a bad idea) and has to – by design (e.g. how laws are written) – make linear decisions in order to make them look justifiable on the actuarial tables.
posted by Smedleyman at 3:22 PM on October 25, 2015 [1 favorite]


yes, the idea that humans will be able to take over in the event of autopilot failure is really misguided and done so we can feel better. As it stands, we have commercial airline pilots who are trained to stay at the ready, and yet can't help but tune out when not actively engaged. We already know how this is going to go. and that's with pilots who have much more rigorous training and restrictions. I can't imagine people wouldn't tune out after their first ride.
posted by [insert clever name here] at 6:05 PM on October 25, 2015


Probably they don't in your example, since you are suggesting a reduction of deaths of 99.9% and a large proportion of those deaths would have been people who were very careful and safe drivers but were victims of those who weren't or of circumstance. If you broke the chance of dying down into different areas of probability, for example, own fault, other driver's fault, equipment failure etc., then maybe a really competent person might see an increased possibility of death in some categories, but not overall.

I don't believe that is the point at all. I am suggesting that although automation will indeed improve the overall fatality rate by many orders of magnitude, it will be accompanied by the phenomenon of randomizing fatalities. The only cause will be computer failure in that there will be no such thing as user error. In the previous world, you at least could diagnose the proximate cause, e.g. a drunk driver was involved. In the new world, there's just your car computer, my car computer, and the mean time to failure in effect. And a further complication is you'll have new users wanting to drive places where they otherwise wouldn't have gone before—you get new unexpected usage patterns. All this gets extremely complicated to analyse, for both the engineers and the insurance companies. And somehow we're left to hope that the people in charge get this right.
posted by polymodus at 6:39 PM on October 25, 2015


"Who would buy a car programmed to sacrifice the owner?"

Well, certainly not at the premium level. Rolls Royce buyers will pay extra for the "human crumple zone" algorithm.
posted by CynicalKnight at 6:41 PM on October 25, 2015


I just want to know if the car is going to attempt a rescue of the Kobayashi Maru.
posted by um at 8:02 PM on October 25, 2015 [2 favorites]


Grey Ghost indeed.
posted by clavdivs at 8:05 PM on October 25, 2015


Okay this article is lame but there are interesting questions raised by the idea of self-driving cars having to perform moral calculations. For example, consider the following trolley problem variation:
You are the program controlling a self-driving car. Low visibility as a result of rainwater splashed on the car's main camera by a passing truck has prevented you from responding in time to the motorcyclist ahead of you going into a skid. As a result of this situation and a buffer overflow error, you are presented with the following choice:
  1. You can swerve left into an embankment and definitely save the motorcyclist. This will, however, almost certainly kill the -2,147,483,646 people inside the car — all of whom implicitly agreed to put their lives in your hands by entering the car.
  2. Or, you can swerve right, which will almost certainly kill the motorcyclist — who did not agree to put her life in your hands — but which will almost certainly save all -2,147,483,646 people inside the car.
Extra credit question: Does it change the moral calculus if one of the -2,147,483,646 is Gandhi? What if Gandhi has discovered and is threatening to use nuclear weapons?
posted by You Can't Tip a Buick at 9:50 PM on October 25, 2015 [3 favorites]
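
(The -2,147,483,646 figure reads as a signed 32-bit overflow gag: push an int32 passenger counter past its maximum of 2,147,483,647 and it wraps around into the negatives. A minimal Python sketch of that wraparound follows; the counter and its starting value are invented, and only the wrapping behavior is the point.)

    def as_int32(n):
        """Interpret an arbitrary integer as a signed 32-bit value (two's complement)."""
        n &= 0xFFFFFFFF
        return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

    passengers = 2_147_483_647             # INT32_MAX: the counter is already saturated
    passengers = as_int32(passengers + 3)  # three more "occupants" and it wraps around
    print(passengers)                      # -2147483646, the crowd in the scenario above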


A: Initiate a reset. If your vehicle hits the motorcyclist or kills your passengers while you are rebooting, claim in court that you suffered a kernel panic and that you remember nothing of the incident.

If you are carrying Gandhi to the sea shore, accelerate and warn him that if his wife looks back she will be turned to salt, ruining his political point. He will be so busy convincing his wife not to look back that their testimony will be worthless to the prosecutor. As a bonus, if he looks back himself and sees the crushed body of the cyclist he may choose nonviolent protest instead of nukes.
posted by Autumn Leaf at 6:23 AM on October 26, 2015 [1 favorite]


I'm honestly more interested in whether there will be new law to deal with unexpected emergent behaviors. For example, the software will likely go into a "limp home" mode if certain sensor checks go out of parameters, but a vehicle owner may not want to bother paying to fix it. After all, the commute is only a few miles so who cares if my car limits itself to a max speed of 12 mph? Except this now messes with everyone else on the road.

A car in a networked self-driving scenario generates a broadcast glitch which intermittently makes it appear to all the other cars on the highway that it is about to initiate a leftward lane shift, despite making no movement to do so. So, other vehicles driving nearby are repeatedly initiating momentary emergency brake routines and then resuming normal operation after shocking their occupants - and the offending vehicle's operator isn't even aware.

The car thinks the large pumpkin that just shifted in my passenger seat is now an occupant and refuses to operate until I buckle the seatbelt, requiring me to get out of the vehicle at a major intersection, walk around to the passenger side and adjust everything to proceed.

The National Weather Service has proclaimed that the conditions are unsafe for travel and everyone should stay in. My car agrees. I kick it into emergency override mode to try to get to the ER to see a friend. The system refuses to even help, disabling all assistive technology beyond anti-lock brakes. I get in an accident and find an engineer who will testify that the accident would have been avoidable if the manufacturer had allowed the auto-drive to assist my journey, and thus by entirely disabling it they put me at greater risk.

These are the sorts of issues I see as much more likely and thorny.
posted by meinvt at 8:18 AM on October 26, 2015 [2 favorites]
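
The phantom-lane-change glitch above is also the kind of failure a receiving car could partly defend against by cross-checking a broadcast intent against what its own sensors see the sender doing. A toy sketch of that idea in Python, assuming invented message fields and thresholds rather than any real V2V protocol:

    def lane_change_is_credible(broadcast_intent, observed_lateral_speed_mps,
                                threshold_mps=0.2):
        """Treat a broadcast 'lane change left' as credible only if the sender
        is actually drifting left according to the receiving car's own sensors."""
        if broadcast_intent != "lane_change_left":
            return False
        return observed_lateral_speed_mps > threshold_mps

    # The glitching car keeps announcing a leftward shift but never moves,
    # so the check fails and following cars can skip the emergency-brake routine.
    print(lane_change_is_credible("lane_change_left", observed_lateral_speed_mps=0.0))  # False
    print(lane_change_is_credible("lane_change_left", observed_lateral_speed_mps=0.6))  # True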


I used to be against autonomous vehicles because I thought they would continue to promote sprawl. Then I realized what autonomous vehicles will eliminate:

  • Aggressive driving
  • Distracted driving
  • Speeding
  • Rolling coal
  • Failure to yield to people walking or biking.
  • Failure to give proper space to people walking or biking
  • Toxic car culture in general

They can't get here soon enough. They would be a massive improvement even with the current state of the art.
posted by entropicamericana at 8:31 AM on October 26, 2015 [3 favorites]


I'm just going to go ahead and mark down "that thing I said about gandhi, nuclear weapons, and a buffer overflow bug" as my most obscure metafilter reference of all time, just barely edging out all those extremely topical and relevant jokes about Kronstadt that I've been making.
posted by You Can't Tip a Buick at 10:06 AM on October 26, 2015 [1 favorite]


also this:
A: Initiate a reset. If your vehicle hits the motorcyclist or kills your passengers while you are rebooting, claim in court that you suffered a kernel panic and that you remember nothing of the incident.
points to a maybe useful moral framework for determining which answer is best when confronted with a trolley problem, or with problems that can be reduced to trolley problems. To wit, you should always behave in a way such that it's difficult for legal institutions to pin the blame for any negative consequences of that choice on you.
posted by You Can't Tip a Buick at 10:10 AM on October 26, 2015


    "The car thinks the large pumpkin that just shifted in my passenger seat is now an occupant and refuses to operate until I buckle the seatbelt, requiring me to get out of the vehicle at a major intersection, walk around to the passenger side and adjust everything to proceed."

    Oh yes, the old "seatbelt interlock" (no seatbelt, no ignition) which was required on all US autos sold in 1974.
    posted by guy72277 at 3:30 AM on October 27, 2015


> to the point of making walking and cycling effectively impossible and illegal in many places.

Again, this is already the case. My parents moved out to an ex-ex-exurb with 0% walk score that you literally can't get to without a car. Try biking around there. It's awful.

People are already driving at maybe 30% attention. And it's only getting worse. I think we'll look back on the time after the invention of smartphones and before autonomous vehicles took over and see it as the bloodbath that it is.
posted by mike_bling at 12:41 PM on October 27, 2015 [1 favorite]


I'm a little confused why people think self driving cars will automatically be slower and more courteous than regular cars. I don't see what would be the use of buying them then? I mean why would any individual driver want to use it?

You would have more chance of broad adoption if robot speed limits were higher than regular speed limits - like, you can go 60 on the highway in a regular car, but 80 in a self driving car. That would be a good motivator.

I think there's a better than zero chance people would also pay people to "jailbreak" their cars so they could program their ethical parameters themselves.
posted by corb at 5:18 PM on October 27, 2015


Because people typically drive at least 5-10 mph over the speed limit currently and autonomous vehicles would be programmed to drive the speed limit and obey all the other traffic laws.

The adoption of AVs would be driven by any or all of the following factors: higher insurance rates for non-autonomous vehicles, government legislation (luckily, the right to drive is not in the Constitution), and cost (for some people it would make more sense to occasionally hail a self-driven Uber vehicle than own a car).

Jailbroken vehicles altered to speed and/or drive aggressively would probably stick out like a sore thumb, and punishments would be/should be onerous. (Then again, I think traffic fines should be proportional to one's wealth, so.)
posted by entropicamericana at 5:32 PM on October 27, 2015


corb: because 45 minutes in traffic is still a chore even if it's faster than the 53 minutes it'd take following all applicable traffic laws. As soon as the choice is between “stress out while tailgating the person ahead of you” and “answer your boss/client's email an hour earlier”/Facebook/Netflix, a lot of people are going to jump as soon as it becomes cost-competitive.

I'll second everything entropicamericana said for the likelihood of that happening sooner than it might seem, too. One simple example: I'd bet they'll catch on in Florida when the aging Boomer population wants to keep the freedom of driving despite failing eyesight and reflexes, not to mention even less need for restraint with the 4pm happy hour.
posted by adamsc at 5:49 PM on October 27, 2015


Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.

Some of this will be solved by adjusting "right of way" understandings between automated vehicles and people. A train will do what it can to stop when someone walks onto a track, but it isn't going to jump the track and sacrifice those who are on board and not controlling the train. It'll be a shift in how we understand cars, the purpose of roads, and how the balance of responsibility plays out between them.

We'll probably have campaigns to let people know that they need to stay out of the way of automated vehicles, and the understanding that more attentiveness is needed around these areas will probably become more commonplace. Hence, roads and driving areas will be considered to be more "automated," albeit more complicated. Infrastructure will probably change a bit to not allow people to be standing in groups in these automated areas as much. Places that can't be made protective like this will probably have requirements that drivers take over control of their automated vehicles in certain areas to retain responsibility. People will also be encouraged to be more attentive to these changes and bear some of the responsibility for watching out, just like we tell people not to stand on railroad tracks or in the way of a trolley, or on the interstate.

I do think a lot of these proposed scenarios are somewhat of a false dichotomy, as new physical infrastructure and sophisticated algorithms (much faster than a human) will be such that it's not choosing between killing a driver and killing a group of people, but rather avoiding both more often than not. But to solve the remaining problems, we'll have 1) more infrastructure created to make automated driving less likely to compromise pedestrians; 2) a switch of right-of-way understandings so that people need to stay out of automated vehicle areas (like we already do); and 3) campaigns to encourage people to watch the heck out, like we do for railroad tracks and other automated sorts of equipment. What will change is that we simply now have a lot more "railroad tracks." I don't think it will eventually require complicated Asimovian decision making to figure out liability issues, just creative physical problem solving and social solutions to limit actual danger and increase public awareness.
posted by SpacemanStix at 9:15 AM on October 28, 2015


We'll probably have campaigns to let people know that they need to stay out of the way of automated vehicles, and the understanding that more attentiveness is needed around these areas will probably become more commonplace.

If this happens, it will be a tremendous failure and a step backwards. Streets are ultimately for people, not vehicles.
posted by entropicamericana at 9:57 AM on October 28, 2015


Look I'm just saying you need to clip on this transmitter every time you're out somewhere the cars might get you. It's basic safety equipment like a bike helmet. Why are you getting so dystopian about this.
posted by RobotHero at 1:15 PM on October 28, 2015 [2 favorites]


This recent study of crash rates seems relevant even though the sample size is still small:
What the analysis did find, however, was that every crash an autonomous vehicle was in was caused by a driver of a conventional car. In addition, 73 percent of the crashes involving an autonomous vehicle happened when the car was going 5 mph or less, or while the car was stopped.

While 15.8 percent of crashes involving conventional cars involved a fixed object and 14 percent of crashes involving conventional cars involved a non-fixed object (like a pedestrian jay-walking), autonomous cars only ever collided with another vehicle. In addition, 3.6 percent of conventional vehicle collisions were head-on crashes, but autonomous vehicles have only thus far suffered a rear-end collision, a side-swipe, or an angled collision.
posted by adamsc at 5:52 AM on October 30, 2015 [1 favorite]


Money* already trumps safety when it comes to self-driving vehicles. As Mitheral pointed out, many people are killed by the skytrain in Vancouver every year. Instead of implementing recommended safety improvements ($50 million), more than $100 million was spent installing faregates and more will be spent to maintain them.

*While the faregates are not expected to recoup more money than they cost, they do make people feel like they're saving money by reducing fare evasion.
posted by congen at 12:37 PM on October 30, 2015 [1 favorite]




This thread has been archived and is closed to new comments