Deus ex machina
August 19, 2014 7:48 PM

Patrick Lin discusses ethics, responsibility and liability related to safety programming in self-driving cars: Robot Cars With Adjustable Ethics Settings.
posted by paleyellowwithorange (31 comments total) 9 users marked this as a favorite
 
Previously
posted by Johnny Wallflower at 8:29 PM on August 19, 2014


Is there a setting for punishing people who cut into lines of traffic?

It's eventually going to be proven that building Asimov's three laws into a non-trivial AI is equivalent to the halting problem, so we're all doomed anyway.
posted by qxntpqbbbqxl at 8:49 PM on August 19, 2014


Fascinating stuff for ethicists and engineers, that the lawmakers and courts will misconstrue horribly anyway (as they always do with anything remotely technical).
posted by escape from the potato planet at 8:53 PM on August 19, 2014


I think the eventual goal is to have a system that is carefully crafted enough and with enough constant awareness and safety predictability that the trolley problem actually doesn't surface. The car is able to avoid both problems with enough information. So instead of this:

"Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right."

It would be this:

"Thankfully, your autonomous car saved their lives by being aware of this possible deviation, detecting the wheel movement, and slowing down before impact."

Or something functionally similar.

I think we can always come up with a "what if" scenario, but most of them surface from a lack of imagination: we don't consider how alert cars could be designed to be, how capable and fast-acting components will become, and how an ever-increasing interrelationship between intelligent roads and cars will create a physical, interacting safety net. In other words, what if the system is constructed carefully enough that it is almost synonymous, safety-wise, with being on rails? It's interesting to consider ethical problems with less-than-ideal systems, but I find it more interesting to consider that we could be setting up an infrastructure for an almost ideal system.
posted by SpacemanStix at 9:27 PM on August 19, 2014 [1 favorite]


I think the eventual goal is to have a system that is carefully crafted enough and with enough constant awareness and safety predictability that the trolley problem actually doesn't surface. The car is able to avoid both problems with enough information.

Of course. But the ethical questions come up in the edge cases, and need to be accounted for.
posted by jaguar at 9:52 PM on August 19, 2014


"but I find it more interesting to consider that we could be setting up an infrastructure for an almost ideal system."

This is a really bad approach to take when actually engineering a safety-critical system, though. Especially because, given the volume of miles driven on a daily basis, you're going to hit the horrible scenarios. The proper way to plan and engineer systems like this is to ask "what is the worst possible thing that could happen?" and then design around the answers you get. Otherwise you get Philip Franklin saying "I thought her unsinkable" with regard to the Titanic.
posted by kavasa at 9:57 PM on August 19, 2014


Of course. I just think that a lot of "worst-case scenarios" are possibly not actual worst-case scenarios, due to a lack of imagination about engineering possibilities, and that we can possibly devise a system in which edge-case scenarios are close to zero. Of course, close to zero is not zero, and it's worth grappling with those edge cases. It just might not have a significant impact on social or legislative concerns at the end of the day, when all of the engineering possibilities mature. But who knows. I might be overly optimistic about the future.
posted by SpacemanStix at 10:20 PM on August 19, 2014


It amazes me that people are willing to put so much effort in to circle-jerky mind games about the ethics of self-driving cars. Here's an ethical thought experiment for you: there are 6.5 million accidents a year in the US, and ~100 people are killed every day in car accidents. Even at this early point in their development, Google's self-driving cars are way safer than normal drivers (700,000 accident-free miles as of April 2014).

With all these stats in mind, do we have an ethical imperative (once the technology is fully ready) to ban human drivers, considering the massive unchecked loss of life they cause? I would say we do.
posted by Itaxpica at 10:43 PM on August 19, 2014 [13 favorites]


Developing cars that will get the manufacturers sued regularly will prevent the manufacturers from producing such cars. Figuring out the ethics is a way to move the manufacturing forward.
posted by jaguar at 10:53 PM on August 19, 2014 [3 favorites]


Driverless cars provide an opportunity for ethicists to bring up these classic problems before a wider audience. The problems are useful because they highlight the ways in which our value systems are potentially incoherent. I'm not persuaded, though, that driverless cars raise new ethical questions of practical importance for car manufacturers or the designers of these algorithms.

I think one problem is the notion that the software contains a clear logical calculus that can be understood in detail by its designers. The inputs to the machine learning algorithms are massive, and probabilistic in nature, with thousands of weighting factors. Various patches can be placed on top of the resulting system to control its behavior in places where it unaccountably deviates from what we wish, but in some fundamental way, we just cannot say why the system made one choice over another. We can only observe that over some series of related situations there seemed to be a preference for one or the other outcome. We can choose to weight the outcomes differently, reevaluate the inputs with this new weighting, and verify that this made a difference in our test cases, but we still don't know precisely what outputs some slightly different set of real cases might then generate.
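(A toy illustration of that opacity, with everything invented and nothing like the real systems: the "decision" is just an aggregate score over thousands of weights, and all we can really do is adjust how outcomes are penalized and re-run our test scenarios.)

```python
# Toy illustration only: the "policy" is an opaque pile of learned weights.
# There is no line of code where it "decides"; we can only re-weight how
# outcomes are penalized and check that test scenarios now come out better.
import random

random.seed(1)
weights = [random.gauss(0.0, 1.0) for _ in range(5000)]  # stands in for a trained model

def score(features):
    """Aggregate of thousands of factors; no single factor explains a choice."""
    return sum(w * f for w, f in zip(weights, features))

def choose(candidates, outcome_penalty):
    """Pick the candidate outcome with the lowest combined score plus penalty."""
    return min(candidates, key=lambda c: score(c["features"]) + outcome_penalty(c))

# "Re-weighting the outcomes": penalize near-misses heavily, then re-check test cases.
cautious = lambda c: 1000.0 * c["near_misses"]
test_case = [
    {"name": "swerve", "near_misses": 1, "features": [random.random() for _ in range(5000)]},
    {"name": "brake",  "near_misses": 0, "features": [random.random() for _ in range(5000)]},
]
print(choose(test_case, cautious)["name"])  # under this weighting the near-miss-free option wins
```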

The trolley problem is an artificial one, useful for pedagogic purposes and as a tool to help us clarify our thinking. It is not a practical one. There are not two choices, but hundreds. Steer to the left. Steer a little more to the left. Brake harder. Brake earlier. Sound your horn. Detect erratic behavior on the part of the driver and slow down. Slow down regardless. Do some combination of all of these. Do this, observe the new situation (did the people move? In what direction? Did the car skid? Can we brake harder?), make another set of choices, and repeat. The choices extend across a span of time; there is an entire series of choices, not a single, binary moment of decision.
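A toy sketch of what that repeated series of small choices might look like (the action set, cost model, and numbers are all invented, not anyone's actual planner):

```python
# Toy sketch only: at every tick the controller scores many candidate actions
# against the current state, applies the least-harmful one, observes the new
# state, and repeats. There is no single binary "trolley" moment, just a long
# series of small choices made very quickly.
CANDIDATE_ACTIONS = [
    {"steer": s, "brake": b, "horn": h}
    for s in (-0.2, -0.1, 0.0, 0.1, 0.2)   # small steering adjustments (radians)
    for b in (0.0, 0.3, 0.6, 1.0)          # fraction of maximum braking
    for h in (False, True)
]

def expected_harm(state, action):
    """Stand-in for a real cost model: penalize closing on obstacles, plus
    mild penalties for harsh inputs so the car prefers gentle corrections."""
    predicted_gap = state["gap_m"] - state["speed_mps"] * (1.0 - action["brake"]) * 0.1
    return max(0.0, 10.0 - predicted_gap) + 0.1 * abs(action["steer"]) + 0.05 * action["brake"]

def tick(state):
    best = min(CANDIDATE_ACTIONS, key=lambda a: expected_harm(state, a))
    return best  # apply it, the sensors produce a fresh state, and we do it all again

print(tick({"gap_m": 12.0, "speed_mps": 15.0}))  # a gentle option wins when there's room
```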

This is true for us as well as for the machines. The machines are just faster than we are: they can make hundreds of decisions in the time it takes us to consciously make one or maybe two. Our attention comes and goes; the machine's is constant. Our evaluation of the situation is more nuanced; the machine's is more comprehensive, informed by more data and more types of sensory input. At some point we observe that the machine's decisions, across a wide variety of traffic situations, are better than ours in the aggregate, with no clear outlying failures, and decide it is OK to provisionally cede responsibility to the machine.
posted by dougfelt at 10:55 PM on August 19, 2014 [8 favorites]


I dunno. These problems are fun and challenging, but they so often assume a unified policy maker who comes up with an optimal, efficient, or ethical solution, and as far as I can tell, the policy makers we have are hopelessly compromised by baser interests (as we all are), and so are the solutions.

The points in these discussions that reference insurance and liability, on one side, and money to be made on the other, seem the most relevant, given that view.
posted by notyou at 12:27 AM on August 20, 2014


We are apparently hypothesizing a road where there is no path your car can go without hitting someone. So even if you didn't swerve, the car should never have let you drive down this street fast enough to kill someone.

For this dilemma to work, we're going to have to fill the road with people without warning. Maybe they were all cleverly camouflaged and suddenly jumped up, or they just fell out of a helicopter.
posted by RobotHero at 12:29 AM on August 20, 2014 [1 favorite]


I think the thing is, if you're driving a car with autonomous capabilities but you've taken the wheel for yourself, you've hit the Big Red Button that shuts off the car's AI and have taken over responsibility for yourself.

At least, everything I've read about self-driving cars suggests that the cars drive themselves 100% up until the point where someone hits the Big Red Button (something that, according to what I have read, doesn't happen often these days), and at that point the human inside the vehicle is in control.

This isn't about the ethics of the AI, it's about the ethics of the human controlling it.

AI-driven cars are paying attention to things in a radius around them that extends much further than most human drivers even strive for, let alone are capable of.

I drive for a living, have for nearly 15 years now, have zero accidents on my record, and spend most of my driving time observing and anticipating things that others are patently not paying attention to, or are reacting to very late.

Everything I have read about self-driving cars (and I've read a lot, articles totaling well over a hundred pages at this point, most of them long-form journalism) tells me that they are seeing further than I am, are more aware of hidden and masked hazards than I am, have some minor (or even major) ability to see around corners and through bushes that I do not have...

This thought experiment is cute enough, and I'm sure it's great fodder for people trying to come up with reasons why we should not let AI drive our vehicles, or for how to create profit motives or deterrents to allow or disallow such things... But it fails to account for one very simple base fact about self-driving cars that is entirely different from the auto-braking or smart-cruise-control autos already on the road today: in a self-driving car, the humans are not driving the vehicle at all, unless they press the Big Red Button.

In which case, the outcome of any decisions made about what the car does doesn't lie with the AI at all.
posted by hippybear at 1:56 AM on August 20, 2014 [3 favorites]


I think that the ethical questions, while fun, miss the actual point, which is that computer error will lead to someone's death. This is inevitable really, as edge cases do exist: the computer will do something bizarre, leading to the "driver's" and possibly others' deaths. Now of course this will be much less frequent than a human doing something nuts on the road, but it will be noticeable, because the behaviour will seem utterly alien to us, based as it is on a malfunctioning algorithm (see the IBM Jeopardy contest for some bizarre points of failure from an otherwise very good AI).
posted by Cannon Fodder at 2:05 AM on August 20, 2014 [2 favorites]


I'm sure that it'll make giant headlines and breathless cable news stories when the first person is killed by an autonomous car but as was pointed out above, it's not like humans are having any trouble killing each other off without help from the machines.
posted by octothorpe at 4:12 AM on August 20, 2014 [2 favorites]


I think that the ethical questions, while fun, miss the actual point, which is that computer error will lead to someone's death.

There used to be elevator operators. We somehow got used to the idea of elevators opening and closing doors and adjusting speed and taking us to the exact floor all by themselves, and I'm sure the accident rate per passenger has gone way down since some guy in a funny suit did all that for us manually. Maybe Otis and the rest get sued every now and then when something horrible happens, but it hasn't stopped them from selling elevators or us from using them.

And I hear airliners are now flying on something called autopilot while the human pilots snooze. Imagine all those innocent people zooming through the air at hundreds of miles per hour high above the ocean at the mercy of a computer program.

Just count the corpses and the decision between manual and driverless cars will be easy. There will be someone killed now and then because a driverless car does something unexpected, and companies will be sued if it looks like they were negligent in testing their products, but there will be a lot more people still alive because driverless cars behave very safely and predictably all the rest of the time. Think of drunk driving alone: it would take some major fucking up in driverless car design to make it worth keeping a system that every year in America grinds up and spits out ten thousand corpses and I don't know how many horrible injuries.
posted by pracowity at 5:35 AM on August 20, 2014 [4 favorites]


I don't disagree that a driverless car future would be orders of magnitude better than our current situation; I just think that it will take a lot of effort to persuade people of it.
posted by Cannon Fodder at 5:48 AM on August 20, 2014


I endorse this post.
posted by deus ex machina at 6:21 AM on August 20, 2014 [2 favorites]


Autopilot in aircraft can work because there is high-level control over the system that allows the autonomous systems leeway in their decision making, and if a situation arises that they cannot handle, there is a highly trained operator to fall back on, which is what happens in any emergency situation (as I understand it). Aircraft with autopilot are multi-million-dollar vehicles that receive regular maintenance. Air traffic is pretty unconstrained with regard to where it can operate. With automotive traffic, there is no high-level control, and all cars travel in highly constrained traffic corridors in close proximity, so it's not apples to apples.

How can you expect a computer to make the decision to stop rather than swerve and hit someone when the shitheel owner of the car is running on bald tires, glazed-over brake rotors, bad tie rods, etc., etc.? You can assume that in the near future sensor prices will drop to the point that we can just put a sensor on everything, but then each sensor becomes another maintenance point that can break, and you end up with a vehicle so complex that maintenance becomes a nightmare.

Aside from the practical consideration of maintenance complexity, take into account that the system we currently have is an imperfect operator (a human) operating a vehicle. With an autonomous vehicle, you have an imperfect operator (a computer) operating a vehicle. It's a sideways move. Sure, a computer can crunch data like a champ, but you still have a lack of high-level control over the system. Every computer in every car is still making decisions based on what is optimal for it. Sure, you can wirelessly communicate with vehicles around you, but there is a practical limit to the amount of collaboration that can be accomplished this way, and that assumes all cars are capable of it, which is certainly not going to be the case in the early stages of the technology rollout. Without some high-level traffic controller (big brother), I just don't think it is feasible.

The more I think and hear about autonomous vehicles, the more I am convinced that they are the flying cars of this generation. A great idea that will be rendered unrealistic because of practical concerns.
posted by dudemanlives at 6:51 AM on August 20, 2014 [1 favorite]


Given that these AI cars will be based on hardware and software, and given the general decline in quality (things are cheaper and faster, but what about overall reliability?), should we trust these systems to always make the right decisions? If software engineers can't ensure that programs are bug-free, what about these extremely complex systems of both hardware and software operating in a very rapidly changing and unpredictable environment? I keep having visions of riding along with the AI driving and, just as you approach a tight turn on a mountain road, the blue screen of death appears...
posted by njohnson23 at 9:10 AM on August 20, 2014


Have the people coming up with all kinds of reasons why they'll never trust automated cars ever seen how humans drive cars?
posted by octothorpe at 9:28 AM on August 20, 2014 [4 favorites]


just as you approach a tight turn on a mountain road the blue screen of death appears...

Hopefully, they would have the good sense not to run it on Windows. From the hardware and software perspective, reliability should not be an issue. I work in industrial automation and control; safety-rated hardware and software have come pretty far in the past 10 years. I use programmable controllers over safety-rated Ethernet connections to monitor safety systems on most of the equipment I design, and it's pretty well proven in the real world. So the technology exists in that respect. And safety-rated wireless communication buses are definitely in the works, and probably not too far in the future (according to some vendors).

Like I said before, though, I think the bitch of the whole thing is coordination. It's all well and good to have communication between vehicles, with each vehicle acting autonomously, but I can't envision how this would work without some kind of supervisory control scheme.
posted by dudemanlives at 10:19 AM on August 20, 2014 [2 favorites]


One of my great modern fears is that articles like this will actually contribute to driverless cars being delayed from broad acceptance and release, thereby causing untold numbers of avoidable human deaths from sleepiness, texting, drunk driving, speeding, distraction, or just general human stupidity.
posted by Inkoate at 11:31 AM on August 20, 2014 [1 favorite]


If software engineers can't ensure that programs are bug-free

Software engineers CAN ensure that programs are bug-free, or at least very close to it. They do it all the time in the software that controls airplanes, nuclear power plants, pacemakers, factories, construction equipment... it's not that software can't be made bug-free, it's just that it's extremely expensive to do so, so it generally isn't, except in highly safety-critical applications - which I think everyone agrees is a category that self-driving cars fall into.
posted by Itaxpica at 11:39 AM on August 20, 2014 [3 favorites]


One thing that could mitigate many of your concerns, dudemanlives, is to make autonomous vehicles owned by the manufacturers and available to the public on a long-term rental/lease basis only. This would be a huge blow to the enthusiast market but would address all the maintenance issues.
posted by Mitheral at 12:17 PM on August 20, 2014 [2 favorites]


How can you expect a computer to make the decision to stop rather than swerve and hit someone when the shitheel owner of the car is running on bald tires, glazed-over brake rotors, bad tie rods, etc., etc.?

Wouldn't these also be problems for any human driving the car, though?

Also, you don't need to have a direct sensor for literally every part that could go wrong, because you can use feedback from the sensors you already have, much like a person would (e.g., you can tell how well the brakes are working by observing what happens when you use them).
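Very roughly (the numbers and names are made up, just to show the feedback idea):

```python
# Rough illustration of inferring brake health from sensors the car already
# has: compare the deceleration you commanded with what the wheel-speed /
# accelerometer data say actually happened, and flag a persistent shortfall.
def brake_health(commanded_decel_mps2, measured_decel_mps2, history, window=50):
    """Returns a 0..1 effectiveness estimate; values well below 1 suggest
    worn pads, glazed rotors, bald tires, etc. (all thresholds are invented)."""
    if commanded_decel_mps2 > 0.5:              # only learn from real braking events
        history.append(measured_decel_mps2 / commanded_decel_mps2)
        del history[:-window]                   # keep a rolling window of recent events
    return sum(history) / len(history) if history else 1.0

history = []
for commanded, measured in [(4.0, 3.1), (5.0, 3.8), (3.0, 2.4)]:
    estimate = brake_health(commanded, measured, history)
print(f"estimated brake effectiveness: {estimate:.0%}")  # -> roughly 78%
```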

And as others have said already, self-driving cars already exist and have a safety record that beats human operators -- which seems rather unlike the case with flying cars.
posted by en forme de poire at 12:21 PM on August 20, 2014 [2 favorites]


I am convinced that they are the flying cars of this generation.

Remember:
  1. There are already driverless cars on the road, zooming alongside all the manually driven cars without incident. This is not something that just might happen.
  2. A road that is 100 percent driverless vehicles -- which is what will ultimately come -- is much safer than a road full of driverless and manual vehicles mixed. No drunks, no nodders, no searchers for yard sales, no harried parents, no boy racers. No surprises.
  3. People will want them very much. For example, how many trips are made every day by parents driving children to and from places? How many of those trips could be handled by driverless cars instead of parents? Let the kids take themselves to hockey practice or ballet. Let the car drop the kids at school in the morning, with a school representative waiting on the other end to pluck the kids from the waiting car. Let the car go all by itself to get an inspection and tuneup while you're at work.
And they're coming to the UK in January.
posted by pracowity at 12:26 PM on August 20, 2014 [1 favorite]


Unfortunately, one solution to car ethics, "let's stop manufacturing fucking death machines for the convenience of some," has so far gained little traction.
posted by threeants at 1:10 PM on August 20, 2014


Lin does talk as though the car has a very robust ability to predict the consequences of its actions yet failed to predict this potential dilemma early enough to avoid it.

But he's not really arguing against self-driving cars, he's using it as a launching pad for ethics. I mean, he's talking about setting the car to value people based on religion and sexual orientation. How would the car even know?



Johnny Wallflower linked to a previous Wired article written by Lin, and in the related posts there's Mathematics of Murder, which quotes Patrick Lin and links to a third Wired article by Lin titled The Robot Car of Tomorrow May Just Be Programmed to Hit You.

The vast majority of the ethical decisions around cars are not these trolley problems of choosing one life over another, or the many over the few. They are decisions about how much you will risk a life for the sake of convenience. If everyone drove slower, that would save lives. The same goes for these hypothetical situations: every single scenario you can imagine where the car has time to choose between lives but doesn't have time to just stop is a scenario where the car would have had time to stop if it had been driving slower. And if it collides with someone at a slower speed, they are less likely to die from it.
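A back-of-the-envelope example of that speed/convenience trade (assuming roughly 7 m/s² of braking deceleration, and ignoring reaction time):

```python
# Back-of-the-envelope: braking distance grows with the square of speed,
# d = v^2 / (2a). The ~7 m/s^2 deceleration is an assumption for illustration.
DECEL = 7.0  # m/s^2

def braking_distance_m(speed_kmh):
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * DECEL)

for speed in (30, 50, 70):
    print(f"{speed} km/h -> {braking_distance_m(speed):.1f} m to stop")
# 30 km/h -> 5.0 m, 50 km/h -> 13.8 m, 70 km/h -> 27.0 m: the slower car often
# stops before the collision ever happens, and if it does hit, it hits at a
# far lower speed.
```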

I remembered the Google prototypes had a low speed limit built into them, and when I searched for that, I came across this article pointing out that the Google cars will exceed the speed limit as long as other nearby cars are also exceeding it. So there you have something programmed to technically break the law because everyone else is.
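My guess is the logic amounts to something like this (a guess on my part, with an invented margin and a median, not Google's actual rule):

```python
# A guess at a "match surrounding traffic, within a cap" rule. The 10 mph
# margin and the use of a median are invented for illustration.
import statistics

def target_speed(speed_limit_mph, nearby_speeds_mph, max_over=10):
    flow = statistics.median(nearby_speeds_mph) if nearby_speeds_mph else speed_limit_mph
    if flow <= speed_limit_mph:
        return speed_limit_mph                    # traffic at or under the limit: drive the limit
    return min(flow, speed_limit_mph + max_over)  # only exceed it to match the flow, capped

print(target_speed(65, [72, 74, 70, 73]))  # 72.5: go with the (mildly speeding) flow
print(target_speed(65, [85, 88, 90]))      # 75: capped at limit + 10
```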

If we stretch the discussion to include animals, I haven't heard whether Google cars have been tested on ducklings walking across the road. Once these things are on highways, they'll need to handle deer and skunks and sloths and whatever. But if they're on Christmas Island during the crab migration or something, will they be useless, or will you be able to force it to risk running over the crabs?
posted by RobotHero at 10:38 AM on August 21, 2014


I guess I'm just saying the trolley problems make for a compelling story that gets Wired pageviews but there are other avenues to discuss that will actually come up in practice more often.
posted by RobotHero at 10:51 AM on August 21, 2014


If we stretch the discussion to include animals, I haven't heard whether Google cars have been tested on ducklings walking across the road.

No one knows what will be legislated, but technically a driverless car would be able to work with other driverless cars over distance. A car that saw something moving in the roadway would be able to stop and examine the problem (compare images to a database) and ask the passengers or an off-site traffic cop for a recommendation, and would also be able to warn all the cars behind it that it is stopping, so there would be much less chance of baby animals causing a horrendous pileup.

Then some human passenger could press the emergency Stop button, jump out into the middle of the road, clear the ducks or crabs away, and get back in, while all the cars behind it patiently waited for the All Clear signal from that person's car. The Stop button might also alert the local authorities (call 911), so you would need to be able to explain why you pressed the Stop button to halt all the traffic on the Brooklyn Bridge during rush hour, and you would presumably get into a bit of trouble if you were just fucking around.

So technically, the driverless cars would be able to handle the situation better than most humans do, but it all depends on how driverless cars are ultimately implemented as a system, not just as individual machines. If the government said all vehicles have to share traffic information with all other vehicles within some distance, you would get much safer cars than you'd get if all vehicles ran independently. Working with shared information, your car would know that there was a train coming towards the railroad crossing ahead, that something (ducks or crabs, as it turns out) was running across the highway you planned to enter at the next on-ramp, and that a dump truck with weak brakes was coming around the bend at 75 mph in your direction, and your car would be able to react accordingly.

Your car could also pass that information down the line to cars behind it in the traffic grid. A minor thing might not be passed down very far, but information about an accident blocking traffic would alert emergency services and be passed way down the line so cars could get out of the way for ambulances and formulate alternate routes.
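Sketched very crudely (all the message fields, hop counts, and names are invented):

```python
# Crude sketch of hazards being relayed down the traffic grid: each car
# re-broadcasts a report to the car behind it, decrementing a hop budget that
# depends on severity, so minor things die out quickly while a blocked-road
# accident propagates far back. Every field name here is invented.
from dataclasses import dataclass, replace

@dataclass
class HazardReport:
    kind: str          # "ducklings", "accident", ...
    location_m: float  # distance along the road
    severity: int      # 1 = minor slowdown, 5 = road blocked
    hops_left: int

def initial_report(kind, location_m, severity):
    return HazardReport(kind, location_m, severity, hops_left=severity * 10)

def relay(report, cars_behind):
    """Pass the report back through the chain of following cars."""
    for car in cars_behind:
        if report.hops_left <= 0:
            break
        car.receive(report)
        report = replace(report, hops_left=report.hops_left - 1)

class Car:
    def __init__(self, name):
        self.name = name
    def receive(self, report):
        print(f"{self.name}: slowing/rerouting for {report.kind} "
              f"(severity {report.severity}, {report.hops_left} hops left)")

relay(initial_report("ducklings", 1200.0, severity=1),
      [Car(f"car{i}") for i in range(1, 15)])  # only ~10 cars back hear about the ducks
```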
posted by pracowity at 11:38 PM on August 24, 2014




This thread has been archived and is closed to new comments