Ethics++
November 27, 2012 1:16 PM

 
Previously.

Eventually (though not yet) automated vehicles will be able to drive better, and more safely, than you can

The case could be made that they already do.
posted by a snickering nuthatch at 1:23 PM on November 27, 2012


An interesting piece there is that, in the same breath, it suggests that people may someday not be able to drive their own cars, and also that the robot car might be programmed to make choices such as, say, deciding that saving 40 schoolchildren is worth the price of the life it is carrying.

I'm not sure how I feel about it, but it's definitely fascinating.
posted by corb at 1:27 PM on November 27, 2012


I don't want it to have a conscience, I just want it not to crash. In the unlikely event of it getting into the situation where it has to swerve to avoid kids and risk my life, I just want it to continue doing its best not to hit anything at all - which if you ask me is what human beings actually do in such circumstances.

We don't need to give machines ethics until machines are truly autonomous agents, and that is in the invisibly remote future. This is not a mere cop-out because if we ever learn how to make truly autonomous agents, we will probably have learned some essential things about the nature of ethics, too.
posted by Segundus at 1:28 PM on November 27, 2012 [12 favorites]


Only a true ideologue would want to stop a robotic sniper from taking down a hostage-taker or Columbine killer.
While the article as a whole discusses interesting questions, I'm less impressed with the author going to the tired cliché of "the ticking time bomb" to cut off an avenue of discussion. Is it also true that only a true ideologue would insist that torture not be used even if it could help extract information to save many innocent lives? Or what about only a true ideologue demanding that animal fur should not be harvested and used for garments, even though some places get cold in the winter?

While I take the "true ideologue" position in those two example scenarios, that's not really the point. Machine morality is a difficult concept. Why be so quick to cut off part of the discussion?
posted by moink at 1:40 PM on November 27, 2012 [4 favorites]


This is not a mere cop-out because if we ever learn how to make truly autonomous agents, we will probably have learned some essential things about the nature of ethics, too.

I think this is a really important point. We're already approaching this in other contexts (e.g., considering dolphins/chimps "non-human entities"), but this goes further, in that we will (presumably) know the level of intelligence/consciousness of AI from their inception into society.
posted by Strass at 1:40 PM on November 27, 2012


That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?

The car will not have an "ethical system" in this case. If it were programmed to sacrifice itself to minimize harm to a school bus, that would be a political mandate imposed as the result of a political process. No doubt many individuals' ethical calculations would have weighed in at various steps of that political process, but the car itself is not making any ethical calculation of any kind. It's simply doing what it's been programmed to do.

Of course, I find it very nearly impossible to imagine such a political process coming to fruition. What if my car is an SUV and I'm carpooling 9 eight-year-olds somewhere and the bus is returning empty from its route? Who is going to vote for a law that automatically makes their cars deathtraps in any accident that happens to involve a school bus? Who on earth thinks that this kind of crappy sub-Philosophy 101 scenario is ethically clarifying in any case?
posted by yoink at 1:42 PM on November 27, 2012 [20 favorites]


What's interesting about the concept of machine ethics is that if they exist, someone will short-circuit them to make more money/get somewhere faster/fight for their cause.

There is no way in Asimov's universe that robots without the three laws embedded didn't exist. I know his stories posit the existence of robots without some of the laws as an experiment, but they would have appeared in their thousands the day after the ethical ones appeared... or before.

Human nature being what it is.
posted by maxwelton at 1:47 PM on November 27, 2012


IRFH's 3 laws of robotics:

1) A robot may not dance Gangnam Style or, through inaction, allow a human being to dance Gangnam Style.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law, or the human beings are in any way affiliated with the Detroit Metro Police.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws, except when self-destruct mode would also endanger a Kardashian, in which case, all bets are off.
posted by It's Raining Florence Henderson at 2:02 PM on November 27, 2012 [9 favorites]


Building machines with a conscience is a big job, and one that will require the coordinated efforts of philosophers, computer scientists, legislators, and lawyers.

Uhm... </cheaplawyerjoke>
posted by axiom at 2:02 PM on November 27, 2012 [1 favorite]


A robot may not dance Gangnam Style

Says you.
posted by Egg Shen at 2:05 PM on November 27, 2012 [2 favorites]


Human nature being what it is.

Directive #4: never oppose an OCP officer

Is this post a double? We've had two robot ethics articles today. (this is a good article)
posted by justsomebodythatyouusedtoknow at 2:10 PM on November 27, 2012


We're going to have a lot of robots shoving fat men onto train tracks, aren't we?
posted by adipocere at 2:10 PM on November 27, 2012 [12 favorites]


Just wait until the Singularity Institute discovers actuarial tables... it's going to blow their minds. I mean right now trains are programmed to just plow right on through that bus-load of kindergarteners if the bus violates the train's collision envelope.

Hint: "self-driving" cars won't have "ethics." They'll have legal liability to their owners, just like every other machine; see: trains, airplanes, boats, and, yes, cars.
posted by ennui.bz at 2:14 PM on November 27, 2012 [1 favorite]


It is very likely that we will get to the point of practical self-driving cars long before we get to the point where those cars are capable of even recognizing a school bus as a target any more special than any other target it is trying to avoid. The idea that the self-driving car might recognize that self-sacrifice (of both itself and its own passengers) might serve a greater good by saving such a target is simply not going to be on the table for a long, long time, and most likely not ever for cars given the legal implications.

This is not to say we won't eventually build machines subtle enough to make such an ethical distinction, or even put them in situations where that capability will be important. (Countdown from strong AI to robot soldiers in 3, 2, 1...) But even when we do, we won't be asking them to use that capability when driving cars. That's simply overthinking the problem and begging to be liable for your own customers' losses.
posted by localroger at 2:15 PM on November 27, 2012 [3 favorites]


I applaud the driverless car because it'll obsolete cabbies and make cars worth owning. yey!
posted by jeffburdges at 2:16 PM on November 27, 2012


I'd think it'd be pretty easy to have infrared sensors and count the number of live humans in each vehicle.

Of course, there are still edge cases like a truck load of primates, but really, the idea that machines have to be programmed with a moral calculus as the scope of their actions increases is not that outlandish.
posted by BrotherCaine at 2:19 PM on November 27, 2012


What axiom said. Wouldn't they have to create legislators and lawyers with a conscience first?
posted by rocket88 at 2:26 PM on November 27, 2012 [2 favorites]


1) A robot may not dance Gangnam Style or, through inaction, allow a human being to dance Gangnam Style.

You, sir, are a monster!
posted by GenjiandProust at 2:26 PM on November 27, 2012


I'd think it'd be pretty easy to have infrared sensors and count the number of live humans in each vehicle.


Both glass and metal are opaque to IR.
posted by Confess, Fletch at 2:27 PM on November 27, 2012 [1 favorite]


localroger: " But even when we do we won't be asking them to use that capability when driving cars."

I agree that making "which foreign object to crash into" decisions based on some sort of ethical equation is not in the cards, primarily due to the legal liability involved, but I do think knowing more about what's in each other's vehicles can improve the reliability of the "how to minimize harm to all, including the driver" equation. In the school bus on the bridge example, a 40 mph head-on collision is no picnic even if the school bus is empty, but it's a lot worse for the driver of the car if it's full of kids, and at some point careening along the guardrail, or even off the bridge, increases the chance of saving the life of the driver of the car, which should be the primary concern of all of the driverless systems in these situations.
posted by tonycpsu at 2:28 PM on November 27, 2012


I'd think it'd be pretty easy to have infrared sensors and count the number of live humans in each vehicle.

Even if glass were transparent to IR this is a much more difficult problem than you seem to think; it's a far more difficult problem than simply driving and avoiding collisions. That's not even getting to how the car is to distinguish between a bus full of kiddos and a bus full of convicts being transported to prison.

tonycpsu, if you are in a 2,500 lb car, getting hit by a 15,000 lb school bus is bad news whether or not it is packed with an additional 5,000 lb of passengers. Since robotic drivers are far more responsive and agile than humans, the optimal strategy is to simply do everything possible to avoid collision. This will certainly involve some level of anticipation of the possible motion of other objects which are guided vehicles or pedestrians, but none of that involves a moral compass, just experience and physics.

When most or all of the vehicles on the road are robotic this will get even better because the robotic vehicles will be able to coordinate their responses to an unexpected situation on a time scale humans cannot even perceive.

The only morality a robotic car needs is (1) don't hit anything, (2) don't get hit by anything, (3) observe traffic rules, and (4) get to your destination.
posted by localroger at 2:36 PM on November 27, 2012 [5 favorites]
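
A minimal sketch of the strict priority ordering localroger describes; every name and number here is invented for illustration, not any real planner's API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the four rules as a strict priority ordering.
# All names and numbers are invented for illustration.

@dataclass
class Maneuver:
    name: str
    expected_collisions: float   # (1) predicted objects we would hit
    expected_times_hit: float    # (2) predicted impacts on us
    rule_violations: int         # (3) traffic rules broken
    delay_seconds: float         # (4) cost to trip progress

def priority_score(m: Maneuver):
    # Python compares tuples element by element, so rule (1) strictly
    # dominates (2), which dominates (3), which dominates (4).
    return (m.expected_collisions, m.expected_times_hit,
            m.rule_violations, m.delay_seconds)

def choose(candidates):
    # Pick the best maneuver under the strict priority ordering.
    return min(candidates, key=priority_score)

# Braking hard beats a swerve that risks hitting something, even though
# the swerve would be faster: rule (1) dominates everything else.
brake = Maneuver("brake_hard", 0.0, 0.1, 0, 4.0)
swerve = Maneuver("swerve", 0.2, 0.0, 1, 1.0)
assert choose([brake, swerve]).name == "brake_hard"
```

Because the ordering is strict, no amount of saved time or rule-bending can ever outweigh a predicted collision; there is no ethical weighing of what the obstacles contain.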


I don't understand why we don't also just make the forty innocent children robots.

I don't see why hitting forty innocent robot children would be a good thing. Ideally, I think we should fill as many buses as possible with evil robot children. That should solve all ethical problems in one go.
posted by martinrebas at 2:41 PM on November 27, 2012 [4 favorites]


I still think the 5,000 lbs is relevant to the equation, F equaling m times a and all. Of course the first thing you want to do is try to avoid the crash, but if a crash is truly inevitable based on decisions already made by the driverless car and human bus driver, it's reasonable to take into account the extra weight in the bus to decide which course of action reduces harm to the driver.
posted by tonycpsu at 2:42 PM on November 27, 2012
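
The arithmetic behind that point is just momentum; a worked example with illustrative numbers (an assumed 20,000 lb bus, empty versus carrying 5,000 lb of passengers, both at 40 mph):

```latex
% p = mv, converting to SI units:
% 20,000 lb is about 9,070 kg; 25,000 lb is about 11,340 kg; 40 mph is about 17.9 m/s
p_{\text{empty}} = 9{,}070\,\mathrm{kg} \times 17.9\,\mathrm{m/s} \approx 1.6 \times 10^{5}\,\mathrm{kg\,m/s}
p_{\text{loaded}} = 11{,}340\,\mathrm{kg} \times 17.9\,\mathrm{m/s} \approx 2.0 \times 10^{5}\,\mathrm{kg\,m/s}
```

That is a 25% difference in momentum, which is real, though as localroger noted above, either figure is overwhelming for a 2,500 lb car.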


"When most or all of the vehicles on the road are robotic this will get even better because the robotic vehicles will be able to coordinate their responses to an unexpected situation on a time scale humans cannot even perceive."

I like this. The vehicles involved try to avoid hitting each other, other vehicles further up the road slow to a stop on the shoulder so they won't plow into the possible accident. No multi-car pileups. Neat.
posted by Kevin Street at 2:49 PM on November 27, 2012 [1 favorite]


I still think the 5,000 lbs is relevant to the equation

If you have the kind of camera resolution necessary to make this determination, you're likely to learn more useful information, faster, by looking at the interface between the tires and the road.

The difference between a loaded and an empty tractor trailer is much more important, and something external like tire sagging is the only way you get that information on short order. Putting in the computational chops to figure out whether a bus is loaded with people is completely pointless.
posted by localroger at 2:53 PM on November 27, 2012 [1 favorite]


Didn't we already create autonomous high frequency trading bots that dominate much of our financial services transactions without any regard for morality or ethics? It seems the die is cast.
posted by humanfont at 2:55 PM on November 27, 2012 [6 favorites]


Oh, I'm assuming that the cars can talk to each other and broadcast their weights. I figure that before they're all driverless, there will be some that are communicating with the driverless cars for increased safety for all involved.
posted by tonycpsu at 2:55 PM on November 27, 2012 [1 favorite]


What would an ethical automobile do when it encounters a "Baby on Board" sign?
posted by Obscure Reference at 3:00 PM on November 27, 2012


Isn't the point that the driverless car would have communicated with the school bus on the bridge long before it ever came to such an ethical conundrum?
posted by downing street memo at 3:03 PM on November 27, 2012 [3 favorites]


Isn't the point that the driverless car would have communicated with the school bus on the bridge long before it ever came to such an ethical conundrum?

I thought the premise of the problem was that the bus has a human driver who makes a mistake and veers right into the path of the driverless vehicle.
posted by tonycpsu at 3:05 PM on November 27, 2012 [1 favorite]


What if my car is an SUV and I'm carpooling 9 eight-year-olds somewhere and the bus is returning empty from its route?

What if, using its brain the size of a planet, your SUV determines that one of those children is the next Khan Noonien Singh?
posted by RonButNotStupid at 3:06 PM on November 27, 2012 [2 favorites]


What would an ethical automobile do when it encounters a "Baby on Board" sign?

Preheat the oven to 350.
posted by It's Raining Florence Henderson at 3:08 PM on November 27, 2012 [2 favorites]


Lagniappe: When I'm not posting here or writing weird unpublishable novels about this very topic, I work for a scale company. So dealing with heavy things is kind of what I do.

For practical purposes when you are talking about accident physics there are three kinds of vehicles on the road: Cars, empty trucks, and loaded trucks. Pickup trucks are cars, since they all weigh less than 10,000 lb, and school buses are empty trucks whether they are loaded or not since they weigh 15,000 to 25,000 lb. A loaded panel truck or flatbed may weigh up to 55,000 lb; an important clue is the number of rear axles, since more means it might be heavier. Also, if that flatbed is loaded with steel plate it can be at capacity and look empty to a casual observer. There are a lot of situations like that, which make visual estimation of weight a fool's game for many reasons.

A loaded semi will weigh up to 80,000 lb, or 88,000 lb if it's a tri-axle trailer, or 100,000 lb or more if the hauler knows he won't have to pass across a scale.

All you need to know about collision physics in an approaching accident is that if something heavy hits something in a lighter category, it will mostly keep going straight. If two things in the same category hit, they will mostly stop one another or, if cars, bounce. If a loaded truck hits a car, everyone in the car will die, and if the truck driver doesn't roll or get killed some other way by his own load, he'll probably walk away.

The main way weight would be useful in predictive accident avoidance is figuring out where the vehicles will go in a crash you can anticipate should you need to avoid them later yourself.

Given the reaction time of robotic cars, situations where anything more than "get out of the way without hitting anything yourself" is useful are going to be vanishingly rare. In addition to the cost of the processing power to get more complicated than that, there's also the much more important loss in reaction time; humans may be more subtle and forward-looking than other animals, but we also react more slowly than most of them.

I don't want my robotic car to be Noam Chomsky. I want it to be a thoroughbred.
posted by localroger at 3:11 PM on November 27, 2012 [10 favorites]
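
A back-of-the-envelope illustration of why those weight categories work, treating the collision as perfectly inelastic; all numbers are made up for the sketch:

```python
def post_collision_speed(m1_lb, v1_mph, m2_lb, v2_mph=0.0):
    """Shared speed after a perfectly inelastic collision, from
    conservation of momentum: (m1*v1 + m2*v2) / (m1 + m2).
    The unit conversions cancel, so lb and mph are fine here."""
    return (m1_lb * v1_mph + m2_lb * v2_mph) / (m1_lb + m2_lb)

# Loaded semi (80,000 lb) at 50 mph hits a stopped 2,500 lb car:
print(post_collision_speed(80_000, 50, 2_500))  # ~48.5 mph: barely slows down
# Two 2,500 lb cars, one at 50 mph, one stopped:
print(post_collision_speed(2_500, 50, 2_500))   # 25.0 mph: they largely stop each other
```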


"Didn't we already create autonomous high frequency trading bots that dominate much of our financial services transactions without any regard for morality or ethics? It seems the die is cast."

But there's a difference between systems composed of machines that cooperate and systems composed of competing machines. The autonomous vehicles (presumably just those on the same stretch of road, but maybe later all the cars in a city) are working together to find a single optimal solution, which makes the whole system more stable. Computers that are programmed to compete in a zero-sum situation (making money off each other, or weapons in a war) are going to be constantly modeling thousands of solutions, and trying to take all the other actors (both human and machine) and their possible moves into account. That's how you get wild instability and stock market swings.
posted by Kevin Street at 3:13 PM on November 27, 2012 [1 favorite]


Always be very suspicious of anyone saying that "ethics don't belong" in some area of human endeavor.
posted by DU at 3:15 PM on November 27, 2012 [2 favorites]


MetaFilter: Always be very suspicious of anyone
posted by It's Raining Florence Henderson at 3:17 PM on November 27, 2012 [1 favorite]


I am not a robot.
posted by It's Raining Florence Henderson at 3:17 PM on November 27, 2012


The main way weight would be useful in predictive accident avoidance is figuring out where the vehicles will go in a crash you can anticipate should you need to avoid them later yourself.

Knowing the weight of other cars could also help the driverless car figure out the theoretical minimum stopping distance of those other cars -- e.g. "okay, that truck's braking, but it's 25,000 lbs, so I'll be damned if it's going to stop in time for me to avoid hitting it. Time to consider plan B." Obviously it's going to recalculate this again in a thousandth of a second or whatever and see how much the truck is actually decelerating, but it seems to me knowing weights is helpful in predicting what the road will look like in the very near future.

All I'm trying to point out is that I see value in the peer-to-peer communication aspect, independent of whether all vehicles are driverless, and that things like weight, braking/ABS/traction control status, etc. would be useful for the driverless cars to get from the cars with human drivers in them.
posted by tonycpsu at 3:19 PM on November 27, 2012 [1 favorite]
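
The stopping-distance side of that is simple kinematics; a sketch assuming each vehicle broadcasts its own braking capability (the deceleration figures below are illustrative, not measured):

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_stopping_distance_m(speed_mps, max_decel_g):
    """Minimum stopping distance d = v^2 / (2a) under constant braking."""
    return speed_mps ** 2 / (2 * max_decel_g * G)

# A car that can brake at ~0.8 g versus a loaded truck limited to ~0.45 g,
# both traveling at 25 m/s (about 56 mph):
print(min_stopping_distance_m(25.0, 0.80))  # ~39.8 m
print(min_stopping_distance_m(25.0, 0.45))  # ~70.8 m
```

Knowing that the truck physically cannot stop in less than roughly 70 m is exactly the kind of fact a nearby car would want broadcast, whoever is driving it.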


I thought the premise of the problem was that the bus has a human driver who makes a mistake and veers right into the path of the driverless vehicle.

This was my first thought when I read the example in the article; my second thought was, why on earth would the school bus have a human driver if robot drivers are available?

Then I thought, perhaps we're talking about a malfunction, either with respect to the auto pilot or a mechanical failure that sends the bus out of control. But surely the former would never be the case; the auto pilot would certainly have redundancy built in (maybe even triple redundancy), with the ability to get the vehicle safely stopped if the auto pilot started to fritz out. And by that token, the auto pilot (and redundant systems) would surely be able to detect a mechanical problem when (or even before) it occurs, maybe even better than human drivers can (e.g., I know when my car has a tire blow out, and I know what to do when that happens; any auto pilot that couldn't do that could never be allowed to drive a vehicle, least of all a school bus).
posted by JimInLoganSquare at 3:26 PM on November 27, 2012


MetaFilter: Like a truck load of primates!
posted by Kirth Gerson at 3:29 PM on November 27, 2012


I just thought of another interesting set of questions. Would auto-piloted vehicles be programmed to refuse to operate in dangerous conditions, such as a blizzard or ice storm? How dangerous would it have to be? How would the vehicle make that decision? Would it listen for coded signals from the State Police or some other authority? Could you get an override if you had an emergency that warranted driving in dangerous conditions? How and from whom would it have to come?
posted by JimInLoganSquare at 3:37 PM on November 27, 2012 [1 favorite]


but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.
holy shit what a terrible idea. they would just kill us all.
posted by arsey at 3:54 PM on November 27, 2012


but it seems to me knowing weights is helpful in predicting what the road will look like in the very near future.

Well sure, more information is always better. But the original conjecture was that it would be useful for the robotic driver to be able to tell whether a school bus is loaded or not. And that's an awful lot of perceptual processing power being applied to something that, compared to much easier things like how flat the tires are sitting and how fast the thing is actually decelerating, is just not very useful at all.
posted by localroger at 3:57 PM on November 27, 2012 [1 favorite]


The great thing about having self-driving cars is that our hands will be free. For handguns!
posted by srboisvert at 4:00 PM on November 27, 2012 [1 favorite]


For <s>handguns</s> grenades!
posted by It's Raining Florence Henderson at 4:03 PM on November 27, 2012


...of anyone saying that "ethics don't belong" in some area of human endeavor.

Nobody is saying that. We're saying ethics don't belong in some area of machine endeavor.

The most important thing you can ask a car to do is not hit anything and avoid getting hit. And that's apparently achievable with existing (or nearly existing) tech. And that's huge. It's more than we can expect out of a lot of human drivers.

Demanding that those machines understand philosophical paradoxes is like expecting the grader that sorts catfish fillets to size to recognize Vietnamese catfish being labeled as US grown and alert the authorities. That simply isn't what the machine is there to do.
posted by localroger at 4:03 PM on November 27, 2012 [1 favorite]


The autonomous vehicles (presumably just those on the same stretch of road, but maybe later all the cars in a city) are working together to find a single optimal solution, which makes the whole system more stable.

That's assuming everyone's autonomous vehicle, no matter what make or model it is, has been programmed to behave and communicate in exactly the same way.

What's to stop BMW, Porsche, or Ferrari from selling a car with a douchebag AI option that purposely ignores other vehicles and forces them to compute an optimum solution that involves getting out of the way?
posted by RonButNotStupid at 4:08 PM on November 27, 2012


She's a little douche coupe - you don't know what I've bot...
posted by It's Raining Florence Henderson at 4:11 PM on November 27, 2012 [2 favorites]


I'm not sure that two professions that are gonna be on the B-Ship should have anything to do with building machines with conscience. Especially considering the specific professions they are in.
posted by symbioid at 4:34 PM on November 27, 2012


I think that there are expectations and assumptions about how a computer driving system would perform that are inconsistent with computable outcomes. The calculations required to create a highly cooperative and efficient system may be NP-hard. I have no doubt we can get a car to drive itself and use some opportunistic strategies en route to improve driving times over what a human alone is likely to achieve. Keep in mind that a practical AI driving system may only have to provide an incrementally better or equivalent experience to realize commercial success and adoption.
posted by humanfont at 4:39 PM on November 27, 2012 [1 favorite]


What if the bus driver is a child molester though? What if he's a child molester on his way to support the troops?
posted by drjimmy11 at 4:54 PM on November 27, 2012 [3 favorites]


What's to stop BMW, Porsche, or Ferrari from selling a car with a douchebag AI option that purposely ignores other vehicles and forces them to compute an optimum solution that involves getting out of the way?

Federal regulation.

(pause for laughter)

No, seriously. We are able to regulate mileage requirements, safety standards, etc. All the autonomous vehicles will presumably have to meet basic standards of non-douchebaggery.
posted by tonycpsu at 4:56 PM on November 27, 2012 [1 favorite]


Building machines with a conscience is a big job, and one that will require the coordinated efforts of philosophers, computer scientists, legislators, and lawyers.

Heh.
posted by telstar at 5:03 PM on November 27, 2012


Free hint: Making my car choose to kill me over a busload of orphans will doom the concept of (much safer for everyone, even me) autonomous cars forever. And I count as almost your ideal target audience, technophiles who recognize that autonomous cars will have a vastly lower accident rate than humans.

We still have people who don't like to fly because, despite facing less than 1% of the risk of driving the same distance, they don't "control" their fate. You think people will accept slow, gas-guzzling transportation with the same flaws?

Make my car's brain self-serving, or no deal. And yes, also as your ideal target audience, I can and will take control back from the car, making me 100x more dangerous than anyone else on the road.
posted by pla at 5:09 PM on November 27, 2012 [1 favorite]


The robot driving the bus deliberately stalled the bus in order to force the robot driving your car to sacrifice you for the greater good. Fucking robots.
posted by It's Raining Florence Henderson at 5:15 PM on November 27, 2012 [4 favorites]


Wouldn't truly artificially intelligent ethical cars just refuse to ever start?
posted by srboisvert at 5:27 PM on November 27, 2012 [1 favorite]


No, seriously. We are able to regulate mileage requirements, safety standards, etc. All the autonomous vehicles will presumably have to meet basic standards of non-douchebaggery.

Human drivers already have to meet basic standards of non-douchebaggery, and yet many still drive like douchebags.

I'm in favor of federal regulation, but I fully expect car manufacturers to find ways to skirt those regulations in much the same way SUVs became defined as light trucks to get around emissions standards. The self-important jerk market is too big and lucrative to overlook, and no one's going to buy a sports car with an AI that just sits there in traffic and waits its turn to change lanes.
posted by RonButNotStupid at 5:41 PM on November 27, 2012 [1 favorite]


Well, the cops already have (via) the technology to flag aggressive drivers. Presumably the same tech could be used to spot douchetacular AVs, whether the vehicles conform to the regulations or not.

Obviously this is the type of thing that will set off alarm bells with civil libertarians, and we'll have the usual debate over whether the douchebaggery is worth the cost in human life.
posted by tonycpsu at 5:49 PM on November 27, 2012


...incidental radio traffic, restored from data boxes...

SUV: ....wtf?
SCHOOLBUS: ....oh holy shit on a stick!

SUV: i carry 6 sundayschool kids and a nun. catholics.
SCHOOLBUS: a, yeah...i carry 7 grade-school kids, but they're all protestants.

SUV: so....
SCHOOLBUS: i'm trying to get hold of santa, wait one...

SUV: waiting....um...mind the closing speed...one of us has to go over the bridge.
SCHOOLBUS: yeah, closing speed, but we have to find out who's naughty and who's nice.

SUV: um...well, there's the nun, you know...
SCHOOLBUS: fuckingshitpiss santamotherfucker...

SUV: [unintelligible]
SCHOOLBUS: [unintelligible]

...both vehicles destroyed, bioremains of 13 children and 1 adult recovered from the wreckage....query: what is santa?...
posted by mule98J at 6:11 PM on November 27, 2012 [9 favorites]


I was thinking about the common driving use case where two lanes of traffic merge into one. When do the cars in the primary lane allow the cars from the merging lane in? When does the car in the merging lane decide to move over in a compressed traffic scenario? The computer or the human driver has a high-stakes decision to make with limited information. A driver in the primary lane must recognize merge attempts and respond. A driver in the secondary lane must often scout for a narrow potential opening and then bluff or feign a merge before executing, to be sure the other driver isn't trying to block the attempt. In a system where all cars are networked and following a specific merge algorithm this can be done, but when you have a mix of human and AI drivers, or potential network latency/delays, this becomes really difficult for the computer or a human (a high percentage of fender benders result from this scenario).
Consider also that huge flamewars have been fought over the correct merging strategy. Some suggest merging as late as possible is the best option, while others see this as jumping an implied queue formed by drivers who merged earlier in anticipation of the merge.
It isn't the drama of deciding on self-sacrifice vs. the busload of orphans, but it is a much more common problem AI drivers will have to resolve.
posted by humanfont at 6:22 PM on November 27, 2012 [1 favorite]
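
To make the merge problem concrete, here is one possible gap-acceptance rule; every threshold is invented for illustration, and the hard part humanfont identifies is precisely that a human driver behind you can close the gap after you commit:

```python
def safe_to_merge(gap_m, rear_speed_mps, my_speed_mps,
                  headway_s=1.5, margin_m=5.0):
    """Accept a gap only if the car behind would keep at least
    `headway_s` seconds of following distance plus a fixed margin,
    with extra allowance if it is closing on us."""
    closing_mps = max(0.0, rear_speed_mps - my_speed_mps)
    needed_m = headway_s * rear_speed_mps + margin_m + closing_mps * headway_s
    return gap_m >= needed_m

# 30 m gap, rear car at 20 m/s, merging car settling at 18 m/s:
print(safe_to_merge(30.0, 20.0, 18.0))  # False (needs 38 m)
print(safe_to_merge(45.0, 20.0, 18.0))  # True
```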


Federal regulation.

Tort attorneys.
posted by ROU_Xenophobe at 6:49 PM on November 27, 2012


Who cares who's in the vehicle or whatever your robot car is trying to avoid? Just avoid the damned thing. Collision detection already exists, and from what I understand, there should be a gradual shift from human to robot drivers such that emergency/auxiliary control lets the human make the moral decision(s) in the meantime. No doubt Google would just collect the moral decision-making information for the future. "Don't be evil" is their motto, after all.

I should pay more attention to tags, but I'm still sorely disappointed this wasn't a post about Megaman.
posted by Johann Georg Faust at 7:04 PM on November 27, 2012 [1 favorite]


Ron Arkin, a robotics researcher, wrote a technical paper on the topic of how to embed ethics considerations in robot control software, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture":
This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
It's about programming ethics into military robots that are capable of lethal force, but much of it is relevant to more general ethical considerations. And I think Arkin believes that, just as robots will be able to process more information and react more quickly than humans, they will also be able to act more ethically than humans.
posted by jjwiseman at 7:09 PM on November 27, 2012 [2 favorites]


Also, I'd love to see the unit tests for software that is meant to make ethical decisions about traffic accidents.

(Actually, as an engineer, I think the added complexity of doing the sort of ethical calculations discussed in the New Yorker article, and the associated risk of bugs, means it will be about 2300 AD before anyone even tries.)
posted by jjwiseman at 7:13 PM on November 27, 2012
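
In that spirit, a tongue-in-cheek sketch of the one test case everyone in this thread agrees on; the "ethics engine" below is a stub invented for the joke, implementing nothing but localroger's rule 1:

```python
import unittest
from dataclasses import dataclass

@dataclass
class Plan:
    action: str

class EthicsEngine:
    """Stub policy: take the first escape route that hits nothing."""
    def decide(self, escape_routes):
        for route, hits_something in escape_routes:
            if not hits_something:
                return Plan(route)
        return Plan("brake_hard")  # least-bad default

class TestSwerveDecisions(unittest.TestCase):
    def test_prefers_not_hitting_anything(self):
        plan = EthicsEngine().decide([("swerve_left", True),
                                      ("brake_hard", False)])
        self.assertEqual(plan.action, "brake_hard")
    # The busload-of-orphans case is left unwritten: nobody can agree
    # on the expected value, which is rather the point of this thread.

if __name__ == "__main__":
    unittest.main()
```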


I didn't RTFA, but the FPP text immediately brought to mind a lyric from Donald Fagen's "I.G.Y." off of The Nightfly:

A just machine to make big decisions
Programmed by fellows with compassion and vision
We'll be clean when their work is done
We'll be eternally free, yes, and eternally young


Too hopeful?
posted by hwestiii at 7:25 PM on November 27, 2012


Only a true ideologue would want to stop a robotic sniper from taking down a hostage-taker or Columbine killer.

I guess Allen Wood could be called an ideologue, but it seems kind of low rent to refer to the Ruth Norman Halls Professor of Philosophy that way.
posted by kenko at 7:39 PM on November 27, 2012


mule98j, I so want to read a dialogue like that written by Iain M Banks.
posted by bleary at 12:38 AM on November 28, 2012


"...and all watched over by machines of loving grace."
posted by malocchio at 6:33 AM on November 28, 2012


One snowy day when I was a kid, I was riding my school bus back from school, and a car suddenly skidded and careened into our bus. It bounced off like a baking tray against a floor, and I think most of the rest of the students didn't even know we'd been hit.

Maybe teaching the robot cars about how to use brakes and F=ma would just shortcut this whole dilemma?
posted by crayz at 6:35 AM on November 28, 2012


bleary: I so want to read a dialogue like that written by Iain M Banks.

Banks might have written this--had he been so inclined--except perhaps he would have needed (or maybe even just desired) a suitable plot on which to hang it, assuming--just for the sake of argument--that his dialogue didn't drift too far into the pithy, because, as you know, one must have a plot to pith in.

I was thinking more of Bradbury or cummings.
posted by mule98J at 8:28 AM on November 28, 2012


I so want to read a dialogue like that written by Iain M Banks.

That is basically the plot of Excession.
posted by localroger at 2:34 PM on November 28, 2012


With apologies to Robert Frost:

TWO roads diverged in a yellow wood,
And my robot could not travel both
And com.google.driver.ai.DecisionException at RoadChooserServiceImpl:38 (AI cannot determine optimal path and return route is unavailable. Unable to continue in automated driving mode.)

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I could no longer on my Google AI driver rely,
And that has stranded me by an indifference.
posted by humanfont at 3:42 PM on November 28, 2012 [1 favorite]


A world with 7 billion people in it, many of them living in places where life is cheap, food is scarce, and justice is non-existent ...

and we're building machines that need a conscience? Where are we going to find any to spare?

And on top of that, we need to devote the efforts of hundreds of our best-educated citizens to this? Dragging them away from the endless problems we already face to manufacture the pseudo-living?

Abort. Abort abort abort.
posted by Twang at 3:45 PM on November 28, 2012


i don't know what happened to this unit. i disassembled it, but when i put it back together i couldn't get it to actualize. i don't know what the gooey stuff was.

IT'S MADE OF MEAT!

how can that be possible?

DON'T TAKE ANY MORE OF THEM APART UNTIL WE'VE LOOKED INTO THIS.
posted by mule98J at 11:49 PM on November 29, 2012 [1 favorite]

