The point of this game is *not* to up the body count...
April 26, 2019 10:47 AM

Autonomous cars and the trolley problem: We've talked about self-driving cars and the trolley problem before, but now there are adorable interactive graphics of people being squashed by a moving vehicle.
posted by jacquilynne (104 comments total) 15 users marked this as a favorite
 
I still wanna know what happens when there's an overflow somewhere in a self-driving car's code, and it suddenly decides that it sees -2,147,483,648 people that it might hit.

like the rational play there is to swerve toward those -2,147,483,648 people, right? if you hit -2,147,483,648, that's functionally the same as saving 2,147,483,648 lives.
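a toy demo of the wraparound, for anyone who hasn't watched a signed 32-bit counter roll over (the pedestrian counter is made up, obviously):

    # toy demo of signed 32-bit wraparound (Python ints don't overflow, so we
    # simulate what a fixed-width counter in C-ish firmware would do)
    def as_int32(n: int) -> int:
        n &= 0xFFFFFFFF                  # keep only the low 32 bits
        return n - 2**32 if n >= 2**31 else n

    people_seen = 2_147_483_647          # INT32_MAX
    print(as_int32(people_seen + 1))     # -2147483648: one detection too many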
posted by Reclusive Novelist Thomas Pynchon at 10:56 AM on April 26, 2019 [27 favorites]


My problem with this kind of thing is simple: it will give people an outsize impression of what autonomous cars will plausibly be programmed to do. No, your autonomous car won't have bespoke AI deciding on someone's age. It simply won't be programmed to know, so it won't be able to make that decision.

Your car will brake. That's it. In the face of confusing data (moving people are unpredictable), the safest thing to do is to brake as hard as possible without losing control, so that is what your car will do.
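Roughly, the whole imagined ethics module collapses to something like this minimal sketch (every name and input here is invented, not anyone's actual code):

    # minimal sketch of the "just brake" policy; all names are invented
    def plan(obstacle_in_path: bool, max_stable_decel: float) -> dict:
        # no age lookup, no casualty arithmetic: the planner only knows that
        # something is in the path and how hard it can brake without losing control
        if not obstacle_in_path:
            return {"action": "continue"}
        return {"action": "brake", "decel": max_stable_decel}  # hardest stable braking, straight line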
posted by BungaDunga at 10:57 AM on April 26, 2019 [61 favorites]



Tangent! The LA Dodgers baseball team name comes from their Brooklyn days and a nickname for local pedestrians trying to avoid the numerous streetcars — “Trolley Dodgers”.

posted by Celsius1414 at 10:57 AM on April 26, 2019 [22 favorites]


Oh, they get to it, eventually: "Ryan Robert Jenkins, assistant professor of philosophy at California Polytechnic State University, said his industry sources are quick to brush aside the trolley problem. He expects companies will program their cars to brake in a straight line even if there is a potential to save lives by taking other actions."

Yes. This is because that's the only sane way to program a car. AI won't know whether someone's a child or a group of children or a group of elderly folks or a pregnant person or whatever, because you'd have to proactively train it to recognize them, and nobody would do that because that's an insane thing to spend time on when you're trying to make a car that can even recognize people consistently at all.
posted by BungaDunga at 11:00 AM on April 26, 2019 [17 favorites]


In the face of confusing data (moving people are unpredictable), the safest thing to do is to brake as hard as possible without losing control

Which for the most part will be consistent with human drivers, except in most cases the autonomous car will brake faster. Humans would either not have time to weigh the ethics (and just brake) or try and make an ethical decision in the moment (which will probably mean failing to brake earlier and likely making things worse).
posted by thefoxgod at 11:01 AM on April 26, 2019 [11 favorites]


I don't think self-driving cars will ever be able to deal with the uncountable things that happen in cities that would confuse their programming, and my fear is that instead of deciding they're a bad idea, we'll further constrain the mobility of anyone not in a large moving metal box.
posted by Automocar at 11:03 AM on April 26, 2019 [22 favorites]


I suppose the one case I can think of is where braking is impossible and swerving slightly might avoid a human, at the risk of sending your car over the median. On an empty road, this is probably the right thing to do if the AI is very good at swerving. If there's oncoming traffic, it's probably a bad idea.

So there are very specific scenarios which are important, but they're not exactly trolley problems; they're much harder. Does a large-but-unknowable chance of avoiding someone make up for a small-but-unknowable chance of losing control of the car and running off the road or into oncoming traffic? That's a hard question, especially when there is absolutely no time to really know what the probabilities on each side are.
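If you wrote the comparison down, it's just an expected-harm calculation, and the sketch below makes the real problem obvious: every number in it is invented, because the car has no good way to estimate any of them in the moment.

    # toy expected-harm comparison between braking straight and swerving;
    # all probabilities and weights are invented, which is exactly the problem
    p_hit_if_brake    = 0.6   # chance hard straight-line braking still hits the person
    p_crash_if_swerve = 0.2   # chance the swerve loses control / crosses the median
    harm_hit, harm_crash = 1.0, 1.5   # relative harm weights, also invented

    expected_brake  = p_hit_if_brake * harm_hit
    expected_swerve = p_crash_if_swerve * harm_crash
    # with these made-up numbers swerving "wins"; nudge them slightly and it flips
    print("swerve" if expected_swerve < expected_brake else "brake straight")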
posted by BungaDunga at 11:07 AM on April 26, 2019 [2 favorites]


Autonomous cars are the personal jet pack of the 21st century.
posted by The Whelk at 11:07 AM on April 26, 2019 [18 favorites]


Your car will brake. That's it. In the face of confusing data (moving people are unpredictable), the safest thing to do is to brake as hard as possible without losing control, so that is what your car will do.

Drive at full speed on a busy freeway, with a cement mixer truck tailgating you and just as you're overtaking a semi, pretend your sensors have failed and brake as hard as possible. Then have your next of kin post about how that was the safest possible thing to do. There's a lot of uncertainty around autonomous vehicles, but the one certainty is that anybody saying there's a straightforward solution is wrong.
posted by Homeboy Trouble at 11:08 AM on April 26, 2019 [12 favorites]


Imagine the scenario where someone programs a car to know the difference between a 30-yo and a 90-yo and uses it to make decisions on who should be avoided. That's a litigation nightmare. Literally no company that doesn't rhyme with Besla would do that, because they'd be sued into the ground.

Much easier just to run down pedestrians and cyclists in a totally blind manner so you can blame them for being shaped wrong for the AI to notice them.
posted by BungaDunga at 11:10 AM on April 26, 2019 [6 favorites]


If this is how it works, I foresee a swift industry in fake pregnancy bellies for pedestrians.
posted by RobotVoodooPower at 11:13 AM on April 26, 2019 [11 favorites]


I don't even think Besla would program a car that way. That's more of a Bomma.ai move.
posted by Kikujiro's Summer at 11:14 AM on April 26, 2019 [1 favorite]


This is a slight derail, but anyone watch Netflix's Travelers? Kinda a wonderfully blown-out take on the idea of "what if we trusted AI to fix everything, but we still have to be people." If Will Smith's AI is a kinda dumb, shallow look at the issue of robot-trolleys (with problems!), Travelers is the beautiful, funny, Russian novel version.
posted by es_de_bah at 11:16 AM on April 26, 2019 [7 favorites]


Sometimes braking is right, sometimes swerving is right, etc. Cars can reasonably be programmed to do that. But no human makes decisions like "should I hit this elderly person or that child" in the fraction of a second you have to make these decisions. In fact, you don't even have time to think "brake or swerve", you have to hope your instinct is right.

The first goal is "reduce total accidents", which would be done if cars can make the "should I brake / should I swerve / should I do nothing" decision faster and better than humans. Complex ethical decisions could potentially be legislated in (I agree no company would willingly do this), but are unnecessary to see a benefit from autonomous cars.

Of course, first they need to get to a solid "better than humans" level of avoiding/reducing accidents. There's some progress on this, but the evidence is still thin (the current accident rate is pretty low, but even where they are operating commercially it's under heavily restricted conditions).
posted by thefoxgod at 11:17 AM on April 26, 2019 [6 favorites]


Are there any other industries in which an advancement is stopped by hypothetical dilemmas which were literally never faced by the thing they are replacing? I mean raise your hand if you've ever had to decide who to kill while you were driving.
posted by FakeFreyja at 11:17 AM on April 26, 2019 [29 favorites]


An example of this problem already exists. I was talking to a realtor who was driving a client around a rural property with tall grass. He turned off a dirt road to go across a field and the truck immediately locked its brakes at 20 miles an hour.

Because the collision-avoidance system saw the tall grass as a solid object. They had to get out the operating manual and figure out how to turn the system off before they could go anywhere.

That realtor is now looking for a new, older, truck.
posted by ITravelMontana at 11:18 AM on April 26, 2019 [12 favorites]


At least with a robot driver, you could be pretty sure it's not deliberately trying to kill you.
posted by rhamphorhynchus at 11:25 AM on April 26, 2019 [1 favorite]


AI won't know whether someone's a child or a group of children or a group of elderly folks or a pregnant person or whatever, because you'd have to proactively train it to recognize them, and nobody would do that because that's an insane thing to spend time on when you're trying to make a car that can even recognize people consistently at all.

It would go further. The car might check everyone's criminal background, number of unpaid parking tickets, employment history based on LinkedIn profile, number of Facebook friends, regularity of pet vaccinations, and perhaps even run your old deleted LJ poetry and filk song lyrics through text evaluation engines for objective social merit and grade-level comprehension.

The criteria for evaluating a life's worth really have no bottom for Deep Data, you know.
posted by bonehead at 11:28 AM on April 26, 2019 [10 favorites]


Previously, from me on why you'd just try to not hit anybody. The article plays this approach as avoiding liability, but I say it's more like "hard cases make bad law" applied to software.

The thing that bothers me is that the only acknowledgement of speed is the bit comparing human and computer braking distances. But if you're programming an autonomous car, another thing you have to program it to decide is how fast to travel. And these scenarios where your car is going to hit and kill someone would happen less often if cars were driving slower.
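Back-of-the-envelope, here's how hard speed dominates the picture (ballpark figures, not anybody's published stopping-distance table):

    # rough stopping distances: reaction distance grows linearly with speed,
    # braking distance with its square; all figures are ballpark
    reaction_time = 1.3   # seconds, a typical driver
    decel = 7.0           # m/s^2, roughly hard braking on dry asphalt

    for kmh in (30, 50, 70):
        v = kmh / 3.6                                   # m/s
        stop = v * reaction_time + v ** 2 / (2 * decel)
        print(f"{kmh} km/h: ~{stop:.0f} m to stop")     # ~16 m, ~32 m, ~52 m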
posted by RobotHero at 11:29 AM on April 26, 2019 [2 favorites]




Drive at full speed on a busy freeway, with a cement mixer truck tailgating you and just as you're overtaking a semi, pretend your sensors have failed and brake as hard as possible. Then have your next of kin post about how that was the safest possible thing to do. There's a lot of uncertainty around autonomous vehicles, but the one certainty is that anybody saying there's a straightforward solution is wrong.

"Safest possible" doesn't mean "never going to lead to a crash under any conceivable circumstance". Honestly, in a situation where you're on the highway and the car is suddenly completely out of control, someone is probably going to die regardless of what the default settings are, or even whether a computer or a human is controlling the car. Worst case scenarios don't usually have good outcomes.

Overall, there are a ton of problems with autonomous cars, but I feel like this trolley problem stuff is one of the smallest ones. It's not like humans are particularly good at making these decisions as is, or that these are particularly common driving scenarios that people face on a daily basis.
posted by Copronymus at 11:32 AM on April 26, 2019 [8 favorites]


I feel like any hypothetical worthwhile AI is unlikely to have worse outcomes than humans.

Except for the part where all these networked vehicles are a ripe juicy target for someone to hack into and cause massive amounts of death very quickly.
posted by emjaybee at 11:39 AM on April 26, 2019 [4 favorites]


Drive at full speed on a busy freeway, with a cement mixer truck tailgating you and just as you're overtaking a semi, pretend your sensors have failed and brake as hard as possible. Then have your next of kin post about how that was the safest possible thing to do. There's a lot of uncertainty around autonomous vehicles, but the one certainty is that anybody saying there's a straightforward solution is wrong.

Hopefully the car would be programmed with basic driving skills, and thus not be stupid enough to decide to overtake a semi while being tailgated, and thus not encounter the issue. There's a lot of things about autonomous cars that make me doubt that they'll be taking over any time soon, but basic highway driving with other vehicles in a safer manner than humans is something they do fairly well.
posted by tavella at 11:41 AM on April 26, 2019 [3 favorites]


Then have your next of kin post about how that was the safest possible thing to do.

People’s next of kin are already posting right now, about 30,000 times a year.
posted by sideshow at 11:44 AM on April 26, 2019 [17 favorites]


Autonomous cars are the personal jet pack of the 21st century.

Bookmarking this hot take for posterity.
posted by Barack Spinoza at 11:45 AM on April 26, 2019 [7 favorites]


There's a reason why trolley problems are discussed in philosophy classes and not engineering classes.

A trolley problem is proof prima facie that the engineering failed already.
posted by ocschwar at 11:49 AM on April 26, 2019 [10 favorites]


Autonomous cars are the personal jet pack of the 21st century.

I'd be a lot more bullish on the idea, but then I see that we don't even have the technology to stop a robo-vac from spreading cat vomit all over my floors.
posted by FakeFreyja at 11:50 AM on April 26, 2019 [8 favorites]


> People’s next of kin are already posting right now, about 30,000 times a year.

It is ice cold stupid that we allow human-driven cars on the roads, and if I could rearrange the timeline I'd make it so that municipalities banned the things before they really entered widespread use.

That said: even though humans are on the whole bad at driving (it's not something we're meant to do!) the fail states of human drivers are relatively well-understood and semi-predictable. On the other hand, the fail states of autonomous cars (indeed, the fail states of any of the ML gadgets running around these days) are weird, difficult to understand, and totally unpredictable. My "what if a long overflows and the car sees billions of negative people" example is facile, but the real ways that these things fail are nearly as silly, and when these failures happen the ways they act are completely bizarre if you're used to thinking of cars as controlled by big-brained animals rather than by perceptrons or whatever.
posted by Reclusive Novelist Thomas Pynchon at 11:55 AM on April 26, 2019 [7 favorites]


I think the focus on stuff like the Trolley Problem says a lot about how we tend to approach new technology. It's easy to get hung up on the details of things, often to the exclusion of the big picture. We can spend loads of time thinking about who an AI should aim for if they're unable to brake, because that's the kind of puzzle that can be fun to tease out.

Less fun: I knew a guy who was an absolutely brilliant engineer, I mean, coauthored papers as an undergrad, all kinds of impressive stuff. He was working on nanotechnology to improve the clarity of night vision. It's easy to break that down into a smaller, but no less complex task, like "what is the best physical arrangement of these nano-scale materials to optimize optical clarity." That's it. The lenses are up to someone else. The housing is handled by the materials people. The power supply is handled by someone else. All you have to worry about is the arrangement of these nano-scale materials.

At one point, someone in our group asked him what he thought about the military using that kind of technology, and he admitted he'd never considered that they'd be interested. And this isn't me knocking him, I really think he's one of the nicest and smartest people I'd ever met. It's just that these details can end up obscuring important considerations, because once you break something down into a single problem, you don't really have to think about anything else. So who ultimately handles future usage? Not the nano guy.

That's what this focus on AI ethics feels like to me. It's like the horse-race aspects of political journalism, where we all want to read about things as if we're in the room where it happens, like we've all got the insider view. So we love to ponder the ethics of AI, and the Trolley Problem, and all this stuff we realistically know we don't have much, if any, control over anyway. I don't mean this as a condemnation, because part of why this happens is that these are genuinely interesting questions. You can devote a lot of energy to them, because they're complex and thought-provoking.

It's just -- the more I see stuff like this, the more wary I am. Sure, we can wonder how a car might swerve to minimize casualties, but that feels more like a distraction from the reality that there will be accidents: you can't anticipate everything, you can't control for all variables. This isn't a pure ethical or engineering question, and it feels like there's a disconnect between the hypothetical and the real world. The way we tend to talk about this stuff makes me think that people have more faith in the AI and the cars themselves than they really deserve, and we're not going to give enough thought to the realities until we're forced to confront them in retrospect.
posted by shapes that haunt the dusk at 11:57 AM on April 26, 2019 [18 favorites]


I feel like any hypothetical worthwhile AI is unlikely to have worse outcomes than humans.

Yes. I've been rear-ended twice by people who decided to hit the gas to go around a turn while I was still waiting in front of them for traffic to clear.

I've watched an impatient jerk in a pickup truck swerve to go around unmoving traffic as the light changed to green, and hit a woman in a wheelchair who was crossing a crosswalk.

I've seen people make left turns from the right turn lane in front of other cars, on a red light.

I've seen a maniac in an exotic sportscar blast down a two-lane road at 150MPH in a 30MPH zone, and then swerve into the opposite lane to avoid normal traffic and barely avoid pulping himself on an oncoming city bus.

When I lived in Florida, there was an old woman down our street who ran over a child on a tricycle and kept going because she thought she bumped into a trash can.

I'll take the imperfect AI drivers.
posted by Foosnark at 12:01 PM on April 26, 2019 [12 favorites]


To put what I said more succinctly (read: less unfocused), I feel like these ethical questions present a view of autonomous vehicles that is unrealistic, and which distracts us from asking tougher questions now. It's not like there aren't already critical hot-takes on the subject, but I've never gotten the sense that there's much overlap between those views and the views of the people actually working on this stuff. People always talk about this in terms of its perfection, but I'm reminded of something I heard William Gibson say once: he'd never used a computer when he was writing stuff like Neuromancer, and he said he'd always imagined these perfect machines with crystalline structures humming softly -- not, as he said, these Victorian-seeming devices with their clunky hard drives and clacking keys. Possibilities are always exciting, but reality is a separate thing.
posted by shapes that haunt the dusk at 12:04 PM on April 26, 2019 [2 favorites]


The solution is super simple: hard-code a maximum speed limit of 30 km/h when there are no pedestrians around and 10 km/h when there are. Make the car stop completely if there's any doubt at all as to its ability to keep moving without harming pedestrians. Make the car unable to move unless it has actual positive data on it being safe to do so. Shape the body and select the materials of the car not to maximize speed or profits, but rather to minimize harm to pedestrians. Design cities so all points of interaction between cars and pedestrians ensure pedestrian safety, even if it's inconvenient for the cars.
Basically, act as if non car-driving humans' lives were as important as car-driving humans' convenience.
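As code, that policy is about four lines (speeds as above; the names are made up):

    # sketch of the speed cap described above; names are invented
    def speed_cap_kmh(pedestrians_nearby: bool, confident_it_is_safe: bool) -> float:
        if not confident_it_is_safe:     # any doubt at all: don't move
            return 0.0
        return 10.0 if pedestrians_nearby else 30.0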
posted by signal at 12:05 PM on April 26, 2019 [17 favorites]


I had to enable scripting in five domains before the Fine Article would even let me click on anything. Until then it happily told me I'd already made a choice and ran over one or another group. Not sure what this has to do with the Trolley Problem except to note that we put a man on the moon fifty years ago but we can't create a web page that runs all its code from a single domain. Now get your damned trolleys off my lawn. On an unrelated note:
It is ice cold stupid that we allow human-driven cars on the roads, and if I could rearrange the timeline I'd make it so that municipalities banned the things before they really entered widespread use.
So just what would you replace them with that would make everything into a unicorns-pooping-soft-serve perfect universe? Trains? Horse-drawn vehicles? Bicycles? Unless you live in Mega City 1 it seems that without human-driven cars you would be relegated to an eighteenth-century lifestyle, and I for one am glad your ability to modify the timeline has been curtailed.
posted by Gilgamesh's Chauffeur at 12:06 PM on April 26, 2019 [6 favorites]


> Unless you live in Mega City 1 it seems that without human-driven cars you would be relegated to an eighteenth-century lifestyle

I have genuinely no idea why you think that. I've lived a long while, and I've only just recently learned how to drive. I don't think I've been living an 18th-century life, but maybe I'm wrong, I dunno.

Is it because of delivery trucks? like, things will turn all 1700s if we don't have delivery trucks covering the last mile between train depots and stores? Garbage trucks? I guess things would be pretty stinky without garbage trucks, but that doesn't imply the necessity of privately owned vehicles.

Is it because of the relatively small fraction of the population that lives in rural environments? Is that why? That seems off to me, since, well, I know from experience that bikes and trains work in urban areas and rural areas.

Is it because American (and Australian) cities are designed to be difficult to live in without a car? Because that's why I'm like "man if I could rearrange the timeline" rather than "let's burn all the cars right now."

What am I missing here?
posted by Reclusive Novelist Thomas Pynchon at 12:14 PM on April 26, 2019 [10 favorites]


Basically, I don't see any way to resolve this that doesn't end up really pissing off Will Smith.

Or Sarah Gailey.

That said, pedestrian deaths by human-operated vehicles have exploded in the last ten years, pretty much wiping out a thirty-year decline. That's a lot more real than hypothetical edge cases involving a technology that isn't even fully baked yet.
posted by Naberius at 12:17 PM on April 26, 2019 [3 favorites]


While we're reorganizing the timeline, I say we keep fire trucks and ambulances. There are problems where a motor vehicle is the best solution, but I am not at all sure the benefits of personal automobiles outweigh the blood-drenched costs.
posted by bagel at 12:18 PM on April 26, 2019 [4 favorites]


Automation Transformed How Pilots Fly Planes. Now the Same Must Happen With Cars. "Until the automotive industry and regulators reconcile a cartoonish version of semi-autonomous features with the reality of how to use them safely, the future may not be nearly as safe as one might hope."
posted by showbiz_liz at 12:24 PM on April 26, 2019 [7 favorites]


In rural areas the trolley problem has been solved by deer/brush guards because the evidence says "never swerve".

Also The Trolley Problem episode in The Good Place should be required watching for all philosophy classes. Also, if the trolley is under enough control that you can push a fat guy onto the track and save everyone, then I'm thinking you could just pull on the doors to slow it down, and it says a lot about the designers of the trolley problem that they think pushing a fat guy onto the tracks should be a viable option.
posted by The_Vegetables at 12:33 PM on April 26, 2019 [6 favorites]


The most interesting thing I've recently heard regarding autonomous vehicles is the question of jaywalkers. If you know (and trust) the vehicle will not kill you, why not step into traffic to cross? The behaviour is regulated now by the fact that you don't trust the human operators to see you and halt before rendering you squished. With a self-driving car you can safely assume the sensors will detect you in traffic and the algorithm running it will not maliciously murder you for having the gall to step onto the roadway. Without a new social protocol, traffic will not move at all due to pedestrians walking wherever they like.
posted by Keith Talent at 12:33 PM on April 26, 2019 [3 favorites]


Mandatory Good Place exploration of the Trolley Problem.
posted by emjaybee at 12:33 PM on April 26, 2019 [2 favorites]


One of my Interpretations of Johnny Wallflower's Boeing 737 Max Disaster Post is that it failed partially *because* of the trolley problem. When every sensor tells the AI that safety is best found by pointing the plane's nose down to prevent the plane from stalling *AND* your pilot is telling you to point the nose up - who knows better, Boeing's AI or the Pilot?

The same thing is about to be at hand with driverless cars, except we're removing the thousands of hours of experience acquired over a lifetime of driving. Yes, the first few generations of passengers will still have driving skill, though less and less of it over time... but are the non-drivers in driverless cars going to keep up all the skills necessary to drive reliably when they have to? And when does the car decide to drive you instead of letting you drive it? When will cars insist on driving you - protecting you from self-harm - except instead drive a non-driver to their death?

So - trolley problem... who wins? Who really knows how to read and prevent a disaster?
posted by Nanukthedog at 12:34 PM on April 26, 2019


Without a new social protocol, traffic will not move at all due to pedestrians walking wherever they like.

Seems like a feature not a bug
posted by Automocar at 12:35 PM on April 26, 2019 [16 favorites]


The trolley problem is only a problem because the brakes failed. Build a trolley with redundant/backup safety systems and we won't need philosophers.
posted by rocket88 at 12:41 PM on April 26, 2019 [8 favorites]


One of my Interpretations of Johnny Wallflower's Boeing 737 Max Disaster Post is that it failed partially *because* of the trolley problem.

Well, considering that by next week more people in the US will have been run over and killed by cars since the 737 Max was grounded than were killed in the plane crashes, I'll take the comprehensive safety and accident review policies of the airline industry.
posted by The_Vegetables at 12:45 PM on April 26, 2019 [6 favorites]


Basically, act as if non car-driving humans' lives were as important as car-driving humans' convenience.

Why that’s crazy talk!
posted by Celsius1414 at 12:47 PM on April 26, 2019 [2 favorites]


Well, thefoxgod, you pretty much summed up one manuscript I currently have under review in three sentences, which is a kind of amazing trick (I just sent in the revisions a couple hours ago!). "Sometimes braking is right, sometimes swerving is right, etc. Cars can reasonably be programmed to do that. But no human makes decisions like "should I hit this elderly person or that child" in the fraction of a second you have to make these decisions. In fact, you don't even have time to think "brake or swerve", you have to hope your instinct is right."

A vast amount of both the thinking about ethical AI, and, more proximate to my own work, human visual perception, assumes that drivers are extracting, processing and using that kind of detailed information in order to drive safely. I'd argue otherwise, because (much of the time) it's not relevant to the task they're actually doing, which is avoiding a collision. For example, there's literature on how long drivers actually take between when something unexpected happens in the world and when they stomp the brakes (on average, about 1.3 seconds, per Green, 2000). So, you've got a bit more than a second to look at the world, notice that something's gone wrong, and start to brake. Which means that you don't remotely have that entire 1.3 seconds to just look at the world - you've got, maybe, half of that, because you need to actually send the signals from the brain to your foot (or arm) and that takes time, even given the speed of transmission in the nervous system.
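To make those numbers concrete, here's rough arithmetic using the figures above (the ~250 ms per eye movement is a ballpark I'm adding, not something from the manuscript):

    # rough arithmetic on the time budget described above
    total_response = 1.3                      # s, event to brake onset (Green, 2000)
    motor_delay = 0.65                        # s, roughly half goes to brain-to-foot transmission
    looking_time = total_response - motor_delay
    print(looking_time // 0.25)               # ~2 eye movements at ~250 ms apiece, tops

    v = 100 / 3.6                             # m/s at 100 km/h
    print(f"{v * total_response:.0f} m travelled before braking even begins")   # ~36 m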

So, how much can you learn from a moving, dynamic scene in a few hundred milliseconds? Note that it's enough time to make at most two eye movements, and that's assuming you get the first one off about as quickly as you can.

Turns out, 600 ms of looking at the world is plenty if I show you a clip of a near-collision or collision event in the lab and say "tell me where you'd turn." If you're in your 60s, you need 600 ms. In your 30s? 400 ms. Doing this requires a pretty detailed understanding of the world: you don't just need to know that something in the scene is a hazard to you, but you also need to know where other objects are in the scene (like other drivers, pedestrians, cyclists, immobile features of the environment): you need to detect the hazard, localize it, understand its trajectory and do the same for enough of the rest of the scene to understand where you can go. That's a lot of information, and none of it asks you to say, for instance, what color the car to your left was, because it's not relevant.

What about if you're just detecting hazards? You need even less time. Younger drivers (again, in their 30s) need 200 ms of video; older drivers need about 400 ms of video - so, you can look at the road for a fraction of a second, acquire enough information to say "yep, something's wrong" and begin to make a plan to do something about it.

So, yes, it comes down to your instincts as a driver, but your instincts are fed by your visual system, and it's remarkably good at figuring out enough of the scene, and if you don't have a reason to care what the hazard is (and you usually don't - you care where it is), it's unlikely to spend the time to help you figure it out.

This actually makes sense based on what we know about scene perception (the gist of the scene): if I show you a static image of a road scene for 100 ms and ask you if the road was blocked or not, you have no trouble getting it right (Greene and Oliva, 2009). You probably don't know where in the world that road scene is with much accuracy, but there's a lot you can get in the blink of an eye. You're also really good at noticing that something's wrong in an image (my favorite example is radiologists being shown mammograms for 500 ms, and being able to say "yes, there's an abnormality" - even though they don't know where it is, they just know something's wrong; Evans et al., 2016).

And as I've been writing this, showbiz_liz linked to a really excellent piece at Jalopnik that seems to hit the nail on the head in terms of the larger problems with understanding the behavioral side of vehicular automation. Aviation has been dealing with this for decades, and they've mostly figured out how to deal with it. They're a great place to look, but the world of aviation and the world of driving, while they're somewhat similar, aren't the same thing, and not being willing to ask "what does the driver know, how do they know it, and how does vision work" is a major problem if you want to, say, build a midlevel autonomy system that has an implicit model of what the driver does and doesn't know about the world at any given time. The question of figuring out how the driver knows what they know (which is really "how does the driver's visual system enable them to acquire information in a fast environment that isn't going to wait for them") is the focus of a different paper I've got under review, that draws on vision science research from the last 40 years to try to put a foundation under these questions.

Um. That was a very, very long comment.
posted by Making You Bored For Science at 12:47 PM on April 26, 2019 [42 favorites]


I'll take the comprehensive safety and accident review policies of the airline industry.

And to go farther, that means that in any car accident the software, driver training and accident conditions are reviewed, and perhaps all are shut down until the problem is solved.
posted by The_Vegetables at 12:49 PM on April 26, 2019 [1 favorite]


The question of jaywalkers is interesting. The main thing that keeps pedestrians from strolling out into moving traffic is the probability of death. That is, we don't trust the driver to stop in time, whether the cause is distraction or bloodlust. So our trust in our fellow motorist only goes so far -- more so when we are both protected by metal boxes, less so when our tender flesh is exposed. Will we develop a greater degree of trust towards our fellow AI cousins? It seems we'd have to for autonomous vehicles to coexist with humans.
posted by RobotVoodooPower at 1:00 PM on April 26, 2019 [1 favorite]


Here in my neighborhood in San Francisco, Cruise, a division of General Motors, has been testing their cars for months. There are usually two or three people sitting in the car as it travels around the neighborhoods. It took me a whole afternoon of phone calling to find out who to complain to about these cars in San Francisco. It’s an email address that some guy reads. I think he’s with the DMV, who are supposed to be monitoring the testing.

What did I have to complain about? Over the past few months I see these cars start from a stoplight or stop sign at an intersection, accelerate at a reasonable rate, and then in the middle of the block come to almost a complete stop for no apparent reason. I had one right in front of me head across the intersection and then stop. Again for no reason. I had to honk to get the thing moving. The guy behind the wheel always has his hands on the wheel, so I can’t tell how automated these cars actually are.

I told the DMV guy that when driving we are always making predictions about the behavior of cars and people around us. These predictions are based on reasonable assumptions learned through years of driving. One reasonable assumption is that the car in front of you is not going to stop in the middle of the block for no reason. There are never any turn signals on to suggest turning. I told the guy that these cars do not behave in a predictable manner. The Cruise office here is only an info email address. Armed with the DMV email address, autonomousvehicles@dmv.ca.gov, I can report this behavior. I’ve done it three times in the last two weeks.
posted by njohnson23 at 1:05 PM on April 26, 2019 [9 favorites]


(indeed, the fail states of any of the ML gadgets running around these days) are weird, difficult to understand, and totally unpredictable.

This is a .... fun.... plot point in Peter Watts' Rifters series.
posted by the man of twists and turns at 1:07 PM on April 26, 2019 [3 favorites]


I can report this behavior. I’ve done it three times in the last two weeks.

Get a dashcam, then continue driving as normal - and the next time this happens upload the video - and again. (Don't forget to monetize your channel, call it "Cruise Control" or something...)

Until they are publicly shamed, most companies have no incentive to act on anything. And the DMV? All they have is your emailed complaint, not exactly evidence.
posted by jkaczor at 1:13 PM on April 26, 2019 [4 favorites]


"Driving" a fully autonomous vehicle really means being a passenger in a taxi.
Think of your last cab ride. How skilled and experienced was your (human) driver? Was he impaired? Sleep deprived? Distracted? Did he drive at or below the speed limit? Did he obey all traffic laws?
Or did he speed, make rolling stops, run yellow/red lights, swerve between lanes without signaling, talk on the radio, talk to his passengers and check his phone?

Self-driving cars can't come soon enough.
posted by rocket88 at 1:19 PM on April 26, 2019 [3 favorites]


Something I find fascinating and extremely frustrating about self-driving car rhetoric is that, if you ever look at the visualizations of how smooth and efficient roads and intersections will be in our all-self-driving car future, there's never any crosswalks. Or pedestrians.

Which makes all the handwringing over the Trolley Problem extra annoying, because in every other aspect of self-driving cars, there's no thought about pedestrians.
posted by SansPoint at 1:30 PM on April 26, 2019 [8 favorites]


It is ice cold stupid that we allow human-driven cars on the roads, and if I could rearrange the timeline I'd make it so that municipalities banned the things before they really entered widespread use.

Cars are a lifesaver in many circumstances, and they bring a diversity of goods and services to places that otherwise would never have them, including a great deal of medical support.

There's just no reason to allow them to go faster than 20 MPH in any place that has pedestrians. Have them operate on freeways and specialized delivery roads, and those that might need to be in commercial-residential zones (for accessibility, emergency situations, etc.) can travel at no faster than most people can run for short bursts.

Reduce that to 15 MPH around schools, hospitals, and anywhere zoned for "high pedestrian traffic."
posted by ErisLordFreedom at 1:31 PM on April 26, 2019 [3 favorites]


Self-driving cars can't come soon enough

You have far too much faith in the nice white dudes who write your firmware.
posted by Pogo_Fuzzybutt at 1:33 PM on April 26, 2019 [14 favorites]


Making You Bored For Science - citations, please! Such as Evans et al.

Not having read your drafts of course but it seems like the use of the word 'know' here is quite slippery (which is what you are maybe looking at?). We know things. We know how to do things. (Differences here already.) In doing we pull info from our environment, process it, produce decisions and actions, but I think it's a mistake to think that machines always do this in the same way, and that they are somehow sped up humans.

Another example that comes to mind is facial recognition - is the way we spot a friend in a crowd, the same as a machine crunching through a stack of images with an algorithm?

Anyway I'm interested in this from the pov of trying to think about tacit knowing/knowledge.
posted by carter at 1:37 PM on April 26, 2019


There's just no reason to allow them to go faster than 20 MPH in any place that has pedestrians.

I daydream about cities being crisscrossed by slow, milk-float charabancs, which both serve as delivery vehicles and as transit -- they're slow enough that you just wave them to a pause to hop on and off, or get on while the delivery person is going to-the-door, or many people can just hop on at speed. And all the rest of the traffic has slowed down this much too, except emergency vehicles, and all the slow flows to the side when the emergency comes through the way protest marches do. And it's *quiet*.

These could be robo-driven, even. But I expect instead to get asshole SUV robocars defended by some horrible legal farce, like the changes in the laws that made our cities car-dominated in the first place.
posted by clew at 1:58 PM on April 26, 2019 [3 favorites]


Those Cruise cars are the worst.

I've seen them turn left onto a busy street and then just stop for no apparent reason.

Several times, when entering my parallel-parked car I've noticed a car come to a stop. I figured maybe an Uber or Lyft stopping to pick up a passenger (gripe for another time: right in the lane, not pulling over at all, etc.), but no, it was a Cruise that simply couldn't figure out what I was doing there. I mean, cool, a cautious approach and I appreciate that they didn't want to run me over, but the car following them has to deal with a completely unexpected stop and that's dangerous too.

Anyway, I don't know who these Cruise clowns are, but their tech is in no way ready for city streets and I do not remember signing up as a beta tester.
posted by sjswitzer at 2:00 PM on April 26, 2019 [3 favorites]


Another potential issue here is software updates - you get in your car one morning and head out, and it's handling in a subtly different and unidentifiable way (maybe an update to address a previous incident). How does this affect overall safety?

Kind of related: Elon Musk says Tesla will allow aggressive Autopilot mode with ‘slight chance of a fender bender’. I did check the byline for April 1 but apparently it's genuine.

And, showbiz_liz's Jalopnik link is indeed interesting and informative!
posted by carter at 2:05 PM on April 26, 2019 [1 favorite]


How often have you had to choose between killing one person and killing five with your car? The real question is whether robot cars will be safer in the typical conditions that currently kill many thousands every year. How often will your robot car get drunk and swerve into oncoming traffic? How often will it fall asleep and crash into a tree? How often will it run a red light or sideswipe a bicyclist because it was checking its phone?

If your robot car sees a pedestrian in the way, it will stop or swerve faster than any human would have been able to do, and probably no one will be hurt. That covers the 99.99999 percent of potential accidents that don't involve one fat man and a gaggle of schoolgirls waiting at a fork in the road.
posted by pracowity at 2:13 PM on April 26, 2019 [6 favorites]


If your robot car sees a pedestrian in the way, it will stop or swerve faster than any human would have been able to do

THEORETICALLY. As of right now, I'm pretty sure AVs have a higher kill count per mile than conventional cars.
posted by showbiz_liz at 2:22 PM on April 26, 2019 [2 favorites]


It's a tautology - "once we make a car that won't crash, it won't crash!" Like, sure, but that car does not currently exist. But you wouldn't know it from the way people talk about AVs.
posted by showbiz_liz at 2:27 PM on April 26, 2019 [3 favorites]


The justification for all this philosophical wankery about the Trolley Problem in self-driving cars is all "What if the brakes fail"? How often does that happen?
posted by SansPoint at 2:28 PM on April 26, 2019 [1 favorite]


In aviation we have a saying: "a good pilot uses skill and experience to avoid the situations that require them".

With that thought in mind, imagine if all the effort spent figuring out how autonomous cars should handle the trolley problem was instead put into improving redundancy of their braking systems.

Or, y'know, dealing with literally anything else actually fucking useful in the entire universe.
posted by automatronic at 2:37 PM on April 26, 2019 [5 favorites]


I’m waiting for self-driving car races.

Want to show me the cars can adjust to complex, changing conditions without crashing into each other? And not just randomly stop? It’s even a closed, controlled course!
posted by Huffy Puffy at 2:43 PM on April 26, 2019 [4 favorites]


I’m waiting for self-driving car races.

Yes, but the internal combustion engines should be retired. I would like to see electric F1 (Formula E?) cars, robots vs humans.
posted by pracowity at 2:50 PM on April 26, 2019 [1 favorite]


I feel like any hypothetical worthwhile AI is unlikely to have worse outcomes than humans.

For it to have worse outcomes than humans in terms of collisions it would have to kill more than 35,000 people per year in the USA.

However, cars, whether human- or AI-driven, will continue to hurt the planet, our cities, our health, and our quality of life. (Though an eventual transition to all cars being self-driving might make traffic flow more smoothly, AI-driven cars inherit all other car problems, plus there are some not-often-considered ways in which AI-driven cars might make traffic and carbon emissions worse.) Roadways aren't carbon-free, nor are sprawl and the other environmental effects of the car-shaped world.

Tell your politicians to prioritize transit, bikes, and pedestrian infrastructure instead.
posted by splitpeasoup at 3:16 PM on April 26, 2019 [3 favorites]


I would like to see electric F1 (Formula E?) cars, robots vs humans.

Yep, that's what it's called. Formula E, and its autonomous infant cousin Roborace. No combined races as of yet.
posted by zamboni at 3:28 PM on April 26, 2019 [4 favorites]


Philosopher checking in to remind all of society that the Trolley Problem in self-driving cars is not the classic example, with you as the driver/operator of the lever: the car is the operator, and you, the passenger, are on the tracks. So the binary decision is programming a car to “choose” between swerving and potentially harming you, the paying customer, or a scruffy pile of pedestrians who are already in the street.

I worry that the “kill gramma or the cute puppy” version is a problem that's intentionally obvious to solve via engineering (brakes, duh), which will be lauded as Mission Accomplished without the input of any, uh, actual ethicists. And then, yes, walking around a city will be seen as even more unlikely, ridiculous, dangerous and unnecessary for city planning than it already is.
posted by zinful at 3:42 PM on April 26, 2019 [1 favorite]


THEORETICALLY. As of right now, I'm pretty sure AVs have a higher kill count per mile than conventional cars.

They do, at least so far. We will have to subject an unwilling public to hundreds of billions of miles of automotive driving testing to give a statistical plausibility to the statement "self driving cars are safer" (because early data suggests quite the opposite.) And even this early, unpromising data comes largely from cars that are only partially autonomous, meaning a human driver is still watching the road at all times (or is supposed to be.)

The promise of the fully self-driving car is a canard to separate credulous Silicon Valley venture capitalists from their capital, a wasteful investment in the corrosive dream of perpetual individually-owned car culture. It will require spending tens or even hundreds of billions on R&D to (possibly) achieve a technology that (possibly) won't work much better, and will always be more dangerous than public transit - or hell, even those annoying electric scooters that at least don't kill people. But we'll keep hearing about how an AI breakthrough is "just around the corner" because Tesla and Uber press releases masquerading as articles have convinced the public we are much closer to statistically safer automated cars than we really are, and billionaires desperately want to keep selling us expensive cars and cab rides.
posted by joechip at 3:50 PM on April 26, 2019 [2 favorites]


As of right now, I'm pretty sure AVs have a higher kill count per mile than conventional cars.

I'm sure they do. But they're brand new. They'll get a lot better quickly. I don't trust them now, but I will trust them (more than I trust human drivers) when the robots have a lower kill count per mile than humans. I don't expect them to be perfect. Just better than humans.
posted by pracowity at 4:01 PM on April 26, 2019


As of right now, I'm pretty sure AVs have a higher kill count per mile than conventional cars.

If you remove scummy scammer Uber's hit, which was due to removing safeguards, and consider that the Tesla accidents were not in autonomous mode, the count changes significantly. Google (Waymo) has the biggest program, with millions of miles and almost zero fender benders, let alone a serious accident.
posted by sammyo at 4:07 PM on April 26, 2019 [3 favorites]


Waymo accident report.
posted by sammyo at 4:20 PM on April 26, 2019 [1 favorite]


I feel like some kind of Chidi reference needs to happen here somehow.
posted by jenfullmoon at 4:49 PM on April 26, 2019 [2 favorites]


From the Waymo accident report:

For every dynamic object on the road, our software predicts future movements based on current speed and trajectory. It understands that a vehicle will move differently than a cyclist or pedestrian.

So, each object is like a projectile and simple physics will deal with its motion? What about people who suddenly change their mind and shift direction? These cannonballs have a mind of their own, and physics be damned. Current speed? It’s not moving. It’s waiting for the light to change. Trajectory? The way it’s currently pointing?
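For reference, the naive version of that prediction (which is what the quoted sentence describes, as far as I can tell; the real stack presumably layers more on top) is just dead-reckoning:

    # constant-velocity extrapolation: each tracked object is treated as a projectile
    def predict_path(x, y, vx, vy, horizon_s, dt=0.1):
        steps = int(horizon_s / dt) + 1
        return [(x + vx * i * dt, y + vy * i * dt) for i in range(steps)]

    # a pedestrian standing at the light has zero velocity, so this predicts
    # they will stand there forever - no "changed their mind" in the model
    print(predict_path(0.0, 0.0, 0.0, 0.0, 1.0)[-1])   # (0.0, 0.0)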

Last week at a busy intersection here in San Francisco, I saw a car in the left lane suddenly whip out at the green light, cross in front of the car in the right lane who had to slam on his brakes, and then make a right turn.

We are not dealing with a predictable environment. It’s a low rumble of chaos. Add software and hardware into the mix, knowing how reliable they are, and the low rumble may turn into a roar.

I’ve been hearing this line about how the next AI breakthrough is coming soon for years. It hasn’t come, ever. Yup, I’m a nonbeliever.
posted by njohnson23 at 5:01 PM on April 26, 2019


"Driving" a fully autonomous vehicle really means being a passenger in a taxi.

It's not though. Taxi drivers are The Fucking Worst because they are trying to make money. If you hop in a robot taxi, that robot taxi will be owned by people who want to make money the same way. All the bad things that taxi drivers do will still be incentivized, and it's just a matter of time before they get around to more fine grained monetization. Maybe tomorrow's robot taxis won't be lying dangerous pieces of shit, but I feel like the day after tomorrow's robot taxis will.
posted by fleacircus at 5:12 PM on April 26, 2019


the day after tomorrow's robot taxis

I knew I missed an Asimov short story in there somewhere.
posted by RolandOfEld at 5:19 PM on April 26, 2019 [3 favorites]


As someone with degrees in both engineering and ethics, I felt uniquely qualified to code this. My algorithm weighs my personal convenience against environmental impact and the nonzero hazard to nonconsenting bystanders. Interestingly, it returns the same result for self-driving and manual mode. The display reads "take the bus, jerk" at all times.
posted by justsomebodythatyouusedtoknow at 5:28 PM on April 26, 2019 [13 favorites]


This whole debate is really frustrating to me every time it comes up, because it seems ignorant of the legal reality our society works under, and even to some degree misses the point of the trolley problem, creating a "dilemma" out of nothing for clicks and views.

The trolley problem is (in my view) a simple illustration of two opposing moral frameworks -

Utilitarianism, which is only concerned about the outcome and ignores the means. It's ok to murder someone, as long as you were doing it to save 5 other people. This view was advocated by philosophers like Jeremy Bentham, etc.

Deontology, which is only concerned about the means and ignores the outcomes. It's not ok to murder someone, even if you were doing it to save 5 other people. This view was advocated by Immanuel Kant.

Our society operates under a mix of both. We really don't want to be fully operating under Utilitarianism, for example. If we were, the concept of Universal Human Rights would be bunk. The government would be perfectly within its rights to identify that you had organs that could be harvested to save 4 other lives, and forcibly kill you and take your organs, because that would be the morally superior outcome.

On the other hand, we "do" operate under Utilitarianism in the case, of say, vaccines. Even if there are negative effects, say 1 death every 10 million doses, we still administer vaccines to people, because when you look at the outcomes we save so many lives.

---

For now self driving cars have to operate under the same legal framework that everyone else does. Car companies have to first and foremost "not" break the law, the same as humans.

So first we throw out the entire idea of "swerving" to another lane. Remember, legally, we must indicate for 3 full seconds before changing lanes. And we are DEFINITELY not allowed to switch to the opposing lane or mount the pavement.

Secondly, the whole idea that this scenario would occur with any regularity insults the civil engineers who built our roads. Roads are designed to give you ample time to stop for hazards assuming you are driving within the speed limit. Notice how highways can't have blind corners or buildings built right up to the edge where a human could suddenly dash out. Denser areas have correspondingly lower speed limits. If a hazard could appear, the AI could stop in time. This is why, say, in Australia, when emergency vehicles are stopped by the side of the highway, you need to slow to 40 km/h when passing them. If you were driving at 100 km/h and the suspect / officer had an altercation and stepped onto the road, you would not be able to stop in time.

If after all that, the AI still hit someone? It would already be better than the majority of human drivers. Cars with autonomous emergency braking installed already have a much lower incidence of frontal collisions, and that's with the system only allowed to work in specific conditions at present - it's estimated that if all cars were fitted with the current system, 1 million crashes a year could be avoided.
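(For what it's worth, that autonomous emergency braking is, at its core, roughly a time-to-collision check. This is a simplified sketch with illustrative thresholds, not any vendor's actual logic.)

    # simplified time-to-collision (TTC) check of the kind AEB systems use;
    # thresholds are illustrative only
    def ttc_seconds(gap_m: float, closing_speed_ms: float) -> float:
        return float("inf") if closing_speed_ms <= 0 else gap_m / closing_speed_ms

    def aeb_action(gap_m: float, closing_speed_ms: float) -> str:
        ttc = ttc_seconds(gap_m, closing_speed_ms)
        if ttc < 0.6:
            return "full_brake"
        if ttc < 1.5:
            return "warn_and_prefill_brakes"
        return "none"

    print(aeb_action(gap_m=12.0, closing_speed_ms=10.0))   # TTC 1.2 s -> warn_and_prefill_brakes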

If brakes failing were a regular feature, the first step would be to put in a second redundant brake system, not write some AI feature to choose who to kill.
posted by xdvesper at 6:16 PM on April 26, 2019 [8 favorites]



It's not though. Taxi drivers are The Fucking Worst because they are trying to make money. If you hop in a robot taxi, that robot taxi will be owned by people who want to make money the same way.


No, they don't. A robot taxi that's idle is far less of a financial drain, so the incentive to finish with the current fare and chase the next one is far weaker.

What's more, the robot taxi's owners are not the same people as the robot taxi's programmers. And the programmers know that if they program the taxi to break the traffic laws, there is no question of intent, there is no way to characterize the decision as a moment's inattention, or anything like that. If it's in the code, it's mens fucking rea.
posted by ocschwar at 7:25 PM on April 26, 2019 [1 favorite]


AVs have a higher kill count per mile than conventional cars

Probably, but it's an unfair comparison. Not many AVs are doing long distance highway trips.

The AVs will be safer for many reasons: They can "see" in the dark, in fog, in rain, in changing light conditions, in more wavelengths, and in all directions at once. They have faster reaction times. They don't get drunk, stoned, sleepy, frustrated, bored, or angry. They learn from their mistakes and from those of other AVs, constantly improving. They can run self-diagnostics and stop themselves safely if any of their sensors or control systems aren't working right.
It's no contest who will ultimately be the safer driver. Even more so once we take the unpredictable human drivers off the roads.

Also, AVs won't need as much roadway since they can drive closer to each other safely. The extra lanes can be repurposed for cyclists & pedestrians.
posted by rocket88 at 8:06 PM on April 26, 2019


"AVs will be safer".

When and for whom? I haven't seen too many AVs practicing in the winter (edit: or any season, now that I think about it) up on the dirt roads by my mother's house in NE Vermont. And what is the state of the art regarding pouring rain or even the paltry amount of snow required to obscure lane markings?
posted by Earthtopus at 8:21 PM on April 26, 2019 [1 favorite]


They can "see" in the dark, in fog, in rain, in changing light conditions, in more wavelengths, and in all directions at once

They can in theory. In practice, they cannot. LIDAR has real limitations on range, and you can improve it by boosting the output power - but as they say, do not look at the laser with your remaining eye. Bizarre reflections and anomalies still exist.

Also, let's talk about computer-based image recognition. It's hard. Did you ever wonder why sometimes a website makes you pick out pictures containing street signs from a bunch of pictures? Because it's hard for a computer to do that.

Semis are not hard to detect, and yet there are non-zero decapitated Tesla owners.

They don't get drunk, stoned, sleepy, frustrated, bored, or angry. They learn from their mistakes and from those of other AVs, constantly improving. The can run self-diagnostics and stop themselves safely if any of their sensors or control systems aren't working right.


Boeing 737s do this thing where the autopilot will cheerfully fly the plane into the dirt based on an erroneous sensor. It does this despite the stupid meat pilots turning the stupid thing off. I'm sure the firmware update that fixes it totally won't introduce new bugs - that like, never happens, man.

Think of all the stupid microsoft jokes you've ever heard. Now, when the computer crashes, it actually crashes into a barrier or a semi.

I mean, seriously, have you ever used software for anything? I've spent the past 25 years doing IT shit - I am legit terrified of AVs in the hands of capitalists and MBA scrum managers.

Anyway, there isn't even a working prototype of a general purpose autonomous vehicle. I'm not convinced it's a solvable problem.
posted by Pogo_Fuzzybutt at 8:26 PM on April 26, 2019 [15 favorites]


Anyway, there isn't even a working prototype of a general purpose autonomous vehicle.

While there are still outstanding questions about "when" it will happen, I definitely wouldn't bet against it. Up till 2015 computer scientists were saying computers may never master playing Go, no matter how fast processors became, because it was "too hard", and that all changed with deep learning algorithms. A lot can change in just a few years.

Anyway, the latest self driving demo from Tesla is pretty impressive.
posted by xdvesper at 9:01 PM on April 26, 2019


I don't understand how you think people's estimations for computer performance at Go map to performance at driving. Go's got one easily-definable board space and two types of pieces (and blank spaces). The latest self-driving demo stacks the deck in its own favor, addressing no concerns about driving in suboptimal weather, light, or road conditions.

I'm not as certain as you are that "when" is the only domain in which there are still outstanding questions.
posted by Earthtopus at 9:40 PM on April 26, 2019 [2 favorites]


Well I'm using that example to say that very smart people who are the leaders in their field have been proven very wrong in a very short period of time.

Why would the AI be any worse than humans when driving in poor conditions? Computer vision incorporating lasers, radar, and high-definition cameras is far and away superior to the wetware we have. It's an inevitability that when put to a 1:1 test self driving cars will outperform humans in the same conditions once the software is sufficiently developed. There is nothing special about the human brain - it has a high error rate and poor reaction time.
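One concrete reason multiple sensor modalities matter: fusing independent noisy estimates yields a tighter combined estimate than any single sensor can give. A textbook inverse-variance fusion sketch (toy numbers, not any real perception stack):

```python
# Toy inverse-variance fusion of independent range estimates from two sensors.
# A real stack would run Kalman or particle filters over full object tracks,
# not fuse a single scalar; this only illustrates the variance-reduction point.

def fuse(estimates):
    """estimates: list of (measured_range_m, variance_m2). Returns (fused, variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * r for (r, _), w in zip(estimates, weights)) / sum(weights)
    return fused, 1.0 / sum(weights)

radar = (42.0, 4.0)    # radar: decent range accuracy in rain, ~2 m std dev here
camera = (45.0, 9.0)   # camera: worse depth estimate, ~3 m std dev here
fused, var = fuse([radar, camera])
print(f"fused range {fused:.1f} m, std dev {var ** 0.5:.2f} m")
# The fused variance (1/(1/4 + 1/9) ~= 2.77) is smaller than either input's.
```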
posted by xdvesper at 10:14 PM on April 26, 2019 [1 favorite]


So, you know how amusement parks have rides that go around on tracks? We should have automated tracks in the cities. Like, you drive in, and *click*, the street grabs your car, and then you pick where you want to go, and the street guides you there, while keeping everyone moving at a rational speed, and everyone can go back to looking at their phones, trimming their beards, putting on lipstick, and the myriad of other things everyone on the damn road seems to be doing.
posted by SecretAgentSockpuppet at 10:19 PM on April 26, 2019 [5 favorites]


I believe the issue is about fundamental barriers: the complexity class of Go is not even fully understood. Depending on the ko rules it could be "almost" in PSPACE, PSPACE-complete, or even EXPSPACE-complete; nobody knows exactly which class it belongs to.

The lesson of AlphaZero is that what is formally computationally intractable can turn out to be perfectly tractable in practice. You can run it on a single computer containing some proprietary neural network processing units. This practical reality in the face of theory is a really big deal. It's what happened with SAT as well: NP-complete problems are formally thought to be intractable (at least, that's the consensus around P vs NP), but nowadays NP-complete instances of practical interest can be, and routinely are, solved, thanks to Moore's law in the late 90s/00s and clever algorithmic heuristics (of course, the "in practice" argument still involves some subjectivity, which is one source of disagreement about that consensus).

Researchers have shown that different types of robotics problems range from P to PSPACE to NEXPTIME to double-NEXPTIME to undecidable; which complexity class applies depends on the specifics of the problem.

So apparently there's still a lot that computer science doesn't understand about robotics at this foundational level. But the lesson of SAT and of machine learning (and even of theorem proving/decidability, if you look at some of the practical attempts at program termination analysis) remains: the theory said something would be hard, or literally impossible, but for all human practical purposes that was not the case.
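To make the SAT point concrete, here is a toy DPLL search with unit propagation. It is nothing like a production solver (MiniSat, CaDiCaL and friends add clause learning, watched literals, and restart heuristics on top of this skeleton), and the instance is trivial, but it is the basic machinery that those heuristics turn into something that chews through real industrial instances:

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None if unsatisfiable.
    Clauses are lists of nonzero ints; a negative int is a negated literal."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                       # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:   # literal already true
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:                 # every literal false: conflict
                return None
            if len(lits) == 1:           # unit clause forces an assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                # every clause satisfied
    var = abs(clauses[0][0])             # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1: True, 3: True, 2: False}
```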
posted by polymodus at 10:27 PM on April 26, 2019


Typo: "thanks to Moore and heuristics" goes at the end of the previous paragraph, to mean that computers routinely solve SAT thanks to Moore's law in late 90's/00's and clever algorithmic heuristics.
posted by polymodus at 10:35 PM on April 26, 2019


It's an inevitability that when put to a 1:1 test self driving cars will outperform humans in the same conditions once the software is sufficiently developed

This viewpoint is both true, and completely irrelevant in the context of the real world.

The only thing that's truly inevitable is that the software will be put on the roads long before it's sufficiently developed. Because that's already happening.

Once it's already there, and making money for the people who put it there, there is then little incentive for them to develop it any further at all.

For them, the definition of "sufficiently developed" is that they can get away with putting it on the roads and making money off doing so. That is the only standard that will actually be hit in practice, and the efforts to do so will involve far more lobbying, bribes and shenanigans to lower the standards than they will engineering to meet them. You only need one glance at the current state of government to see how effective that approach will be.

Furthermore, once it's out there and "certified" - whatever that comes to mean - the incentive will gradually become to not change it, because doing so will require lots of expensive testing and paperwork, and carry the risk of being the guy who broke it.

So the point at which we as a society first accept the technology as "good enough" is critical, because once it gets out there, progress will slow down drastically and the processes of ossification and cost disease will set in against further improvement.

Having set out that context, let's get back to the trolley problem. Because the relevance of the trolley problem here is not the questions it poses. The questions are almost completely irrelevant. The point of the trolley problem here is that we're talking about it. The proponents of the technology seem to have successfully convinced a great many people that this is now a central dilemma for autonomous vehicles. And I don't think we should accept that at all.

If the car is facing the trolley problem it has already unacceptably failed. When debating what it should do at that point, we are implicitly allowing the premise that we should be expecting cars to get into these situations at a frequency sufficient for us to care how their software handles them. And we should not be doing so.

The challenge for the technology is not how to handle the trolley problem scenario. The challenge is to not get into those situations in the first place.

And likewise, the challenge for us is not to handle the trolley problem scenario. The challenge is to avoid being dragged into this stupid distraction of a debate.
posted by automatronic at 2:12 AM on April 27, 2019 [14 favorites]


Think of all the stupid microsoft jokes you've ever heard. Now, when the computer crashes, it actually crashes into a barrier or a semi.

All airliners, airlines, and air traffic control systems run on software. All trains are controlled by software. All the electronic equipment doctors use to make decisions about you is just computer hardware and software. All your money is just blips in a database. All the world's biggest armies, navies, and air forces and all the world's nuclear weapons are controlled by software. Software installed Donald Trump.

If you're afraid of software running things, robot cars are among the least of your worries. Those things already drive better than half the people on the road.
posted by pracowity at 3:23 AM on April 27, 2019 [4 favorites]


The main thing that keeps pedestrians from strolling out into moving traffic is the probability of death.

This is a uniquely American viewpoint. And not even consistent across the entire country.
posted by eviemath at 4:37 AM on April 27, 2019 [5 favorites]


There is nothing special about the human brain - it has a high error rate and poor reaction time.

You are Isaac from The Orville and I claim my $5
posted by some loser at 1:53 PM on April 27, 2019 [1 favorite]


How much rare earth material is going into these potentially billions of sensors, let alone the main grid?

And what about the traffic light consortium?
posted by clavdivs at 9:51 PM on April 27, 2019


How much rare earth material is going into these potentially billions of sensors, let alone the main grid?

Just buy fewer phones and flatscreen TVs per year
So that people get outside more
posted by polymodus at 11:01 PM on April 27, 2019


No, they don't. A robot taxi that's idle is far less of a financial drain, so the incentive to finish with the current fare and chase the next one is far weaker.

Ask factory workers about how all that robotics and automation stuff has led to them enjoying 20-hour work weeks at full pay.
posted by chrominance at 8:37 AM on April 28, 2019


I can't go outside, I'll get hit by a trolley, is the conclusion I've come to.
posted by RobotHero at 8:58 AM on April 28, 2019 [4 favorites]


I can't go outside, I'll get hit by a trolley, is the conclusion I've come to.
posted by RobotHero


I have a feeling you’ll be safe.
posted by Celsius1414 at 5:24 PM on April 28, 2019 [2 favorites]


I do have to say that auto-parking that car at the end of the article was incredibly satisfying.
posted by Mchelly at 9:05 AM on April 29, 2019


"You've had a long day killing. Now you can rest."
posted by RobotHero at 9:11 AM on April 29, 2019 [1 favorite]


I'm late to the game here... Anyway, I got so fed up with the uselessness of the Trolley Problem that I wrote up a blog post in Communications of the ACM about it, suggesting many other research problems that people should be studying with respect to autonomous vehicles.

https://cacm.acm.org/blogs/blog-cacm/236606-is-the-trolley-problem-useful-for-studying-autonomous-vehicles/fulltext
posted by jasonhong at 7:13 PM on May 2, 2019 [3 favorites]


That's a good list of questions, many of which are more likely to have practical consequences.

If it becomes universal that cars are aware of their specific location within a city, I wonder if parking and road tolls should be folded together. Like, right now, if your car is downtown, you're charged money only if it isn't moving, but it's occupying just as much space if it's moving, isn't it?
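A toy sketch of what folding the two charges into a single per-minute space-occupancy fee might look like; the zones and rates below are entirely made up:

```python
# Toy unified "space occupancy" charge: one meter for both driving and parking.
RATES_PER_MIN = {
    ("downtown", "moving"): 0.10,
    ("downtown", "parked"): 0.05,
    ("outer", "moving"): 0.02,
    ("outer", "parked"): 0.01,
}

def occupancy_charge(log):
    """log: list of (zone, state, minutes) segments for one vehicle's day."""
    return sum(RATES_PER_MIN[(zone, state)] * minutes for zone, state, minutes in log)

day = [("downtown", "moving", 25), ("downtown", "parked", 120), ("outer", "moving", 30)]
print(f"${occupancy_charge(day):.2f}")  # 25*0.10 + 120*0.05 + 30*0.02 = $9.10
```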
posted by RobotHero at 7:51 PM on May 2, 2019


I assume -- in cities that charge for parking but not driving -- that the right to use the public space inheres in a person, not in a vehicle. So a self-driving car with no passenger should pay on Shoupian principles.
posted by clew at 10:33 PM on May 3, 2019


Yeah, okay. I guess I was assuming the main reason we usually charge for parking and not moving was the practicalities of doing so. But yeah, once nobody's inside the car, you're taking up space for your possessions in addition to your person.
posted by RobotHero at 10:51 AM on May 5, 2019

