Kant Stop The Trolley
July 30, 2013 5:01 PM

 
And airplanes kill different people than those who used to die in train accidents. It is an interesting problem, but it seems to require some sense of intentionality to be ethically challenging. If it is just different because accidents change in nature, that seems rather straightforward.
posted by blahblahblah at 5:05 PM on July 30, 2013 [4 favorites]


This is an article about illustrating a particular set of philosophical concepts. It has nothing to do with autonomous cars.
posted by kagredon at 5:12 PM on July 30, 2013 [9 favorites]


This seems like a bunch of existing philosophical arguments with self-driving cars substituted for whatever choice you have to make.
posted by GuyZero at 5:13 PM on July 30, 2013 [1 favorite]


Wait, sorry, reading on my phone and I didn't realize that the article continued past the author's bio. Reading the rest of it now.
posted by kagredon at 5:14 PM on July 30, 2013


Okay, finished, and I stand by my first comment. The only scenario he actually details as being unique to automated cars (the car driving you into a tree because of people playing chicken) is one that would likely be fatal in a regular car, because it's quite likely you would swerve into the tree on instinct (not as a "choice").

I'm reminded of this xkcd.
posted by kagredon at 5:19 PM on July 30, 2013 [2 favorites]


But what if the existence of autonomous cars is the only thing keeping us from having to push an enormously fat man onto the train tracks? What then, ethicists?
posted by Copronymus at 5:19 PM on July 30, 2013 [12 favorites]


geez guys just take a taxi.
posted by The White Hat at 5:26 PM on July 30, 2013 [1 favorite]


But not a robot taxi.
posted by His thoughts were red thoughts at 5:29 PM on July 30, 2013 [2 favorites]


Actually, a robot taxi would probably be less ranty about politics/religion/immigrants/women/lizard people. So there's a tick in the 'plus' column.
posted by His thoughts were red thoughts at 5:31 PM on July 30, 2013 [3 favorites]


This is an article about illustrating a particular set of philosophical concepts. It has nothing to do with autonomous cars.

Yeah, there were a few interesting bits, but god damn. When he got to the point about the unknowable ethics and counterfactuals of releasing WordPerfect software, I stopped paying attention to the words and started wondering how philosophers like this ever muster up enough certainty to tie their shoes in the morning.
posted by crayz at 5:32 PM on July 30, 2013 [5 favorites]


Here's a more interesting quandary presented by autonomous cars: when your automated car screws up and causes damage, who is liable, you or the manufacturer?
posted by Edgewise at 5:33 PM on July 30, 2013 [1 favorite]


when your automated car screws up and causes damage, who is liable, you or the manufacturer?

The insurance company, same as now.
posted by GuyZero at 5:33 PM on July 30, 2013 [2 favorites]


Whose insurance company? Yours or the manufacturer's?
posted by Hairy Lobster at 5:38 PM on July 30, 2013


Actually, a robot taxi would probably be less ranty about politics/religion/immigrants/women/lizard people. So there's a tick in the 'plus' column.

They can still be pretty irritating.
posted by ActingTheGoat at 5:44 PM on July 30, 2013 [5 favorites]


I think that 'Will my autonomous car willingly drive me off a cliff in order to avoid a head-on collision with a bus full of school children?' is a deeply interesting question, and one that's only likely to be settled by very expensive lawsuits and very detailed legislation or industry standards.

Airplanes have auto-pilot and there are collision warning systems, but do the auto-pilot systems take input from the collision warnings?
posted by jacquilynne at 5:50 PM on July 30, 2013


The non-identity problem does not arise because (at least initially) the vast majority of lives saved are going to be the lives of people who would otherwise have died by gross driver negligence (falling asleep, running red lights). These are "hey, don't disconnect the brakes on the trolley" problems.
posted by justsomebodythatyouusedtoknow at 5:51 PM on July 30, 2013 [5 favorites]


His thoughts were red thoughts: "Actually, a robot taxi would probably be less ranty about politics/religion/immigrants/women/lizard people. So there's a tick in the 'plus' column."

Well, that might not be strictly true, given past events like image recognition algorithms that fail to find black people.
posted by pwnguin at 5:56 PM on July 30, 2013 [1 favorite]


"In other words, they arguably owe their very existence to our reckless depletion policy."

Funny thing about a sentence containing the term 'arguably': you can figuratively crap all over it.

"It’s one thing when you, the driver, makes a choice to sacrifice yourself. But it’s quite another for a machine to make that decision for you involuntarily."

Why is a random choice made by nature (that butterfly leads to a society in which idiot gangs prowl the streets looking to make people's cars kill them) more acceptable?

I've had more near misses from pops and gramma in their land barge than from any amount of road rage.
Given the way the laws look on taking seniors' licenses and the state of public transportation, I'll happily roll the dice on robot cars.

"These same philosophical problems exist in any form of engineering"

See, that's an interesting point. I don't know wheretf this FPP guy was going, but how it's going to impact us socially is going to be interesting. Much like the written word (Neil Postman gets into this). Any new technology is going to change legal issues, the human landscape, etc.

What strikes me though is why we'll think robot cars are somehow worth the trade-off in efficiency. The only real value a personal car has is the ability to go the distance (or off-road) or directly to your destination with the illusion of autonomy.
Once the illusion of autonomy is gone, that "freedom of the road" thing, what then?
Why not just build a goddamn workable public transportation system instead of having every single car on the road with its own brain?

Ah, either way cars still won't see motorcycles.

....actually, how would a machine deal with riding in formation? Would it figure out/anticipate the flocking behavior or would it expect each bike to act autonomously?
I suppose, whatever the algorithm, it'd self-correct fast enough to open the doors on you if you split lanes.
posted by Smedleyman at 6:11 PM on July 30, 2013


Speaking as a software engineer, the day autonomous cars are legal is the day I stop driving and build a thick brick wall between my house and the street.

Have you SEEN the software out there?
posted by DU at 6:20 PM on July 30, 2013 [5 favorites]


DU -- automotive software is not the same as, say, enterprise software. There's an entirely different philosophy to it, and there's usually a healthy dose of formal verification, watchdog software, triple redundancy, election correctness, etc. How many ABS systems have you seen fail in the real world? How about airbag systems? It's remarkable how much of it they get right, considering how easily it could go wrong.
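
For the curious, the voting half of that reduces to something like this toy Python sketch (purely illustrative; real automotive voters are lockstep hardware, not scripts):

    # Toy triple-modular-redundancy voter: three independent channels
    # compute the same value, and a single faulty channel gets masked.
    def tmr_vote(a, b, c, tolerance=0.01):
        if abs(a - b) <= tolerance:
            return (a + b) / 2
        if abs(a - c) <= tolerance:
            return (a + c) / 2
        if abs(b - c) <= tolerance:
            return (b + c) / 2
        raise RuntimeError("no two channels agree -- fail safe, don't guess")

    print(tmr_vote(3.14, 3.14, 42.0))  # -> 3.14; the glitchy channel is outvoted

One bad channel gets outvoted instead of taking the system down, and if no two channels agree you fail safe rather than pick a winner.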
posted by spiderskull at 6:28 PM on July 30, 2013 [3 favorites]


How many ABS systems have you seen fail in the real world? How about airbag systems? It's remarkable how much of it they get right, considering how easily it could go wrong.

Well, just two years ago in Australia, a Hyundai was sent for ANCAP safety testing and the airbag misfired in the side impact test. But there was enough redundancy in the safety systems to still score the maximum 5-star safety rating despite the misfire.
posted by xdvesper at 6:42 PM on July 30, 2013 [1 favorite]


DU -- Have you seen the current wetware controlling those things? You know the GC latencies are bad in Java, but the "Florida Grandma" VM has stop-the-world pauses in the tens of seconds!
posted by PissOnYourParade at 6:43 PM on July 30, 2013 [7 favorites]


Humans fail, but they fail in ways humans are used to. Computers fail in ways humans are not used to. That alone breaks the fundamental rule of automotive safety: be predictable.

As for ABS or airbag systems: Do you have any concept of how much harder the problem of safely navigating the real world is than "when do I deploy the airbag"? At least 3 orders of magnitude, probably more like 5.
posted by DU at 6:47 PM on July 30, 2013


Airplanes have auto-pilot and there are collision warning systems, but do the auto-pilot systems take input from the collision warnings?

No, they don't. It is the pilot's job to fly the avoidance maneuver. Autopilot and auto-flight systems in general are designed to reduce pilot workload, not to make the planes autonomous. Pilots are taught to always fly the avoidance maneuver, though, because doing so should not put the aircraft in danger, whereas there is a whole host of ways that a pilot second-guessing TCAS can cause a collision.
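
If it helps, the division of labor is roughly this (a toy Python sketch, nothing like real avionics code; all the names are invented for illustration):

    # Toy model of the handoff described above: TCAS only advises,
    # the autopilot never acts on the advisory, and the human flies it.
    class Autopilot:
        engaged = True
        def disengage(self):
            self.engaged = False

    def handle_resolution_advisory(advisory, autopilot):
        autopilot.disengage()              # pilot takes manual control
        return "pilot flies: " + advisory  # flown as given, no second-guessing

    ap = Autopilot()
    print(handle_resolution_advisory("CLIMB, CLIMB", ap))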
posted by kiltedtaco at 7:21 PM on July 30, 2013


Also, life critical software is already all around us. We don't notice it most of the time though, because it doesn't fail.
posted by kiltedtaco at 7:26 PM on July 30, 2013 [3 favorites]


God, philosophers can be tiresome. Jacquilynne identified the only part of the entire grueling article that presented anything of actual interest: will autonomous cars deliberately send you off the road and over a cliff (or similar) rather than cause a crash with a bunch of other people?

But I suspect the answer would be prosaic and less dramatic than the author hopes. Something like "the cars can stop well enough and are safe enough that the impact with another vehicle will be quite survivable, so you'll stay on the road."

The real issues are legal liability issues not philosophical ones. Sorry, philosopher guy.
posted by Justinian at 7:30 PM on July 30, 2013 [3 favorites]


Philosophy is awesome, but this wasn't. Of all the technologies coming around the bend, the self-driving car seems the least philosophically interesting.
posted by voltairemodern at 8:05 PM on July 30, 2013


Speaking as a software engineer, the day autonomous cars are legal is the day I stop driving and build a thick brick wall between my house and the street.

Have you SEEN the software out there?


Whether you blame shitty developers, shitty managers, or a general culture of shittiness and low standards, the fact that most software is shitty doesn't mean that good software is impossible. Other engineering disciplines have been doing reliable work for a long, long time. It's a solved problem. It's just that, right now, no one in software development wants to, or can, take the time and pay for it.
posted by gd779 at 8:08 PM on July 30, 2013 [1 favorite]


“robot” or automated cars (what others have called “autonomous cars”, “driverless cars”, etc.)

How about we just go with "Autobots" and call it a day?
posted by Strange Interlude at 8:10 PM on July 30, 2013 [2 favorites]


Of all the technologies coming around the bend, the self-driving car seems the least philosophically interesting

That's because you haven't pondered the philosophical implications of sintered brake pads.
posted by Monday, stony Monday at 8:14 PM on July 30, 2013 [1 favorite]


I think that 'Will my autonomous car willingly drive me off a cliff in order to avoid a head-on collision with a bus full of school children?' is a deeply interesting question

A similar question made for one of the more memorable episodes of Knight Rider. Except there was no schoolbus, just a game of chicken between K.I.T.T. and the earlier prototype K.A.R.R. (voiced by Peter "Optimus Prime" Cullen).

K.A.R.R. was expecting K.I.T.T. to swerve, because it knew that, unlike its own self-preservation routines, K.I.T.T. had been programmed to protect human life. What K.A.R.R. didn't realize was that The Hoff had taken full control of the wheel and was going to refuse to swerve, forcing K.A.R.R. to trigger those self-preservation routines at the last second, without enough time to process all the relevant data, sending K.A.R.R. sailing right off a cliff.
posted by radwolf76 at 8:20 PM on July 30, 2013 [2 favorites]


This is a terrible example:

On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing “chicken” with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending it into a ditch or a tree and killing you in the process.

...because if your speed, even after some unknown period of maximum braking, was high enough to kill you when you hit the tree or whatever, then you were also virtually certain to die when you hit the other car or the bus.

I think that 'Will my autonomous car willingly drive me off a cliff in order to avoid a head-on collision with a bus full of school children?' is a deeply interesting question

I think it's less interesting, because the actual decision process would probably look much more like this if anthropomorphized:

(1) Hey! Big thing in the road! Brake and swerve or my passenger will die!
(2) Off the road now! Keep braking! Save my passenger!
(3) Shit! Didn't make it! Going off a cliff!

Which is a very different ethical story than one where it's "willfully" sacrificing you for the greater good. THE GREATER GOOD. It's just trying to save you, and failing, because it's not hard to imagine that road travel will always involve some probability of being caught in a situation where your death is, at that point, unavoidable.

Also too and as well, in all likelihood one of the things that an autonomous car would do is restrict its speed so that it always has a 99.99... percent chance of being able to safely stop when an object in the path is detected. So the autonomous car would have taken the road at a speed such that it could see the school bus and just boringly pull over and stop.
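
Back-of-the-envelope, that speed restriction is just braking physics (Python; the 100 m sensor range, 7 m/s^2 braking, and 0.2 s processing delay are numbers I made up for illustration):

    import math

    # Fastest speed at which the car can still stop fully within what
    # its sensors can see: solve v*t + v^2/(2a) = d for v.
    def max_safe_speed(sight_distance_m, decel_ms2=7.0, delay_s=0.2):
        a, t, d = decel_ms2, delay_s, sight_distance_m
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # m/s

    v = max_safe_speed(100.0)                    # sensors see 100 m ahead
    print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # ~36.0 m/s (~130 km/h)

So "never outdrive your sensors" still allows ordinary highway speeds on an open road; it only gets boring near blind corners and school buses.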
posted by ROU_Xenophobe at 8:36 PM on July 30, 2013 [9 favorites]


Double also, even a maximally intelligent autonomous car wouldn't kill you to avoid hitting a school bus just to save the kids, as it would know that an impact between a school bus and a passenger car is likely to leave the kids with minor injuries while they have to pick your minuscule remains out of the bus's grille using dental tools.
posted by ROU_Xenophobe at 8:40 PM on July 30, 2013


And here's the other relevant Simpsons reference about robot-controlled cars.
posted by obscure simpsons reference at 8:55 PM on July 30, 2013


Double also, even a maximally intelligent autonomous car wouldn't kill you to avoid hitting a school bus just to save the kids, as it would know that an impact between a school bus and a passenger car is likely to leave the kids with minor injuries while they have to pick your minuscule remains out of the bus's grille using dental tools.

Yeah, see I think these were the kinds of scenarios that the article was groping towards and failing to really articulate. I think there is something there, though the author didn't really get at it in his post.

Like, is the first duty of the robot car to its owner/occupant? Flip the scenario: say your robot SUV is bearing down on a skidding motorbike. Swerving puts you in the path of a semi in the lane of oncoming traffic, but speeding up will allow you to roll right over that pizza delivery guy such that it's likely the airbag needn't even deploy. What's the car do then?

Or: Is the responsibility of the driver of a robot car akin to that of a passenger or that of an airline pilot? Even though automated systems do much of the routine flying, the guys in the winged caps get paid the big bucks to be the emergency back-up --- to always have someone paying attention and poised to take control. If the robot car driver is supposed to do that as well, or else be liable for accidents that occur while he's in the seat, then the robot car itself becomes considerably less compelling as a technology.

But if the driver is really just an occupant, then much more of the moral calculus will be left in the hands of the car --- I mean, I don't know dick about planes and I'm sure there's plenty of situations in which things can go from Totally Fine to Totally Fucked pretty quickly, but for the most part you've got a lot more room to maneuver up there. A driver on a crowded highway can have tenths of a second to react to an obstacle; often, even when you do see the doom right away, there's not enough time to avoid it. Would you still be liable for drunk driving in a robot car, if it had a malfunction and hit something and those two martinis at lunch prevented you from reacting in time? What if it wasn't martinis making you fuzzy but pain meds for your cancer treatment or eyedrops from the ophthalmologist? If you're having a heart attack, can you verbally order the car to break the speed limit and get you to the hospital ASAP?


Obviously I'm spitballing here just as much as the author was. But while I wasn't convinced that his Final Destination philosophy problem or bus-full-of-nuns quandary was all that compelling, I do think he has something in that driving in particular is maybe the first really big challenge that requires entrusting human lives to the judgement of an artificial intelligence.
posted by Diablevert at 10:24 PM on July 30, 2013 [1 favorite]


> "Self-driving cars are about reducing human input to zero in the driving process. I don't want one until I can safely get in one loaded to the gills on Vodka."

This is why I think the dream of self-driving cars is a futile one. Can you imagine the liability nightmare if people were legally allowed to do this? I mean, heck, you can't be a train conductor unless you're sober, and those things are 99% automated and go along a single axis. At best, I expect "self-driving cars" to always have a manual override and only go into full-auto mode on highways. (If they even get there at all. I feel like people vastly underestimate the complexity of AI required for these vehicles to be street-safe in all scenarios.)
posted by archagon at 2:23 AM on July 31, 2013


I feel like people vastly underestimate the complexity of AI required for these vehicles to be street-safe in all scenarios.

I think it's the inverse, actually; people vastly overestimate the competence of human drivers. Human drivers routinely fail at the "attentively monitor everything in a 50-yard radius of you" and "adjust your acceleration to anticipate far-off obstacles" and "clearly indicate what you're planning to do" tasks that even the baby AIs have down cold.
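
The "adjust your acceleration to anticipate far-off obstacles" part really is the solved bit -- a bare-bones time-gap follower is about this complicated (Python sketch; the gains and numbers are invented):

    # Hold roughly a 2-second gap behind the vehicle ahead. Real adaptive
    # cruise controllers are fancier, but this is the core of the idea.
    def follow_accel(gap_m, own_speed_ms, closing_speed_ms,
                     target_gap_s=2.0, k_gap=0.3, k_close=0.8):
        desired_gap = target_gap_s * own_speed_ms
        gap_error = gap_m - desired_gap
        # positive -> speed up, negative -> brake
        return k_gap * gap_error - k_close * closing_speed_ms  # m/s^2

    # 30 m behind at 25 m/s while closing at 5 m/s: well short of a
    # 2 s gap, so the controller calls for firm braking.
    print(follow_accel(gap_m=30.0, own_speed_ms=25.0, closing_speed_ms=5.0))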
posted by kagredon at 2:53 AM on July 31, 2013 [1 favorite]


The baby AIs are good for dealing with baby scenarios. We don't live in an ideal world. Some roads run through residential areas that require driving below the speed limit and stopping on a dime. Some roads require driving in reverse to let other drivers through (two-way, single-lane cliffside roads in Hawaii, for example). Some roads require directions from a human traffic controller. Some roads are barely roads at all. For the general case an AI may be better than a human driver, but every case? I don't think so, unless we massively restructure every road in the country and replace every car with a self-driving version. (And that's to say nothing of spreading the technology around the world. I mean, half of Europe still drives stick FFS!)

Plus, there are new, dangerous scenarios created by fully automated cars. What happens if you're driving down a sketchy road at night and a bunch of dudes surround your car to rob you? You'd simply have no way out!

In my opinion, any self-driving car will have to have a manual override. I just don't see a way around it.
posted by archagon at 3:02 AM on July 31, 2013


Yes, it is true, we can tie ourselves into pretzels coming up with edge cases where an experienced driver who already knows the particular road well will beat a robot. That doesn't mean that automated cars won't do better in the routine "baby scenarios" that account for millions of accidents every year.
posted by kagredon at 3:09 AM on July 31, 2013


Yeah, sure, but they'll still need to handle those edge cases. You can't just ignore them if you want full automation without manual override. And that's going to require massive legal reform, widespread infrastructure changes, and probably significant gains in AI research.

I mean, it's not even about road safety — it's about whether a fairly dumb algorithm (by human standards) can cope with the myriad complexities of a human-designed and lived-in environment.
posted by archagon at 3:51 AM on July 31, 2013


While I think autonomous cars (or, heck, even just a good cruise control that keeps the car moving in its lane without hitting anything) would be neat and something I'd want, I do wonder how we'll get past the social and legal hurdles.

Insurance will be interesting -- I would think that, at least at first, the auto manufacturer would offer some sort of arrangement where they handle the insurance as part of the package while the car is in full autonomous mode, or insurance companies would offer discounts based on how much you let the computer drive.

Fully autonomous, self-driving trucks will probably be among the first vehicles on the road -- why pay for a driver if one's not needed? Though I can imagine that said trucks would be popular theft targets -- just make one stop, loot it, then let it continue to its destination empty -- possibly mandating armed guards on certain cargo.

But, at least at first, I expect more of the scare-mongering that I took away from this article. If you weren't aware that it was a boilerplate philosophical argument with "autonomous cars" substituted for personal computers, airplanes, telephones, trains, printing presses, and probably all the way back to iron, bronze, and rocks, you could mistake the arguments for real objections. I know if my mother were to read this article, it would push her fear buttons so hard she'd never even consider owning one, even as she nears the age where she may soon not be able to drive herself safely.

I also note that someone at Wired really, really wants you to read this article -- it's listed no fewer than 5 times in different sections of the main splash page, more than any previous article I've seen on that site, which makes me wonder whose agenda drove that editorial decision. Someone wanted to object to self-driving cars in a big way, and pulled strings to ensure maximum exposure to that objection.
posted by Blackanvil at 9:29 AM on July 31, 2013


Speaking as a software engineer, the day autonomous cars are legal is the day I stop driving and build a thick brick wall between my house and the street.

Have you SEEN the software out there?


The cars are already legal, at least in the Bay Area. They've already driven 500,000 miles accident free, and some Google employees commute to work in them. I remember reading somewhere that for the cars to be cleared for wider release, they have to be driven several millions of miles, and Google is just waiting until that's the case to expand the program.
posted by ultraviolet catastrophe at 9:44 AM on July 31, 2013 [2 favorites]


A reasonable solution may be to have dual-mode cars in which the car can be set for full-autonomous or human-driver mode prior to each outing. That would allow high-volume commuter routes to be fully automated.

Or, as the first part of the article suggests, the autonomous system would be available in certain situations -- say, on a highway or major road -- and would try to relinquish control when leaving that environment. I think Google is trying to shoot the moon here with an end-to-end solution; meanwhile, there are plenty of more incremental efforts with more limited scope that seem more likely to define practical "boundaries" for autonomous vehicles.

I think the edge cases that people are up in arms about are sort of a two-way street -- assuming that autonomous cars become fairly widespread, it seems likely that things like road design would adapt to eliminate these. In fact, I see a major long-term virtuous cycle where the increased predictability of autonomous vehicles spurs and feeds on an increased consistency in road conditions.
posted by bjrubble at 10:54 AM on July 31, 2013 [1 favorite]


I think the edge cases that people are up in arms about are sort of a two-way street -- assuming that autonomous cars become fairly widespread, it seems likely that things like road design would adapt to eliminate these. In fact, I see a major long-term virtuous cycle where the increased predictability of autonomous vehicles spurs and feeds on an increased consistency in road conditions.

I mean, I suppose any car accident is an edge case in terms of how many miles people drive without getting in one. But I think waving away the concern about edge cases because edge cases are rare misses the whole thing that's interesting about the problem, socially and morally: you're handing over absolute control of a deadly weapon to a machine. I mean, perhaps that seems like a terribly overblown way of stating it, but that's what's concerning. If the car is to be fully autonomous in any situation, that means it will be on the car to deal with it when something bad happens, whatever form that bad thing takes. This will require the engineers of these cars to make design choices that are fundamentally moral in nature, choices that will have to account algorithmically for the value of human life and weigh it against the other design requirements of operating the vehicle.

Of course, one might say, hey, we've been making moral choices about the acceptable range of risks to human life posed by design ever since the first caveman strung up the first vine bridge. But the risk that a bridge cable might snap or a wall fall down under such-and-such a stress is a static decision, whose moral weight ultimately rests with the human who made the bridge or the wall. And if they fucked up, they can be held accountable.

An autonomous car must assess and react to novel situations again and again. It must decide upon a course of action and undertake it. Is it, then, responsible for whatever havoc it wreaks? Can we go back to the designers and say to them, you should have known that the car would opt to run this guy over rather than hop a curb in this use case, therefore you are to blame for a death which would have been avoided if a human were at the wheel? How such a car behaves in an accident situation cannot simply be reduced to a physics or engineering problem.
posted by Diablevert at 11:33 AM on July 31, 2013


I may be echoing the cries of a curmudgeon, but honestly, to Hell with autonomous cars.

Ethics and philosophy aside, what about the love of driving? That inherent feeling of control (illusion or not), visceral power, adrenaline and freedom.

There is a very good reason a significant portion of the populace spends money on higher horsepower, better cornering traction, and sportier vehicles. It certainly has very little to do with making it from point A to B with the greatest amount of fuel efficiency in the absolute safest possible manner.

I don't condone people using public roadways as their own personal racetrack, but I do appreciate the joy of driving.

Also, considering the amount of tinkering people want to do to their phones, I'm sure there would be an immediate underground culture of "overclocking" and bypassing certain safety features, just to be able to make it to work 2 minutes faster than everyone else.

Lastly, a much more important debate to be had would be the significant loss of revenue for city and state agencies. Robotic cars = no traffic laws broken, which would be a loss to the tune of millions annually for almost every state.

For me, well, I plan to keep my 1957 Bel Air with no seat belts, no air bags, no ABS or air conditioning for as long as I can, and I'll just continue to drive myself, thank you very much...
posted by Debaser626 at 11:41 AM on July 31, 2013 [1 favorite]


I enjoy driving, but there's joyful driving and there's commuting to work in rush hour traffic, and the two are not really related activities.
posted by jacquilynne at 12:06 PM on July 31, 2013 [5 favorites]


That inherent feeling of control (illusion or not), visceral power, adrenaline and freedom.

Dunno, seems like a fair trade to me for some percentage of 32,000 additional deaths a year.

Alternately phrased: How many microdeaths/micromorts is that feeling of driving worth to you?
(Your own added mortality, and more importantly, the added mortality of everyone around you)
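
Back-of-the-envelope, using round US numbers (~32,000 road deaths against roughly 3 trillion vehicle-miles a year, everyone lumped together, so treat it as illustrative only):

    # One micromort = a one-in-a-million chance of death.
    deaths_per_year = 32_000
    vehicle_miles_per_year = 3.0e12  # rough US annual total

    micromorts_per_mile = deaths_per_year / vehicle_miles_per_year * 1e6
    print(f"{micromorts_per_mile:.3f} micromorts/mile")                # ~0.011
    print(f"one micromort every {1 / micromorts_per_mile:.0f} miles")  # ~94 miles

Call it a micromort per hundred miles, spread over everyone near the road, before you even count injuries.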
posted by CrystalDave at 2:03 PM on July 31, 2013 [1 favorite]


Dunno, seems like a fair trade to me for some percentage of 32,000 additional deaths a year.

Alternately phrased: How many microdeaths/micromorts is that feeling of driving worth to you?
(Your own added mortality, and more importantly, the added mortality of everyone around you)



Considering that, while driving, I've never killed myself (obviously), killed anyone else, or played a part in anyone's death?

I'm ok with it.
posted by Debaser626 at 8:19 PM on August 1, 2013


I'm ok with it.

Are you saying that people who are killed in car crashes or kill others in car crashes did so because they made a mistake? Because you can do everything "right" and still kill yourself or someone else in a car crash. Also, everyone who has killed themselves or someone else in a car had gone their whole life without doing so until they did.
posted by ultraviolet catastrophe at 11:08 AM on August 2, 2013


DU: Speaking as a software engineer, the day autonomous cars are legal is the day I stop driving and build a thick brick wall between my house and the street.

Have you SEEN the software out there?

Yes. And I now work for the train industry, writing software so human drivers are no longer able to miss a stop signal by even one second, and any violation of local speed limits results in emergency braking, on the assumption that something has gone terribly wrong.

Redundant software & computer systems that all fail in the same direction: remove engine power and clamp on full emergency brakes (which are only disabled by active power and hydraulic pressure).
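
As a toy sketch of the pattern (Python, purely illustrative; the real thing is redundant hardware, not a script):

    # Fail-safe control loop: every failure mode -- overspeed, sensor
    # fault, crash, power loss -- converges on the braking state.
    def control_cycle(read_speed_ms, limit_ms):
        try:
            if read_speed_ms() > limit_ms:
                return "EMERGENCY_BRAKE"  # overspeed: assume something is wrong
            return "NORMAL"
        except Exception:
            return "EMERGENCY_BRAKE"      # any fault at all fails toward safe

    def dead_sensor():
        raise IOError("speed sensor offline")

    print(control_cycle(lambda: 18.0, limit_ms=20.0))  # NORMAL
    print(control_cycle(dead_sensor, limit_ms=20.0))   # EMERGENCY_BRAKE

And since the brakes are released by power rather than applied by it, even total loss of the computer converges on the same safe state.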

I'll take any of the autonomous LRT systems over human-operated ones any day. With a mountain of statistics to back it up.

And human train operators are about a billionty times safer than human car operators. You know what the punishment is for creeping through a stop sign the 2nd time, as a train conductor? License revoked for life.
posted by IAmBroom at 3:03 PM on August 6, 2013


Yes. And I now work for the train industry

Hey I know this company in Canada that really needs to talk to you.
posted by GuyZero at 3:07 PM on August 6, 2013


That clusterfuck is waaay beyond software to fix, GuyZero. It's a human issue, being made worse by a bigmouth human CEO.

OTOH, the high-speed train that went at nearly twice its rated speed had everyone I've talked to upset. THAT is something that shouldn't happen on modern systems. And... Google now tells me it wasn't modern software but (go figure) a human in charge that killed all those passengers.

We recently spent a week trying to coax a train to break its speed limits by 5mph while in test mode, and got shut down time and time again by yet another safety system we had forgotten to put on temporary hold. ATO (Automatic Train Operation) and double-gate crossings (so you physically can't drive through a crossing, because both lanes are blocked) will become standard, and they're going to make the train industry much, much safer. And it's already one of the safest transport industries in the world.
posted by IAmBroom at 3:20 PM on August 6, 2013 [1 favorite]


“robot” or automated cars (what others have called “autonomous cars”, “driverless cars”, etc.)

How about we just go with "Autobots" and call it a day?


Surely automated/autonomous automobiles = "auto-autos"?
posted by naoko at 7:04 PM on August 7, 2013



