First Law of Robotics
May 24, 2018 6:31 PM

Uber’s Self-Driving Car Didn’t Malfunction, It Was Just Bad. There were no software glitches or sensor breakdowns that led to a fatal crash, merely poor object recognition, emergency planning, system design, testing methodology, and human operation.

Uber self-driving car saw pedestrian but didn’t brake before fatal crash, feds say:
  • The vehicle’s radar and LIDAR sensors detect an object in the road about six seconds before impact.
  • As their paths converge, the vehicle’s self-driving software classifies Herzberg first as an unknown object, then as a vehicle, and finally as a bicycle, with varying expectations of the future travel path.
  • At 1.3 seconds before impact, the vehicle’s computer decides that an emergency braking maneuver is needed. But Uber does not enable emergency braking maneuvers while the vehicle is under computer control, “to reduce potential for erratic vehicle behavior,” and it has also disabled the Volvo’s factory AEB system. The system is not designed to alert the driver that braking is needed.

Uber chose to disable emergency braking system before fatal Arizona robot car crash, safety officials say:
The NTSB listed three reasons the brakes were not applied before the fatal crash:
  • Uber had chosen to switch off the collision avoidance and automatic emergency braking systems that are built into commercially sold versions of the 2017 Volvo XC90 test vehicle. Uber did that whenever its own driverless robot system was switched on — otherwise, the two systems would conflict.
  • However, Uber also disabled its own emergency braking function whenever the test car was under driverless computer control, “to reduce potential for erratic behavior.”
  • Uber expected the test driver both to take control of the vehicle at a moment’s notice and to monitor the robot car’s performance on a center-console-mounted screen. Dashcam video showed the driver looking toward the center console seconds before the crash. The driver applied the brakes only after the woman had been hit at 39 mph.
The National Transportation Safety Board's preliminary report on the incident (pdf).
posted by peeedro (124 comments total) 29 users marked this as a favorite
 
And yet Uber just announced that they're ready to restart testing their cars on Pittsburgh's streets this summer.
posted by octothorpe at 6:35 PM on May 24, 2018 [3 favorites]


So there's your smoking gun. There is a human being who is responsible for the decision to disable the second emergency braking function. That person, and the C-suite executive that that person reported to, should be held criminally liable. Uber should be held civilly liable to the degree that the company should be run out of business.

Every automaker and autonomous vehicle company should tremble in fear that disabling an emergency system would make the company, and all that wonderful tasty shareholder value, evaporate in milliseconds.

Burn Uber down.
posted by tclark at 6:36 PM on May 24, 2018 [102 favorites]


I don't know much about anything but what I do know is that when you are testing software, you absolutely want AS MUCH ERRATIC BEHAVIOUR AS POSSIBLE because that is how you BUG FIX. So what the fuck were they testing, since they evidently weren't testing?

Autonomous vehicles are an important future technology and it's remarkable that it is being left in the hands of clowns like Uber.
posted by turbid dahlia at 6:43 PM on May 24, 2018 [52 favorites]


I'm not sure what I think an appropriate punishment would be here. Certainly there are extant companies that have done much worse and still exist, although maybe they shouldn't. (Dow Chemical, anyone? Union Carbide?) It makes me wonder, though: is there actually any precedent at all for forcibly disbanding a corporation due to negligence or malfeasance? Not just fining them into oblivion, but actually, like, directly revoking their charter or whatever—I'm not even sure what the correct term would be. What would that look like? Has that ever been done, anywhere, ever?
posted by Anticipation Of A New Lover's Arrival, The at 6:46 PM on May 24, 2018 [7 favorites]


As I've heard it said, I'll believe in corporate personhood when Texas executes one. There are plenty who deserve it, as our friendly Culture GSV states. Uber is just the latest. But they should be shut down immediately, and completely. It is long overdue to pierce the corporate veil and have more executives charged with crimes the company commits, and, in extremis, disband the corporation entirely.
posted by tclark at 6:51 PM on May 24, 2018 [20 favorites]


I would prefer to just burn down the fallacy that self-driving cars are an immediate panacea, just barely over the horizon, that will solve all car-based problems.

Self-driving cars solve one very small subset of human mobility problems (those primarily baked into distracting modern car design and elderly populations), and currently only in theory. In practice, they generate far more basic problems of decision and control that are dead fucking obvious if there's a human at the wheel and brakes vs. a poorly trained machine. Should the machine's trainers be held accountable as a human driver would be?

I would, in this case, hold them accountable as an example, to show that this fatality is not going to be an acceptable speed bump on the way to (inevitably) poorly conceived autonomous vehicle dominance in the name of a bullshit, cavalierly engineered future that some dumbshit thought would solve all our problems.
We may need this tech in the future, but we do not need to compromise an inch as to what shapes it as it becomes utile. We do not need it now, so it's better that it can be perfected before it is pervasive.
posted by Cold Lurkey at 6:59 PM on May 24, 2018 [26 favorites]


Uber is an organized criminal enterprise. It’s hardcoded into its DNA.
posted by Horace Rumpole at 7:01 PM on May 24, 2018 [17 favorites]


SOMEONE AT A DESIGN MEETING: "And then here, we disable the emergency thingamabob..."
TRAVIS: "Sounds good."
posted by rhizome at 7:01 PM on May 24, 2018 [2 favorites]


Uber self-driving car saw pedestrian but didn’t brake

It really is just like a human driver!
posted by rodlymight at 7:11 PM on May 24, 2018 [22 favorites]


Proponents of driverless cars are always talking about how they're safer, except here we have extremely human decision makers who turned off the fucking automatic don't-hit-a-person feature. This ain't exactly one of the difficult versions of the trolley problem!
posted by entropone at 7:12 PM on May 24, 2018 [16 favorites]


However, Uber also disabled its own emergency braking function whenever the test car was under driverless computer control, “to reduce potential for erratic behavior.”

What does this even mean? The latest release was buggy and getting in the way of other things they wanted to test so they turned off that feature flag? Was it activating too often?
posted by JauntyFedora at 7:16 PM on May 24, 2018 [1 favorite]


This ain't exactly one of the difficult versions of the trolley problem!

Given enough pedestrian and cyclist fatalities, all bugs are shallow.
posted by peeedro at 7:27 PM on May 24, 2018 [9 favorites]


Thinking about it a little more, I guess what you'd do is nationalize the company and then liquidate it. Would be pretty rough on the low-level workers, though. The fundamental problem I think is that a corporation is made of people, and not all those people are equally culpable. Still, there are probably times when the best thing to do would be to just burn the whole company to the ground. Not sure if this is one of them, though; might be better to just prosecute everyone who looks like they had a direct hand in this particular bad decision, plus everyone who should have stopped them but didn't.
posted by Anticipation Of A New Lover's Arrival, The at 7:29 PM on May 24, 2018 [2 favorites]


I appear to work right near where a number of self-driving startups must be headquartered. I see them driving around all the time.

Now Google has been doing this for many years with lots and lots of cars. They even had these little pods that had no steering wheel or brakes! (They only went up to 25mph). They also have billions of simulated driving hours.

I see them on the road all the time. I would trust the Google cars over most human drivers in any situation, even unusual, unexpected situations.

These startups, though, I won't even get on the road until they are long out of sight. I take walks around work, and if I see one nearby, I'll end my walk.

We have required driving tests and licensing for humans--why don't these exist for self-driving cars? It seems any idiot with VC money can put one on the road, and Uber is one of the biggest idiot companies.
posted by eye of newt at 7:31 PM on May 24, 2018 [19 favorites]


And yeah (on further consideration) I'd include the front-line people who obeyed the order to throw the switch to disable the safety feature. Engineering (both software and otherwise) is far too amoral, as a profession. If your job is to design, say, shiny new guns that kill people better than ever before, you should be personally liable when the guns you design are used to commit murders. "It's just my job" is no more a defense than "I was just obeying orders" was for Nazi war criminals. Engineers (and everyone everywhere) should have to weigh the moral and legal consequences of the orders they're being given.
posted by Anticipation Of A New Lover's Arrival, The at 7:33 PM on May 24, 2018 [14 favorites]


>> “to reduce potential for erratic behavior.”
What does this even mean?


As per one article, apparently it means braking at a plastic bag.

With regards to Google ... how well can the car deal with ice? Or slick roads in general? Because those conditions are the ones in which I'd really like a driverless vehicle.
posted by steady-state strawberry at 7:44 PM on May 24, 2018 [1 favorite]


Frankly, if you gave me (as someone who studies the human side of this exact problem) dictatorial power to control which companies were permitted to engage in research, development and testing of autonomous vehicles, I'd ban Uber from ever touching this set of problems again. Everyone in this space with a shred of common sense or responsibility (e.g., automakers, but also Waymo) is, correctly, very conscious of the real-world implications of getting it wrong. As a consequence, they're damn careful, and they're petrified of screwing up like this.

Move fast and break things is a bad motto in general, but it's a legitimately murderous motto when it comes to road safety.

How close am I to this problem? I just spent a week at a conference talking about how quickly drivers can perceive hazards and respond to them, and used this collision as an example of what happens when you do the engineering wrong.

The largest failure here on Uber's part, aside from disabling the XC90's emergency braking and pedestrian detection features (which would have easily stopped the vehicle in time; this is not a hard problem in this space), was deploying a L3 autonomous vehicle (where drivers are told they can take their eyes off the road, because the vehicle is supposed to alert them if they're needed) and then treating it as a L2 (keep your eyes on the road) when it came to writing the damn code. Telling the safety driver "hey, take your eyes off the road, do this demanding visuomanual task, and peripherally monitor for things our shitty code doesn't catch" expects too much of the human in the driver's seat. And if they bothered to understand what humans can and can't do, they wouldn't have designed their crappy system like this.
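To put rough numbers on "would have easily stopped the vehicle in time" (a back-of-the-envelope sketch only: the ~40 mph travel speed is based on the 39 mph impact figure quoted above, the 6-second and 1.3-second timings come from the NTSB summary earlier in the thread, and the 0.7-0.8 g decelerations and 0.2 s actuation latency are generic dry-pavement assumptions, not anything Uber or Volvo published):

```python
# Back-of-the-envelope stopping arithmetic for the timeline described above.
# Assumptions (mine, for illustration): ~40 mph travel speed, 0.2 s actuation
# latency, 0.7-0.8 g of braking on dry pavement. The 6 s detection and 1.3 s
# emergency-braking timings are from the NTSB summary quoted earlier in the
# thread; nothing here is Uber's or Volvo's data.

MPH_TO_MS = 0.44704
V0 = 40 * MPH_TO_MS                    # ~17.9 m/s

def impact_speed_mph(time_to_impact_s, decel_g, latency_s=0.2):
    """Speed at the pedestrian's position if full braking starts latency_s
    after the decision point (constant closure speed until the brakes bite)."""
    gap_m = V0 * time_to_impact_s - V0 * latency_s
    v_sq = V0 ** 2 - 2 * (decel_g * 9.81) * gap_m     # v_f^2 = v_0^2 - 2ad
    return 0.0 if v_sq <= 0 else (v_sq ** 0.5) / MPH_TO_MS

for label, t in [("object detected, ~6.0 s out", 6.0),
                 ("emergency-braking call, 1.3 s out", 1.3)]:
    for g in (0.7, 0.8):
        print(f"{label}, {g} g braking -> impact at "
              f"{impact_speed_mph(t, g):.0f} mph")
```

Six seconds is room to stop several times over; even the 1.3-second call, acted on promptly, drops the impact speed from roughly 40 mph to somewhere in the single digits or teens.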

I've spent a lot of time talking with people about the peril of overtrusting automation (e.g., the drivers who put their Tesla in Autopilot [a L2 system] and think they can take their eyes off the road), but this is dramatically worse. This is a system engineered through benign malice to expect the driver, who has no anticipation of needing to take control, to do so when they aren't alerted to any problem except through their own senses.

It's crap like this that might be why I told a fellow conference attendee a few days ago that I'd view a stint working at Uber (not driving for them, but working for the company itself) as only slightly less of a stain on a person's reputation than working for the Trump Administration.
posted by Making You Bored For Science at 7:47 PM on May 24, 2018 [131 favorites]


Some stats from Business Insider:
- Google's cars need human intervention, on average, every 5000 miles.
- Uber's autonomous cars need human intervention, on average, every 0.8 miles.

There is some question about whether the definition of "intervention" is the same for both companies, but still, the difference is ridiculous.
posted by nnethercote at 7:50 PM on May 24, 2018 [18 favorites]


To steady-state strawberry: I work in Mountain View, California, where Google has done all its preliminary testing. Weather here is really nice, which means no ice, and 'slick' just means a few days with wet roads. So their early testing isn't really applicable anywhere else. I've heard they are now doing testing in areas with worse weather, but don't know much about it.

I'm not surprised that their first self-driving cars for the public are in Arizona.
posted by eye of newt at 7:51 PM on May 24, 2018 [4 favorites]


deploying a L3 autonomous vehicle (where drivers are told they can take their eyes off the road, because the vehicle is supposed to alert them if they're needed) and then treating it as a L2 (keep your eyes on the road) when it came to writing the damn code.
This exact thing is what stood out to me. They disabled both the automatic braking and any warning system for the driver to engage in emergency braking and told the driver to be distracted while driving. The engineers created a system that is fundamentally more prone to catastrophic error than any car on the road.
posted by demiurge at 7:53 PM on May 24, 2018 [28 favorites]


#1. If you have not done so already, you need to accept right now that the supposed safety benefits of self-driving cars are a PR construct, and a perhaps never-to-be-attained future hope. They are by no means a current reality.

#2. The supposed safety benefit of self-driving cars is the PR approach that massively financed large technology firms have chosen as a useful wedge to garner public acceptance of, and even embrace of, the technology from which they stand to make billions in profits.

#3. The actual purpose of the technology is not safety; rather it is to make a profit. There are two basic ways to make a profit: Displacing current paid drivers via completely driverless technology and selling the technology to car owners as a convenience (ie, read and tweet on your way to work--your commute will be fun instead of a grind!). Both of those depend on convincing people the technology is safe enough (ie, "PR safety") but not so much on being actually safe (ie, "real-life safety").

#4. Furthermore, the safety and convenience of vehicle occupants is paramount--because if those people are threatened, you won't be able to sell your system and make money--and the safety of anyone outside the vehicle is a very, very low level secondary consideration. People outside the vehicle are not customers and do not figure into the equation at all, except insofar as that consideration is forced by, for example, law.

#5. We are already seeing the "occupant-first" dynamic playing out, even in this very first fatality. Uber dialled down the sensitivity of the system precisely because it was annoying to the vehicle occupants--too many 'unnecessary' sudden stops when the system couldn't identify an object, which was disturbing to potential customers.

#6. We are going to see this dynamic play out on an ever-larger scale as this technology rolls out: Companies, who are in control of a secret, proprietary technology, are going to turn the knobs for their benefit and their profit, and will only turn them to favor safety for the general public when forced to.

#7. There is absolutely no proof whatsoever that driverless car systems are ever going to be safer than human drivers. Human drivers go more than 86 million miles, on average, between fatal crashes, more than 588 million miles between fatal pedestrian crashes, and around 4 billion miles between fatal bicycle crashes.

We often complain about human drivers, how inattentive they are, and so on. But those numbers are going to be very, very hard for any automated system to beat.

Perhaps impossible for a system that cannot even decide whether a person pushing a bicycle is an "unknown object", a "vehicle", or a "bicycle".

Uber, by way of contrast to 'unreliable' human drivers who consistently drive almost 100 million miles between fatal crashes, barely made it 3 million miles before killing a pedestrian pushing a bicycle. This is a system that is far, far worse than human drivers are. (A rough rate comparison at the end of this comment puts numbers on that gap.)

#8. Regardless of whether these systems might be reasonably safe at some future time, they clearly are not safe at all right now under current testing regimens.

#9. The Uber self-driving vehicle program is like a textbook example of every possible thing you could do wrong in engineering and testing a safety-critical system.

#10. Many other companies are currently testing these self-driving systems on our public roads and some of them are probably doing an extremely admirable job of engineering and testing safety-critical systems; others--quite certainly--are more like Uber and doing a completely terrible job.

#11. Because these systems are proprietary we have absolutely no way to know which are being developed responsibly using industry standard safety-critical engineering standards, and which are being hacked together at lightning speed by socially irresponsible corporations motivated solely by the potential profit and who live by the slogan, "Move fast and break things."

Somehow we are allowing these proprietary, untested systems on our public roads without requiring any safety standards whatsoever, and also falling for the PR line that they are somehow going to make roads safer.

Don't buy it. Demand that they prove safety first and demand real accountability when they screw up--which they will, many times.
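Postscript, to make the rate gap in #7 concrete (a rough sketch only: one fatality in ~3 million miles is far too small a sample for real statistics, so treat this as an illustration of the cited figures, not a measured rate):

```python
# Rough comparison of fatal-crash rates per 100 million miles, using only the
# figures cited in #7 above (~86 million human-driven miles per fatal crash;
# ~3 million Uber autonomous miles with one pedestrian fatality). A single
# event is a tiny sample; this is illustrative arithmetic, not statistics.

MILES_PER_FATAL = {"human drivers": 86e6, "Uber test fleet": 3e6}

rates = {who: 1e8 / miles for who, miles in MILES_PER_FATAL.items()}
for who, rate in rates.items():
    print(f"{who}: ~{rate:.1f} fatal crashes per 100 million miles")

print(f"implied ratio: roughly "
      f"{rates['Uber test fleet'] / rates['human drivers']:.0f}x worse")
```

Even with every caveat about a one-event sample, the cited figures put the test fleet well over an order of magnitude above the human baseline, which is the opposite of the safety story the technology is being sold on.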
posted by flug at 7:57 PM on May 24, 2018 [97 favorites]


They've actually done something even dumber: they've distracted the safety driver and made them take their eyes off the road. The distraction is bad, but peripheral vision, while useful, will not save you.

In fact, in the lab, when we've distracted subjects and made them take their eyes away from the forward road, they can still detect things happening on the road ahead, but they're slower and they miss more targets. What does that mean on the road? It means a collision, and as we see in the case of the collision in Arizona, an entirely avoidable death.

(Edited to change "unavoidable" to "avoidable". Blame post-conference brain.)
posted by Making You Bored For Science at 7:57 PM on May 24, 2018 [15 favorites]


Wow. This is chapter-in-a-textbook-a-hundred-years-from-now levels of stupidity. The people responsible for these choices, including the entire engineering team plus their managers up to the Cx level, need to be brought up on criminal negligence charges, the company needs to be hit with the biggest wrongful death suit ever, and ultimately shut down in the fastest record-skip implosion imaginable.

Just. Wow.
posted by seanmpuckett at 8:09 PM on May 24, 2018 [11 favorites]


I will trust "self-driving" cars with the public roadways (and my fragile, unshielded cyclist body) only after our civilization has self-flying passenger planes. Not one second before. Flying is far and away simpler than driving in traffic with heterogeneous obstacles and we still haven't gotten autopilot reliable enough to lose the humans.
posted by traveler_ at 8:19 PM on May 24, 2018 [8 favorites]


Making You Bored For Science, thank you for your contribution in this thread.
posted by ActingTheGoat at 8:21 PM on May 24, 2018 [9 favorites]


It makes me wonder, though: is there actually any precedent at all for forcibly disbanding a corporation due to negligence or malfeasance? Not just fining them into oblivion, but actually, like, directly revoking their charter or whatever—I'm not even sure what the correct term would be. What would that look like? Has that ever been done, anywhere, ever?
posted by Anticipation Of A New Lover's Arrival, The at 6:46 PM on May 24


Eric Holder wouldn't dismantle BP America. There are still dead whales washing up and still uncompensated cleanup workers made sick from Corexit. They lowered the fines to keep the company out of bankruptcy.

If the USA was ever going to dismantle a corporation, it would have been BP. They could have easily sold its assets to Shell or Stone or Anadarko, and kept federal revenues flowing. I know our lives are cheaper in District 11 / Southern US, but I don't see the feds bringing down the hammer for one death, when they wouldn't for the eleven men who BP burned to death.

EPA tried to ban the company, god bless Lisa Jackson. Holder was not about to dismantle that company, tho, nor Obama, frankly.
posted by eustatic at 8:22 PM on May 24, 2018 [16 favorites]


I'm a very big supporter of the development of autonomous (Level 4/full autonomy) vehicles. I disagree strongly with everyone who says it can't be done or it won't actually result in fewer injuries and deaths. But we also need responsible companies doing this work, and executives with mortal, existential personal fear of prison if they take shortcuts like Uber did.

We already knew Uber was run by jackals. As angry as I am about the situation, we honestly can't know what would've happened had the software been working as it should have, because it was crippled. Crippled by empty husks of humanity who unchecked a box on an SDK UI because dealing with erratic behavior would've totally been a pain in the ass. Tech frat-bros with Galt's Gulch tattoos, who don't realize Tyler Durden was the villain, and who wouldn't have even needed Stanley Milgram to berate them before administering a "lethal" shock.
posted by tclark at 8:35 PM on May 24, 2018 [16 favorites]


Yeah, I lived in New Orleans when the Deepwater Horizon spill was happening. That was… hoo boy. Makes this seem like—well, I don't want to trivialize a person's death, but you see the difference in scale. And yet BP appears to be doing just fine in 2018, thanks for asking.
posted by Anticipation Of A New Lover's Arrival, The at 8:35 PM on May 24, 2018 [3 favorites]


So there's your smoking gun. There is a human being who is responsible for the decision to disable the second emergency braking function. That person, and the C-suite executive that that person reported to, should be held criminally liable. Uber should be held civilly liable to the degree that the company should be run out of business.

You can be assured that Uber is working very very hard to ensure the driver takes every inch of the blame here.
posted by dilaudid at 8:38 PM on May 24, 2018 [7 favorites]


working very very hard to ensure the driver takes every inch of the blame here.

I suspect the pedestrian will be blamed here. She tested positive for both pot and meth, after all.

(Nevermind that that’s all the more reason for us to WANT her to be walking rather than driving, or that she might have been a deer instead. She wasn’t a fully sober adult, so she can be blamed for being hit by a 1000+ lb motorized vehicle.)
posted by steady-state strawberry at 8:45 PM on May 24, 2018 [13 favorites]


Not only is disabling the Volvo autobraking stupid from a safety standpoint, it's also stupid from a testing standpoint. The frequency of it getting triggered would've been a great metric to use to deny bonuses to the software engineers working on the system.
posted by ckape at 8:50 PM on May 24, 2018 [4 favorites]


re: Google testing their cars in places "with weather" -- it rains pretty regularly in Kirkland, WA, aside from the summer months.

I hope the "safety driver" is ok, physically, emotionally, and economically.
posted by batter_my_heart at 9:06 PM on May 24, 2018 [2 favorites]


They disabled both the automatic braking and any warning system for the driver to engage in emergency braking and told the driver to be distracted while driving. 

Vehicular Chernobyl.
posted by justsomebodythatyouusedtoknow at 9:27 PM on May 24, 2018 [7 favorites]


I mean, if the car was suddenly braking whenever it saw a shadow or something, that could also be life-threatening to cars behind. Maybe they thought that leaving emergency braking to the human was the safer of the options.

But I’d say, if you have a car that needs its safety features to be switched off in order to function, you have no business driving anywhere but the most controlled environments until you figure that out. Rent out an abandoned neighborhood somewhere and let Uber execs (those who stand to profit) be the ones teaching the car how to behave around other drivers and pedestrians, rather than using unsuspecting private citizens for that purpose.
posted by mantecol at 9:38 PM on May 24, 2018 [3 favorites]


the drivers who put their Tesla in Autopilot [a L2 system] and think they can take their eyes off the road

Apropos of this, Tesla Model S driving on Autopilot accelerated before crashing into a Utah fire truck
According to the Associated Press, police suggested that a car traveling in front of the Tesla slowed down, causing the Autopilot-controlled Tesla to do the same. The leading car then changed lanes, prompting the Tesla to accelerate in order to regain its pre-set speed of 60 mph shortly before the crash.

Police said in an earlier statement: "Witnesses indicated the Tesla Model S did not brake prior to impact."

The driver told police that the Tesla was on Autopilot before the collision, and that she had been looking at her phone. She suffered a broken ankle in the crash.
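A rough sketch of why a plain adaptive-cruise controller produces exactly the behavior described in that quote (this is emphatically not Tesla's code; it's the generic textbook logic, and the 2-second headway and stationary-target filtering are my assumptions about how radar-based ACC is commonly tuned):

```python
# Toy sketch of a naive adaptive-cruise-control update step, to illustrate how
# the behavior in the quote above can arise. Not Tesla's algorithm; just the
# textbook logic: hold a set speed unless a moving lead vehicle is tracked, in
# which case hold a time gap behind it. Radar-based ACC stacks commonly filter
# out stationary returns (to avoid braking for every sign or parked car),
# which is the assumption baked in below. Speeds in mph, gaps in seconds.

def acc_speed_command(set_speed, lead):
    """lead is None (no tracked target) or a dict with 'speed' and 'gap_s'."""
    if lead is None or lead["speed"] < 1.0:       # stationary returns ignored
        return set_speed                          # nothing tracked: resume set speed
    if lead["gap_s"] < 2.0:                       # closer than a 2 s headway
        return min(set_speed, lead["speed"] - 2)  # slow a bit to open the gap
    return min(set_speed, lead["speed"])          # otherwise follow the lead

# The Utah scenario, step by step:
print(acc_speed_command(60, {"speed": 45, "gap_s": 1.8}))  # lead slows -> 43
print(acc_speed_command(60, None))                         # lead changes lanes -> 60
print(acc_speed_command(60, {"speed": 0, "gap_s": 1.5}))   # stopped fire truck -> 60
# The stationary truck is filtered out, so the command stays at the set speed
# until the (separate) AEB system or the driver intervenes.
```

Which is the point: accelerating toward the stopped truck isn't a mysterious glitch so much as the designed behavior of a system that assumes an attentive human is watching the road.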
posted by Existential Dread at 9:43 PM on May 24, 2018 [4 favorites]


the vehicle’s computer decides that an emergency braking maneuver was needed. But Uber has disabled the Volvo’s factory AEB system

The executives who made this decision need to be charged with criminally negligent homicide.
posted by ErisLordFreedom at 9:56 PM on May 24, 2018 [5 favorites]


I don't believe the primary reason for Uber disabling their emergency braking would be driver convenience. There is a cost associated with slamming on your brakes - you might get rear-ended! - but Volvo et al. have obviously estimated their signal-to-noise ratio much more responsibly.
posted by rocketbadger at 10:03 PM on May 24, 2018 [1 favorite]


I'd like to have a look at Uber's source code. I'm really curious how well-written and maintainable the programs of a company that wants to "move fast and break things" are.
posted by Harald74 at 10:08 PM on May 24, 2018 [4 favorites]


Robocars seem like a bad way for rich people to invest money. They don’t work very well and the major costs in transport aren’t labor, they are capital and fuel. Robocars are going for the high hanging fruit. Why do that? Because that is more fun than investing in more efficient motors or more efficient siting of distro centers. (Side note: fuel costs, oddly, go up when pro biz Republicans are in charge, like now, and down when pro people Democrats are in charge. Seems like a simple connection for folks to make.)

I think a major part of this problem is that rich people need somewhere sexy to invest all their money, and robocars are it, ATM. I think one solution to the problem of robocars killing people is to tax the hell out of rich people.
posted by notyou at 10:09 PM on May 24, 2018 [9 favorites]


I don't believe the primary reason for Uber disabling their emergency braking would be driver convenience.

This is Uber we're talking about. You know, the company who has such a toxic culture that it's kind of a pariah in the heart of tech-bro Babylon. They don't get a benefit of the doubt here.

Besides, it would never be for driver convenience. It was disabled for coder convenience. Or, more precisely, some executive's convenience.
posted by tclark at 10:12 PM on May 24, 2018 [6 favorites]


It was disabled for some investor’s convenience.
posted by notyou at 10:14 PM on May 24, 2018 [9 favorites]


The fundamental problem I think is that a corporation is made of people, and not all those people are equally culpable.

You know this is the line of thinking that somehow only ever gets trotted out as meaningful for the corporate death penalty. We don't act on this basis in making military decisions. Or individual-level criminal decisions. Somehow it's only once you get to the concept of forcibly disbanding a corporation we need to worry about knock-on effects - and it's only forcible disbanding by the state we care about, apparently, because otherwise every corporate bankruptcy would end in a government bailout.

This is a pernicious line of thinking. It encourages company towns and every other method of making employees as dependent as possible. It encourages "Too Big To Fail" as a corporate goal. Oh and somehow coincidentally it means that corporations will never face meaningful criminal penalties. Ever. Neither the corporate entity nor the individuals driving it. Very convenient, that.
posted by PMdixon at 10:59 PM on May 24, 2018 [22 favorites]


I think a major part of this problem is that rich people need somewhere sexy to invest all their money, and robocars are it, ATM. I think one solution to the problem of robocars killing people is to tax the hell out of rich people.

Reminds me of an article I read the other day, about how the hot new thing in Silicon Valley is donating your money to charities that never spend it rather than to organizations that actually do good in the community (or letting it be taxed and benefiting communities that way).
posted by mantecol at 11:04 PM on May 24, 2018 [7 favorites]


I don't understand the autonomous vehicle bashing crowd. Nobody said autonomous cars would be a panacea, nobody even said there would be zero fatal accidents. The narrative has been very cautious the whole time, namely that autonomous cars, if executed properly, have a potential to vastly reduce human-error induced accidents on roads, and improve traffic flow.

Fatal accidents WILL happen, which means real people will die on contact with whatever technology we are testing, but that does not mean we should scream BACK TO THE HORSE-CART!

This case, as some have suggested, may be down to some very toxic, anti-human, anti-social corporate values at uber (and many other tech startups). Uber very clearly is evil and is trying to prove it on every occasion.

The reason to switch off Volvo's emergency system may have been well-argued (to an engineer at least), like for instance a mismatch of its sensing technology with uber's, but the reason stated is ridiculous. It is precisely the point of the testing stage to catch as many bugs and mismatches BEFORE they become fatal accidents.

It almost seems as if uber's only raison d'etre is to undermine autonomous driving.
posted by Laotic at 11:09 PM on May 24, 2018 [9 favorites]


I don't understand the autonomous vehicle bashing crowd.

It's not robot cars that we object to; it's that the corporations that are designing them are going to work very, very hard to not be held accountable for any deaths or injuries they cause.

We don't trust any corporate activity that's inspired by venture capitalism rather than a hope to make communities better, and we don't want silicon valley's techbro culture - you know, the guys who brought us Gamergate and Reddit and Twitter's "we can't possibly block Nazis (except in Germany where it's illegal not to)" policy - to be given a handwaved free pass to operate two-ton death machines on public streets where our kids are.

I would love to see self-driving cars... or rather, self-driving buses. I don't think we need cars with FEWER than one person on our already-crowded roads. But I'm willing to posit that that starts with cars.

But I want the people who design them to be held accountable for problems they cause. If a car with a human driver oops-accidentally-crashes-into-someone, and they die, there's often a trial; there's at least an investigation. I want an autopilot car to have the same possible result: if that car injures or kills someone, a person can be held responsible.

Maybe the death was just a tragic accident with no need to file charges - but I want the option for a trial to be there. And right now, it looks like Google and Uber are working to distance themselves from any consequences from their choices, which apparently include "disable the safety features we've spent the last hundred years learning how to add to these rolling death machines."
posted by ErisLordFreedom at 11:32 PM on May 24, 2018 [22 favorites]


mantecol, your link (never spend it) deserves a separate post. It makes one think that the U.S. is a country which thinks, hey, we'll just give power to whoever has the most money and let them decide about the distribution of money. Incredible.

ErisLordFreedom, I was referring to flug's one-sided manifesto above ("safety benefits of self-driving cars are a PR construct" - no, they are an ongoing discussion).

As for your self-driving buses - we've had self-driving trains for some time.

A self-driving car has successfully been executed in 1985.

I think in the end we all agree companies need to be held under scrutiny, every accident and incident must be investigated and studied, but let's not knee-jerk the budding technology into oblivion.
posted by Laotic at 11:52 PM on May 24, 2018 [7 favorites]


You mean UBER might DO SOMETHING WRONG?!?!?
Unpossible!
posted by Docrailgun at 12:10 AM on May 25, 2018 [1 favorite]


I wonder what Volvo lawyers and engineers will have to say about this. Not in the well-crafted PR statement, but behind closed doors in a meeting room taking the wankers to task. Oh to be a fly on the wall.
posted by thegirlwiththehat at 12:14 AM on May 25, 2018 [2 favorites]


As for your self-driving buses - we've had self-driving trains for some time.

And these trains, they drive on the open road, do they?
posted by Sys Rq at 12:18 AM on May 25, 2018 [5 favorites]


I think we never should have left the horse cart, to be honest. None of this shit is worth it. The idea of self-driving cars and the practicality of it is great, but we are not good or smart enough to be trusted to do it the right way, i.e., without killing a bunch of innocent people for stupid reasons. I thought that the minute I heard about them and have not been proved wrong. I have an idea: maybe I can build a rifle that does very delicate heart surgery. Well, it's not done yet, and it has shot a bunch of patients who haven't consented to this and will shoot many more before it's ready for sale. Hm, maybe there is some other way to do the surgeries? Um, no??? There isn't, obviously.
posted by bleep at 12:24 AM on May 25, 2018 [6 favorites]


A self-driving car has successfully been executed in 1985.

There are no bad cars, only bad road conditions.
posted by FJT at 12:25 AM on May 25, 2018 [4 favorites]


The narrative has been very cautious the whole time, namely that autonomous cars, if executed properly, have a potential to vastly reduce human-error induced accidents on roads, and improve traffic flow.

Sure, that's been the narrative from some corners as justification for the need or use of autonomous cars, but that isn't at all why companies are developing them. They aren't doing it out of some purely altruistic notion of social betterment; they're racing to develop them to be first and to profit. Their goal is to control the streets and highways, putting professional drivers out of work and charging everyone else rent to use the nation's roads. That's their goal, and the safety issue is just the hurdle they have to be seen as clearing to reach it.

It's almost impossible for me to imagine this radical level of change happening without greater government oversight. It seems to me that would have been the case had this attempted change happened from the '50s to the '80s, but we abandoned that set of values for the "disruptive innovation" approach that is claimed as the birthright of entrepreneurs by those who worship capitalism as the one true way.

For me, it's about time we look back to last century and think about starting a Technology Protection Agency within the government to function as the EPA is supposed to for the environment. It isn't just Uber, but Facebook, Amazon, Google, Wells Fargo, Cambridge Analytica and all the other bad actors in the tech field or using technology that affects us all in major ways without any serious attempt to regulate those aims. We're allowing profit seekers to drive our entire society without looking at the road to see where they're taking us or what they might be running over along the way. It's idiotic and reckless.
posted by gusottertrout at 12:28 AM on May 25, 2018 [10 favorites]


I’m beginning to wonder if self-driving cars will never really happen; except in the USA where it will be accepted that regular fatal catastrophes are just one of those things we have to accept, with thoughts and prayers.
posted by Segundus at 1:34 AM on May 25, 2018 [4 favorites]


I don't understand the autonomous vehicle bashing crowd.

It's tech and tech is becoming something Metafilter doesn't do well.
posted by MikeKD at 1:51 AM on May 25, 2018 [17 favorites]


As for your self-driving buses - we've had self-driving trains for some time.

And these trains, they drive on the open road, do they?


Even the TGV, which runs on its own tracks (its own specially-designed, tested, dedicated tracks that, among other things, specifically do NOT have any road crossings and are bordered by fences), still has human operators.

There have been zero fatalities in high-speed operation since 1981. That's 37 years of trains zipping by at 300 km/h across France and now more of Europe. Just to make it all the more clear, as stated in that article: 1.2 billion passengers have travelled on the TGV.

There was one fatal accident in testing, three years ago, when a TGV derailed and killed 11 passengers; 42 survived and were injured.

It's safer than air travel.

Uber's decisions are inexcusable.
posted by fraula at 2:18 AM on May 25, 2018 [6 favorites]


I don't understand the autonomous vehicle bashing crowd.

It's tech and tech is becoming something Metafilter doesn't do well.

That's the thing, it isn't just "tech", it's people's lives and human society, decisions about which are being left to those who think they can best profit off the rest of us in implementing whatever "vision" they have.

Personally I'm not "anti-tech", but I am strongly against letting those who develop technology force the rest of the world into their desired vision without first accounting for what the ramifications of it will be as these "innovations" not only can end up killing some by corporate neglect but can often scale up so quickly as to reshape the entire society, changing the lives of virtually everyone with minimal consideration for the costs or risks involved to the greater whole simply because some techbro wants to be the next Elon Musk and some venture capitalists will give him the dough to do what he wants for a cut of the profits if it works.
posted by gusottertrout at 2:33 AM on May 25, 2018 [20 favorites]



I don't understand the autonomous vehicle bashing crowd.

It's tech and tech is becoming something Metafilter doesn't do well.


I work in tech and love my job and I'm still seriously down on Uber's autonomous vehicle program. That woman who was run down could have been me or a friend or my son. Until that "accident" they were running their cars through Pittsburgh and I'd see them daily, and I'm really not happy about the idea of letting them back on the open roads in my city without some serious advances in the technology. Uber has graphically demonstrated that they're not ready for prime-time and also that their management is not to be trusted.
posted by octothorpe at 3:16 AM on May 25, 2018 [20 favorites]


> is there actually any precedent at all for forcibly disbanding a corporation due to negligence or malfeasance? Not just fining them into oblivion, but actually, like, directly revoking their charter or whatever

Arthur Andersen and the Myth of the Corporate Death Penalty: Corporate Criminal Convictions in the Twenty-First Century

If I get the author's gist, A.A. (which helped Enron defraud) wasn't charter-revoked, but oblivion-fined, and the fact that that was effectively a death sentence freaked out regulators, so there were no more de facto "corporate death penalties" for the subsequent study period (2001-2010).

So.. first hire some regulators with backbones?
posted by ASCII Costanza head at 3:46 AM on May 25, 2018 [6 favorites]


the fact that that was effectively a death sentence freaked out regulators so there were no more de facto “corporate death penalties” for the subsequent study period (2001-2010).

right! It wouldn't do to play the "blame game" after these chucklefucks destroyed the economy
posted by thelonius at 3:52 AM on May 25, 2018


Rent out an abandoned neighborhood somewhere

It occurs to me that this would be something Detroit would be excellent for.

Not to mention dealing with weather at the same time.
posted by steady-state strawberry at 4:07 AM on May 25, 2018


Rent out an abandoned neighborhood somewhere

It occurs to me that this would be something Detroit would be excellent for.


In the context of a company-owned autonomous vehicle running over pedestrians with de facto impunity, it reads like a quote from some crapsack-world dystopian sci-fi. Maybe we should update the definition by adding "like present-day Earth".
posted by hat_eater at 4:28 AM on May 25, 2018 [2 favorites]


I think in the end we all agree companies need to be held under scrutiny, every accident and incident must be investigated and studied, but let's not knee-jerk the budding technology into oblivion.

It's not knee-jerk to suggest that the company that for years now has been violating the public's trust and interest in favour of its shareholders should not be given the benefit of the doubt with respect to technological development in the public realm after someone is killed. Uber has never demonstrated that their interest is the collective one, yet people are willing to sacrifice the collective's safety in favour of "budding technology."

There's plenty of private land, maybe the executives and board can spend some time wandering around being the pedestrian test dummies for the next time they decide to turn off manufacturer safety features in favor of beta testing their own technology. This company has clearly shown it is not ready to be testing these cars around unwilling participants and should be held to the highest possible standard for getting whatever permits they're getting back.
posted by notorious medium at 4:29 AM on May 25, 2018 [10 favorites]


Uber's homeless-people stabbing autonomous robots were involved in the stabbing and execution of an unshaven homeowner last Wednesday.

A spokesman explained that the unshaven homeowner caused a false positive on the homeless person classifier and that people could easily avoid this in future by shaving more regularly, and that their robots had successfully stabbed hundreds of thousands of homeless people to death before the unfortunate incident. It indicated neither a software nor a hardware defect, but merely a failure by external and unrelated participants to fully comply with the unaccountable expectations of the homeless-people stabbing autonomous robots.

Damn pinkos questioned whether there was any benefit to having homeless-people stabbing robots at all when there were alternatives available, such as welfare systems or providing accommodation for those in need, but this didn't address the fundamental need for the provision of jobs for the workers and engineers of the homeless-people stabbing robot industries.

The government announced the forced purchase of thousands of homes today in an effort to support the homeless-people stabbing robot industry.

In unrelated news, temperatures reached the highest monthly levels for the 48th continuous month since records began.

posted by davemee at 5:16 AM on May 25, 2018 [6 favorites]


It's tech and tech is something tech doesn't do well.
posted by seanmpuckett at 5:40 AM on May 25, 2018 [14 favorites]


I don't understand the autonomous vehicle bashing crowd...

It's tech and tech is becoming something Metafilter doesn't do well...

I work in tech and love my job and I'm still seriously down...


Me too. Self-driving features are basically a software problem coupled with a sensor problem. I admit complete ignorance about the sensors. But I spend 14 hours a day trying to keep the products of the modern software industry vaguely functional. Maybe it's different elsewhere, but I haven't yet met even one person in a similar job who isn't scared as hell of self-driving cars.

Your nearest IT person probably spends most of their time dealing with flagship products of gigabuck corporations where the vendor can't accurately describe all the ramifications of how a given feature is supposed to work. But it doesn't matter, because when implemented, it doesn't work as designed anyhow. So we plan our work-arounds, accept the feature loss, file our fix requests, train our users in how they can do their job anyhow, and say "maybe it will be better in the next release."

That's annoying-to-infuriating when you're dealing with an email solution, dangerous for a firewall. In a software-driven car it's obviously lethal, and it's very damned clear that the people building them aren't trying to treat this as a categorically different problem.

There were once ways to create software that didn't fail: NASA did amazing things for the space shuttle. Start from a blank sheet of paper: here's a processor, some RAM chips, some I/O lines. Build everything else yourself so that every single piece can be tested exhaustively. While you're doing that, your peers will be in the next building doing the same thing from different foundations, so if you make a mistake hopefully they won't make the same one. Then we'll run five copies and put them through a voting logic, so that if one copy is damaged, it'll get voted down.
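A toy illustration of that voting idea, for anyone who hasn't seen it (this is obviously not the shuttle GPC code; it's just the general shape of N-version redundancy with a majority vote, with deliberately trivial stand-in "versions"):

```python
# Toy majority-vote harness over redundant implementations of the same
# computation, in the spirit of the shuttle-style redundancy described above.
# The three "versions" are trivial stand-ins written for illustration; the
# real work is developing them independently so their faults don't correlate.
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else -1   # deliberately faulty copy

def voted(x, versions=(version_a, version_b, version_c)):
    results = [v(x) for v in versions]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError(f"no majority among {results}")
    return value                                    # the faulty copy gets voted down

print(voted(3))   # -> 9, the bad result from version_c is outvoted
```

The hard, expensive part isn't the vote; it's building the redundant versions independently enough that their faults don't correlate, which is exactly the kind of discipline nobody funds in a move-fast shop.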

It's not at all clear that the people making life-safety-critical software could do that today, even if they wanted to. Even the behavior of your CPU is defined by thousands of lines of other people's software, which are known to be flawed and, even when working as designed, contain major unexpected risks.

The teams implementing self-driving aren't even trying: they're using COTS embedded operating systems or open-source systems on commercial hardware (and firmware), introducing thousands of components of other people's software, none of which will ever be re-examined or comprehensively tested. Unless this changes, and it won't, there's no reason to believe that your autonomous car will be any more reliable than any other [cutting|bleeding]-edge new IT product.

And that's profoundly not good enough.
posted by CHoldredge at 6:23 AM on May 25, 2018 [18 favorites]


On the contrary, Metafilter often does tech well because we understand the underlying motivation behind much of tech is the relentless drive to produce a 10x return on venture capital invested, not to improve lives or solve problems. Uber exists to provide a multi-billion-dollar exit for their VCs. In order to do that, they decided to arbitrage the equity people have in their own cars, undercut the established taxi business, and drive them out of business, leaving Uber with a monopoly. They're the worst kind of rentseeker, saddling their drivers with the costs and attempting to saddle them with the liability for accidents. They entered into the autonomous vehicle space by stealing data and trade secrets from Waymo. We should absolutely view all of their efforts with a jaundiced eye, because they've shown us who they are again and again.
posted by Existential Dread at 6:24 AM on May 25, 2018 [27 favorites]


For what it's worth, there was also some reporting after the Arizona crash suggesting that, true to its image, Uber's approach to autonomous car technology has been unusually reckless:

https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html
"Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona, according to 100 pages of company documents obtained by The New York Times and two people familiar with the company’s operations in the Phoenix area but not permitted to speak publicly about it."

https://arstechnica.com/cars/2018/03/video-suggests-huge-problems-with-ubers-driverless-car-program/
"Uber is widely seen as a technology laggard among industry insiders, yet Uber recently placed an order for 24,000 Volvo cars that will be modified for driverless operation. Delivery is scheduled to start next year.

This rollout plan is more ambitious than anything industry leader Waymo has announced and more ambitious than any of Uber's other competitors have announced with the possible exception of GM's Cruise. It puts a lot of financial pressure on Uber—and Uber's driverless car engineers—to have their software and sensors ready in time. It will be a financial disaster for Uber if the company has to take delivery of thousands of vehicles that it can't use because its software isn't ready."
posted by floppyroofing at 6:45 AM on May 25, 2018 [4 favorites]


I appear to work right near where a number of self-driving startups must be headquartered. I see them driving around all the time.

Now Google has been doing this for many years with lots and lots of cars. They even had these little pods that had no steering wheel or brakes! (They only went up to 25mph). They also have billions of simulated driving hours.

I see them on the road all the time. I would trust the Google cars over most human drivers in any situation, even unusual, unexpected situations.

These startups, though, I won't even get on the road until they are long out of sight. I take walks around work, and if I see one nearby, I'll end my walk.

We have required driving tests and licensing for humans--why don't these exist for self-driving cars? It seems any idiot with VC money can put one on the road, and Uber is one of the biggest idiot companies.


In the area around my wife's job some of the black university students crossing the road at 4 way stop intersections would not start to cross until a white person was also crossing.

Now I am pretty anti-car, having not driven for more than 20 years, and I often think of drivers as being like blind people swinging baseball bats in a crowded shopping mall, completely oblivious to the chaos they cause. But to these kids all cars were clearly a real and active threat. They opted out of trying to determine which ones were driven by racists who would try to kill them and just assumed they all were.

Even without racism, one of the most important and earliest things that parents teach their children is to fear cars. I've heard parents say that they would never hit their children except to enforce 'stay away from the road' rules.

Cars are terror.

Arguments that automated cars are less terrible might be correct if the tech gets done right but it doesn't change the fact that cars are still terror. At the heart of every trolley dilemma is the unexamined assumption that the trolley must move and somebody must die. There is no option to abandon the trolley.
posted by srboisvert at 7:35 AM on May 25, 2018 [22 favorites]


I also think everyone needs to dial it back a bit. So maybe Uber isn't the best at developing self-driving technology - maybe they are the worst. But they are still good enough that the person whose job it is to monitor the car wasn't doing so. I mean, you either have to assume that this person is a serious risk-taker, or you have to surmise that the car for the most part drives pretty well and doesn't run over people regularly. The same day Uber killed this one person, approximately 16 other people also died and 100 were injured by being run over. And those numbers are actually artificially reduced in the US because simply walking in most city and suburban environments is so incredibly hostile.

If Uber should get the death penalty, then a lot of other people deserve it too. If Uber is being reckless, then so are your local highway department and your local city council, and they all deserve to be disbanded too.
posted by The_Vegetables at 7:40 AM on May 25, 2018 [1 favorite]


There are two major pieces to the question of autonomous vehicles, and teasing them apart is something that the automakers aren't great at, and Uber is absolutely terrible at. Basically, for any form of autonomy aside from L5 (which is what Waymo is working on; I usually describe it as "driver? what driver?" autonomy), the largest problem probably isn't making the automation itself work. It's making it work with the human in the driver's seat.

Yes, there have been some notable failures of automation (e.g., the various Tesla Autopilot collisions in the last couple of years), but they're (1) comparatively rare, and (2) often injurious / fatal because the driver is using the system in a way it isn't designed to be used. As I mentioned earlier, Tesla's system is - on paper - a L2 system, which requires the driver to remain attentive and to keep their eyes on the road. Drivers have a nasty habit of assuming that automation is more capable than it is (grumble, don't name it "Autopilot", grumble), but it's also staggeringly reckless of Tesla to not monitor the driver and to just trust them to do what the engineers think they should do. Other automakers (e.g., GM) have put active eyetracking in vehicles to keep tabs on the driver when they use systems like this, and while that's not a perfect solution, it's damn better than what Tesla did, particularly when they first released Autopilot.
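For the curious, the kind of thing an in-cabin camera buys you is an escalation loop keyed to eyes-off-road time. A toy sketch (the thresholds and responses here are made up for illustration; they are not GM's or anyone else's actual tuning):

```python
# Toy driver-monitoring escalation: the general shape of what an eye-tracking
# camera enables in an L2 system. Thresholds are invented for illustration;
# real systems tune them carefully and add context (speed, road type, etc.).

def monitor_action(eyes_off_road_s):
    if eyes_off_road_s < 3.0:
        return "ok"
    if eyes_off_road_s < 6.0:
        return "visual warning"                      # light bar / icon
    if eyes_off_road_s < 9.0:
        return "audible warning + seat haptics"
    return "hands-on demand, then slow to a stop with hazards"

for t in (1.5, 4.0, 7.0, 12.0):
    print(f"{t:>5.1f}s eyes off road -> {monitor_action(t)}")
```

Crude as it is, even that loop gives the system a way to notice the exact failure at issue in Tempe: a driver who has stopped watching the road.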

The big problem in L2-L4 automation is that there's an expectation that the driver will take control when alerted, and we don't really know what this handoff process looks like. It's bad enough in a L0/L1 vehicle where drivers have a habit of taking their eyes off the road to do other things (e.g., checking their email, texting), but at least at lower levels of automation, there's some expectation on the driver's side of things that they need to pay attention to what's going on around them. In a higher-level autonomous vehicle, while people say they aren't willing to trust the vehicle, their behavior often says otherwise. They're nervous at first, and then they start trusting the machine, without really understanding what its limits are.

To go back to the question of handoffs (which are mostly a L2/L3 phenomenon; the idea being that as the automation gets more capable, handoff events become progressively less frequent, which is its own prevalence problem), what the vehicle is expecting is that the driver, who has (probably) been doing something else while the vehicle drives, will look at the road, orient, and reestablish control in a safe manner, bearing in mind that they probably don't have much of a clue of what's going on until they look at the forward roadway. That's an interesting assumption, and it's not necessarily a bad one (we can perceive what's going on around us very quickly), but it's something that the autonomous driving engineers need to be much more aware of than they are. It's bad enough to contemplate a handoff if the driver is daydreaming / emailing / texting / talking on the phone... but what if they decide to take a nap? What if they're incapacitated from alcohol / drugs / stroke / heart failure? What if they haven't had to take over from their vehicle in weeks (the deskilling problem)?

There's a lot of thought going in to working around these problems, but a lot of it (from my perspective) is coming from the world of human factors, rather than basic science, so what we're getting is models of how drivers get information about the world that are wildly at odds with what we actually understand about, for instance, how drivers acquire, represent and use visual information. Essentially, what I do as a postdoc is whacking on the deep problems in human perception in this space, and (often) finding that no one has done the basic science rigorously. They've often started with the right questions, but tested them with the wrong methods, leading to plausible but invalid data, leading to models and theories that are head-scratchingly bad. It's a really bad idea to start theorizing about high-level consequences of representing visual information when you don't know how that process works, or even have any clue about it. But that's a different comment.
posted by Making You Bored For Science at 7:46 AM on May 25, 2018 [14 favorites]


If Uber should get the death penalty, then a lot of other people deserve it too. If Uber is being reckless, then so is your local highway department, your local city council and they all deserve to be disbanded too.

I'm largely on board with that. The current DOT employees can apply for their old jobs, and be rehired just as soon as they can show they plan to take the interests of people who are not currently in a car seriously. Probably they should be required to use other forms of transportation as part of their job responsibilities, regularly. I think this is true for every state in the US that renamed its highway department to [state] department of transportation without changing anything else.
posted by asperity at 7:58 AM on May 25, 2018 [7 favorites]


I work in tech and love my job and I'm still seriously down on Uber's autonomous vehicle program.

I also work in tech, and that's why I'm down on all of these autonomous vehicle programs. The concept is sound, but the specific corporate environments creating them are utterly devoid of sound engineering ethics. If Boeing were doing autonomous vehicles, maybe I'd be a little more bullish about the tech. But since it's Google and Tesla and fucking Uber -- a company whose entire business model is based on blatantly violating regulations for livery vehicles -- instead, I don't trust a goddamned thing.
posted by tobascodagama at 8:03 AM on May 25, 2018 [18 favorites]


If Boeing were doing autonomous vehicles, maybe I'd be a little more bullish about the tech.

This is exactly right. Consumer grade autonomous vehicles need to be made to avionics grade standards, and none of the entities currently in that area are capable of working to those standards, AFAICT.
posted by PMdixon at 8:07 AM on May 25, 2018 [7 favorites]


I would be thrilled if autonomous vehicles were being made to avionics standards. Interestingly, a lot of the classic ideas in driver behavior (e.g., Situation Awareness; Endsley 1988, 1995) are really adaptations of aviation theory. This (mostly) isn't great, because the in-air environment (and even the taxi/takeoff environment) is wildly different from the road environment, but it's the framework. As a general rule, understanding pilot behavior is considerably more advanced than understanding driver behavior, but in a lot of ways, pilot behavior is simpler, particularly compared to non-commercial driver behavior.
posted by Making You Bored For Science at 8:19 AM on May 25, 2018 [9 favorites]


There were once ways to create software that didn't fail: NASA did amazing things for the space shuttle.

The way that we do commercial software in 2018 is just not how you should design, build and test control systems that can cause instant death on failure. These systems should be developed in the way that we develop fighter jet control systems or nuclear power plant safety systems. It's not like building a fucking Angular website. I design and write tests for enterprise software, and I fully expect that bugs will get past me because of the fast and loose way we work in this industry today, but fortunately, I don't work on anything that can kill an innocent bystander if it fails.
posted by octothorpe at 8:29 AM on May 25, 2018 [14 favorites]


The move fast and break things rot has also spread to Boeing. Look at the MBA-driven debacle of the 787 development: “Let’s make our numbers look better by outsourcing all of the engineering”, indeed.
posted by monotreme at 9:04 AM on May 25, 2018 [3 favorites]


It's tech and tech is becoming something Metafilter doesn't do well.

My, has Metafilter changed, if that's true ...
posted by yarly at 9:08 AM on May 25, 2018 [1 favorite]


Essentially, what I do as a postdoc is whacking on the deep problems in human perception in this space, and (often) finding that no one has done the basic science rigorously.

That disappoints but doesn't surprise me. It seems like every control system solution where people are out of the loop, but monitoring, is based on completely unrealistic assumptions about how skillfully the operator can intervene. I'd have thought that Air France 447 (previously) would have been a huge red flag to everyone planning that kind of solution. And airline pilots have minutes to react to most emergencies in cruise. Drivers are lucky to get two seconds.

Background: I used to help run a highly automated IT Ops desk. Not life or death, but "miss this, and you'd better have a good excuse to keep your job. No, 'my attention was elsewhere' is not good enough." After a few 12 hour shifts where no action is needed, nobody I ever met can actually do that job well. I've literally watched operators stare right at the screen with the flashing red alert for minutes on end, before realizing "shit! that one's mine?"

There were work-arounds for our situation, but the underlying lesson sure seemed to be that monitoring almost-perfect automation for the one, rare case where it goes wrong is something human brains are inherently bad at: Once some subconscious part of you gets enough training that the system works, your attention will wander, even if consciously you know that you're eventually going to have to take action fast and well.
posted by CHoldredge at 9:15 AM on May 25, 2018 [13 favorites]


The move fast and break things rot has also spread to Boeing.

Yeah, that's why I had to qualify my statement with a "maybe". But I'll still trust the aerospace industry, which at least had a sense of engineering ethics once upon a time, over the software industry, which ever since breaking free of academia and the military has laughed at the very concept of ethical behaviour.
posted by tobascodagama at 9:25 AM on May 25, 2018 [3 favorites]


The permissive environment for autonomous vehicles in AZ is significantly looser than in any other state, see here. There's a reason Uber was in Tempe.

I don't think blaming the national administration has any use; here, it's the specific state regulators that should be blamed for not adequately regulating Uber and the other companies in this space. Yet another reason to loathe Doug Ducey.
posted by nat at 9:39 AM on May 25, 2018 [1 favorite]


The negative reaction by many (here and elsewhere, including myself) is not to autonomous driving per se, but to what a lot of us probably see as a premature promotion of autonomous cars as right around the corner. There is a silicon valley mindset that (a) only breakthrough technology is cool and (b) you can (and must) have a product launched and widely used in, like, a year, and (c) if you just hire what seem to be the smartest and most productive software developers you can find, it will be easy and fast.

All the self-driving car work current companies are doing, and their employees, come straight out of the academic robotics and engineering labs, based on work that has only really been developed in the past few decades. Neither of these cultures (Silicon Valley and academia) is, in general, as experienced in reliability and safety as it is in demonstrating big leaps in capability: demos that are impressive but only have to work in a few tests to show that the capability is possible, not whole systems that are reliable and can be reliably updated and iterated without regressions. (This is generally true, not specific to autonomous driving.) Of course that doesn't mean they can't produce stable, reliable software; any startup that ends up with a success has to fold that into its development process eventually. But when part of your R&D process is rushing stuff out to the public, or onto the public roads, errors have very different implications.

Other industries (the traditional automation and car industries) are by nature going to move slower and more conservatively than Silicon Valley software companies. Google/Waymo got started much earlier than the others, and it did not rush onto the public roads the way some others did, so it is a little bit more mature and stable.

The biggest problems have been Tesla, which markets semi-autonomy under the name "Autopilot," suggesting full autonomy (despite the fact that their current semi-autonomous driving features tend to work very well), and now Uber.
posted by thefool at 9:46 AM on May 25, 2018 [6 favorites]


I'm 100% pro-self-driving-cars and also very anti-Uber. I think having any hypergrowth driven company choosing when to deploy autonomous cars is very dangerous. But a company that has continually shown contempt for regulations, founded by someone as awful as Travis Kalanick, and with an autonomous car division that was led by someone as amoral as Anthony Levandowski... It's a setup for disaster.

(Also want to register disgust that the NTSB press release chose to blame the pedestrian up front. A bunch of news articles followed that lead. The first paragraphs are about the dead woman, her clothes, and her toxicology. The actual report is more responsible, it does mention those contributory factors but mostly talks about the role of the killing robot. Whoever did the PR seems to be carrying water for Uber.)
posted by Nelson at 9:47 AM on May 25, 2018 [6 favorites]


The NHTSA or something similar really needs to be looking out for the public's interests in safety (both during tests on public roads *and* in the engineering and design behind autonomous cars). I think they are a bit behind the ball on that, at least AFAIK.
posted by thefool at 9:48 AM on May 25, 2018


Making You Bored For Science: "The big problem in L2-L4 automation is that there's an expectation that the driver will take control when alerted."

Highly trained airline pilots have trouble with this occasionally, with disastrous consequences (eg AF447).
posted by thefool at 9:52 AM on May 25, 2018 [5 favorites]


If Boeing were doing autonomous vehicles, maybe I'd be a little more bullish about the tech. But since it's Google and Tesla and fucking Uber

It's also Cruise, which is (now) General Motors. I have a little bit more faith in them understanding the consequences of poor engineering, considering they're coming into it from a background where Ford Pinto and Firestone Tires are held up as examples of what not to do. Recalls are more expensive than over-engineering -- you can't stay in business if you ship a Minimally Viable Truck and issue 50 iterative patches over the next 52 weeks. For GM, this is an easy truth to accept. For Uber? Well, they might just try it, even if revision 0.98p31 somehow fails to "see" elk on the road.

Cruise also puts two employees in the front seats of its testing cars (usually built on Chevy Bolts).
posted by toxic at 10:14 AM on May 25, 2018 [2 favorites]


"The big problem in L2-L4 automation is that there's an expectation that the driver will take control when alerted."

This expectation would be easier to meet if the alert were given in a way that can be perceived by human senses.

But otherwise... yeah, there's no such thing as "we'll send you an alert when you need to pay attention." Either you need to watch the road or you don't; an expectation that people will look up from their reading or texting or playing video games in time to react to something the sensors failed to catch in time is just going to result in more deaths.
posted by ErisLordFreedom at 10:17 AM on May 25, 2018 [8 favorites]


thefool makes an excellent point: human takeover from automated systems isn't perfect, and I certainly don't claim it will be. The point I want to make is that, unless we understand the limitations on the perceptual processes which underlie such handoffs, we probably shouldn't be asking for them.

ErisLordFreedom hit it pretty much dead-on: yep, the alert(s) need to be useful, and even then, they're no guarantee that the driver will do what the engineer wants them to. It's this problem that scares the shit out of the more responsible automakers working on autonomous systems: why trust the human, when they're really good at finding interesting ways to screw up?

Jumping back up a bit; CHoldredge's description of a truly skull-crushingly boring job is one of the big looming problems in this space. Essentially, what we're asking for is vigilance, and that's a hugely difficult problem for humans. We're bad at detecting infrequent events, and we're really, really bad at detecting infrequent events if we've been sitting there for hours looking for them. Ever wonder why the TSA screeners are cycled off the X-ray bag scanner machines every fifteen minutes? To minimize the impact of vigilance decrement, and even then, it's been standard practice for a decade to digitally insert additional false positive targets in the image stream to keep them at an acceptable level of performance.
posted by Making You Bored For Science at 10:49 AM on May 25, 2018 [7 favorites]
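
(A minimal sketch of the target-insertion idea described above, in Python, with invented rates and hypothetical function names; it is an illustration of the concept, not a description of any real screening system. The point is that genuine threats are far too rare to keep an operator calibrated or to measure their performance, so synthetic targets are mixed into the stream and the operator is scored on those instead.)

    import random

    # Toy model of injecting synthetic targets into a mostly-benign image stream.
    # Rates below are invented for illustration, not real screening parameters.
    REAL_THREAT_RATE = 1e-6       # real events: far too rare to sustain vigilance
    SYNTHETIC_TARGET_RATE = 0.02  # injected targets: frequent enough to measure

    def next_image():
        """Return (contains_target, is_synthetic) for the next image shown."""
        if random.random() < SYNTHETIC_TARGET_RATE:
            return True, True                      # projected synthetic target
        return random.random() < REAL_THREAT_RATE, False

    def synthetic_hit_rate(responses, images):
        """Score the operator only on synthetic targets, since real ones are
        too rare to estimate a hit rate from."""
        scored = [hit for hit, (target, synth) in zip(responses, images) if synth]
        return sum(scored) / len(scored) if scored else None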


Compared to human drivers, autonomous vehicles have everything going for them. They can see in all directions at once. They can see in foggy and dark conditions. They aren't blinded by glare. They don't play with phones. They don't eat and drink while driving. They aren't distracted by passengers. They don't drive drunk, stoned, or sleep-deprived. They don't get road rage. They don't show off to their friends. They don't speed because they're late or impatient. They don't have Fast & Furious or race car driver fantasies. They don't spill coffee on their lap. THEY DON'T PURPOSELY DRIVE INTO CROWDS IN A MURDEROUS RAGE, TARGETING WOMEN BECAUSE THEY'RE FRUSTRATED 'INCELS'.

They are already safer than average human drivers in every way. If we took the humans off the road and had 100% driverless vehicles, they would be safer still (it's reacting to unpredictable human drivers that's the biggest operational issue).

But opponents won't ever be happy with better than humans. They won't be happy with 10x, 100x, or 1000x better than humans. They will call anything less than zero fatalities a failure and continue to ignore the carnage that's happening every day right now.

Autonomous driving is not a bad concept. It's not bad programming or bad hardware or a lack of concern for safety on the part of developers. The problems only start when you get to the non-technical decision makers.

Change the way we're monetizing it, for sure. Do more off-road testing for sure. Take profit motives out of the equation. But for the sake of saving lives, don't stop developing this technology.
posted by rocket88 at 10:56 AM on May 25, 2018 [3 favorites]


They are already safer than average human drivers in every way.

Could you back that claim up with evidence? I agree that they should be, eventually, but I don't think there's any data to suggest the technology as deployed already is. It's hard to talk about statistics with such a tiny and irregular sample. But that 1 fatality tipped the scales of "deaths per million miles" very, very far.
posted by Nelson at 11:08 AM on May 25, 2018 [11 favorites]


That 1 fatality was not the fault of the "driver". Someone else effectively cut the brake lines by disabling a critical safety feature.
posted by rocket88 at 11:12 AM on May 25, 2018 [3 favorites]


Autonomous driving is not a bad concept, It's not bad programming or bad hardware or a lack of concern for safety on the part of developers.

Citation needed. I'll grant it's not a lack of concern for safety on the part of the software teams, but "the developers" includes "the people throwing money at this instead of into projects that might actually improve infrastructure."

(And, "safer than human drivers" is also not proven. "Safer than human drivers when limited to specific driving conditions, which were selected for their ease of safety," sure. But there's no evidence that they'd be safer than human drivers under standard road conditions--and especially not for people who aren't inside cars.)

If we took the humans off the road and had 100% driverless vehicles...

Who do you think is paying for those vehicles? This is a rich man's fantasy in which nobody is driving 28-year-old vehicles because they can't afford anything newer. One of the real issues in the driverless-car push is the expected incoming claims that "they're safer than people-driven cars, so those who can't afford them, shouldn't be on the roads at all."

I'm a Batman fan; I love the robot car idea. But I am not fond of the current handwaving around "eh, someday we'll learn to identify pedestrians and bikes and strollers, but first we gotta get on the roads a lot more."
posted by ErisLordFreedom at 11:13 AM on May 25, 2018 [4 favorites]


The last estimate I heard for the transition period to a fully autonomous non-commercial fleet in the US (and this was a couple years back, but the underlying logic is still sound), from the point at which L4/L5 autonomous vehicles are available, is 20-30 years. This was an estimate straight from the then-head of NHTSA, so from the horse's mouth, as it were.

Even if we can go *poof* and make L5 autonomous vehicles available tomorrow at the price of a midrange sedan (~30k), that's what we're looking at as a transition period. No one I've ever talked to in this area has thought we'll avoid this, because of the size and longevity of the existing road fleet. We might accelerate this once real L4 or L5 vehicles become available with a suitable safety record (mostly, the expectation is that insurance companies will view human drivers, compared to a level of automation which doesn't currently exist, as unacceptably dangerous), but that's still 5-10 years out at a minimum.
posted by Making You Bored For Science at 11:23 AM on May 25, 2018 [3 favorites]


Citation needed.

Sure, let's get citations for each and every anti-autonomous related claim upthread, and let me know when we get to mine.
posted by rocket88 at 11:26 AM on May 25, 2018 [2 favorites]


Compared to human drivers, autonomous vehicles have everything going for them. They can see in all directions at once. They can see in foggy and dark conditions. They aren't blinded by glare.

Their eyes are fantastic. Their brains are not.
posted by hwyengr at 11:30 AM on May 25, 2018 [8 favorites]


the expectation is that insurance companies will view human drivers, compared to a level of automation which doesn't currently exist, as unacceptably dangerous

Translation: poor people will be shoved off the roads, no matter how good their personal driving skills are.

I'd be a lot more sanguine about the robot-car movement if the test vehicles were buses.
posted by ErisLordFreedom at 11:33 AM on May 25, 2018 [6 favorites]


That 1 fatality was not the fault of the "driver".

No, of course not. The fatality was the fault of Uber's autonomous car. And the variety of Uber employees' bad decisions that went into the design of that car, including the insane idea that the best thing to do when an emergency stop is needed is to flash a warning message on a screen and hope the human being has fast reflexes.

But that doesn't matter to my question. You asserted that autonomous vehicles are "already safer than average human drivers". Do you have any data for that claim? The question is of significant interest to everyone: here, the industry, the NTSB. I think it's wrong.

FWIW, in the US there's about 1 fatality every 100 million miles driven. Uber's autonomous cars hit 2 million miles in Dec 2017. Being very charitable and guessing they did double that before they killed Elaine Herzberg, that's still 25 times the fatality rate of human drivers. That doesn't sound "already safer" to me. (And yes a sample of 1 does not establish an average rate. There's not enough data. Which is another reason to be very, very worried about how aggressive they're being in taking this thinly tested stuff to public streets.)
posted by Nelson at 11:34 AM on May 25, 2018 [7 favorites]
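
(For anyone who wants to check the arithmetic, here is the same back-of-envelope comparison spelled out in Python; the Uber mileage is the charitable guess from the comment above, not an exact figure, and with a single fatality the estimate carries enormous uncertainty either way.)

    # Back-of-envelope fatality-rate comparison. The Uber mileage is the
    # charitable ~4 million mile guess from the comment above, not a real figure.
    human_rate = 1 / 100_000_000      # ~1 fatality per 100 million miles in the US
    uber_miles = 4_000_000
    uber_fatalities = 1

    uber_rate = uber_fatalities / uber_miles
    print(uber_rate / human_rate)     # 25.0 -- roughly 25x the human-driver rate

    # With n = 1 the confidence interval is huge, which is the real point:
    # a few million miles is nowhere near enough data to call the technology
    # "already safer".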


I read a lot of the research around autonomous vehicles for work - particularly from academia (as published by IEEE and SAE). It's interesting to see the leap (or sadly not) from some of the approaches academic groups are taking to the deployment by mobility companies. When I saw the NTSB report and the 6 seconds it took to classify the "object" as a pedestrian, I was kind of shocked at how far behind that seemed to be. Lots of people are looking at ways to classify vehicles, pedestrians, traffic signs, and other features of the streetscape through a variety of methods (though most of it seems to hinge on machine learning and LIDAR or video imaging data). That said - there's still a long way to go in terms of the robustness of these detection and classification systems under a wide variety of conditions - nighttime, rain, fog, glare, etc. They have the potential to be better than humans but are not there yet.

And I think this whole thing (and much of the discussion here and other places) reflects the dissonance between the research going into AVs, the technology, and the hype machine. It's good and bad that Uber has been on the bleeding edge because they are forcing these difficult conversations to happen with a wider range of people other than transportation researchers and policy wonks, but they also poisoned the well by all of their bluster and being total assholes as a company. (See also Tesla.)

That's also one of the problems with projecting the fleet deployment and turnover. 1 - the technology is rapidly evolving, but 2 - regulations need to be in place for L4 and L5 to be commercially viable (which is also hard because there are so many jurisdictions at play). 3 - It's not clear what the final model will be, and the idea of Mobility as a Service (MaaS) or Mobility on Demand (MoD - which I think is the USDOT term) is being touted for urban areas, but that will require a pretty big philosophical jump for a lot of people. The conservative projections of something similar to the current system seem likely, even if they will probably lead to a dystopian nightmare.

Citations needed.
Do you want academic research, the new PR-like white paper research, or popular pieces from places like The Atlantic and Medium? (I often find the call for citations in places like this to be an appeal to authority.)
posted by kendrak at 11:34 AM on May 25, 2018 [6 favorites]
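
(To make the classification concern concrete, here is a toy Python sketch of one plausible failure mode, assumed rather than established: if the tracker throws away an object's history every time its label changes, a slowly converging classifier leaves the planner with almost no trajectory from which to predict a path. The types and numbers are invented for illustration and say nothing about Uber's actual pipeline.)

    from dataclasses import dataclass

    @dataclass
    class Track:
        label: str
        history: list          # positions observed since the label last changed

    def update(track, new_label, position):
        if new_label != track.label:
            # Assumed failure mode: relabelling discards accumulated history,
            # so path prediction restarts from a single point.
            return Track(label=new_label, history=[position])
        track.history.append(position)
        return track

    # Five observations of the same object, but the label keeps changing.
    track = Track(label="unknown", history=[(0.0, 60.0)])
    observations = [("unknown", (0.1, 55.0)), ("vehicle", (0.2, 50.0)),
                    ("bicycle", (0.3, 45.0)), ("bicycle", (0.4, 40.0))]
    for label, pos in observations:
        track = update(track, label, pos)
    print(len(track.history))   # 2 -- only two usable points out of five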


Hmm, building on my last comment, maybe they should be connecting the LIDAR sensor output to goggle headsets in the cars so that the superior optics can be hooked up to the superior spatial processor. Plus you can't look down at your phone while wearing them.
posted by hwyengr at 11:40 AM on May 25, 2018 [2 favorites]


As far as I know, ErisLordFreedom, there's not a whole lot of work going into autonomous buses at the moment, although there is work going into autonomous trucking. Buses are a really interesting problem from a safety standpoint, because, for a human operator, their blind spots are gigantic as a consequence of their design (in the US, crossing on the left-hand side of the bus means you're out of view for considerable portions of what you'd think the driver could see, because of the pillar structure around the windscreen; yes, that's why they have the sets of mirrors, but their coverage isn't perfect). Thinking about this from the point of view of the local public transit authorities, I don't see the incentive for them to push for autonomous public transit buses, unless they also push for either totally free public transit or electronic payment only. Or, we can keep the model we have now (cash and electronic payment) and a human operator to help those who need it.

Reading kendrak's comment, I agree entirely. The computer vision systems which are the foundation for autonomous vehicles are fundamentally brittle (as in, they break in weird and unpredictable ways). In theory, one can train these systems with a sufficiently huge dataset to make them less brittle, but recognizing objects in the world is hard, given the staggering visual diversity that exists outside the lab.

On the topic of the existing and evolving standards in this space (e.g., the SAE levels of autonomy I've been talking about, but also IEEE's documents on related points), I'm not a huge fan of the basic science grounding evident in either. Unsurprisingly, both SAE and IEEE put out documents written by engineers for engineers (or for human factors researchers), which are usually grounded in the human factors literature, rather than the basic science and theory literature. The consequences are interesting: the human factors take on the human problems in this space is usually superficially studied and insufficiently interrogated, because the field of human factors is focused on solving applied problems in specified settings, rather than understanding the mechanisms. The problem (from where I sit, as a basic scientist slamming his head against this) is that without a real understanding of the basic mechanisms, the applied solutions are only modestly useful. Understand what the driver needs to know, and then you can understand why they do what they do.
posted by Making You Bored For Science at 12:10 PM on May 25, 2018 [3 favorites]


As far as I know, ErisLordFreedom, there's not a whole lot of work going in to autonomous buses at the moment, although there is work going into autonomous trucking.

There are quite a few driverless bus systems in the US, and many more going live in the next year.
posted by The_Vegetables at 12:35 PM on May 25, 2018 [1 favorite]


Buses are a really interesting problem from a safety standpoint, because, for a human operator, their blind spots are gigantic as a consequence of their design ... Or, we can keep the model we have now (cash and electronic payment) and a human operator to help those who need it.

Buses actually seem like a good candidate for L2 systems. Automate the basics, have a human on hand to handle passenger issues in the cabin as well as emergent road conditions or maintenance problems that the automated driving system can't reasonably be expected to cope with. Bus drivers already need specific training above and beyond what ordinary non-commercial drivers get, so a lot of the problems with rolling out L2 systems to the general public can be addressed through that training. It seems like a very nicely analogous situation to the one with commercial airliners.

But of course our Venture Capitalist Overlords think buses are icky and prefer transit solutions that don't require interacting with the hoi polloi -- see also: Hyperloop -- so of course all the money is going into low-occupancy vehicles instead.
posted by tobascodagama at 12:36 PM on May 25, 2018 [1 favorite]


The_Vegetables, as in full-on, mixed-roadway autonomous buses? Like direct, drop-in replacements for city buses? If so, I'm thrilled to hear it. I know about a couple cases of autonomous, restricted-roadway bus-like things, but I sort of think of them as the slightly more advanced version of the autonomous people transporters you see at large airports. I'm much more focused on non-commercial driver behavior (behavior in commercial driving situations is pretty much its own world, and even weirder than what I deal with), so it's also entirely possible that this exists and I just don't know about it.

I need to think about tobascodagama's idea of L2 for public transit buses. Something about it makes me a little twitchy, but I'm not sure what. Then again, if you can build an L2 system that'll work in the environments where buses are usually deployed, it'll be a much more robust system. I wonder if what you really want for public transit buses isn't SAE levels of automation, but more in the way of really good driver assistance; cover the blind spots, monitor the environment with senses (e.g., LIDAR) that the human operator can't match, but work in concert with the operator to improve safety for everyone. I know of at least one automaker that is aiming for assistive systems as well as leveled automation, and that seems promising.
posted by Making You Bored For Science at 12:47 PM on May 25, 2018 [1 favorite]


Robot trains. Trains trains trains. Cars are inefficient and deadly and ridiculous, and the infrastructure supporting them is hostile to humans. They are a dumb way to move people. Trains are efficient, require far less maintenance, are far, far safer, and automating them is relatively painless.

If we threw a hundredth of what they're blowing on this techbro fantasy at Amtrak, maybe transit in the US could catch up to where it should have been thirty years ago.

And Elon Musk is a libertarian poo-weasel.
posted by aspersioncast at 12:52 PM on May 25, 2018 [11 favorites]


All the developers would have inspired a lot more confidence if we'd seen them logging millions of real-world miles with the full sensor suite running, but a human driver in full control. Gather enough data to be reasonably confident you've solved the "six seconds to recognize a person walking a bike" and "mistook a semi for a traffic sign" problems in the lab, before testing them in parallel with every other component in a safety-critical situation.

Sure, eventually these systems will need to be tested in the real world. But the fact that these problems are cropping up after only a few million miles of testing suggests that they weren't nearly ready to be engaging in the high-risk tests they are.


The obvious lack of the data-gathering which could have driven a safer testing approach suggests that they either didn't understand, or didn't care, how badly humans would do at backstopping their auto-driver's faults. I hope it's the former. I fear that the real analysis was closer to "We put a human in the chair. We told them they have to fix it when the computer goes wrong. If they can't, it's their problem, not ours"

Horrifically, I think they might turn out to be right.
posted by CHoldredge at 1:31 PM on May 25, 2018 [6 favorites]


I'm late to the party, but have a couple thoughts.

First, unless Uber is extremely callous (which I don't discount), it seems like a number of their issues stem from making two opposing decisions. The first decision is the one that's being reported on now: that all of the emergency systems in the vehicle had been disabled, leaving emergencies in the hands of human drivers. The other decision is to have just a single safety driver in the vehicle. Jalopnik did some reporting, and Uber appears to be unique among self-driving car programs in having just a single safety driver on an experimental platform. ("Waymo told Jalopnik it only uses a single driver in cases when the vehicle being driven uses validated hardware and software. When it comes to new test vehicles testing out new hardware or software, drivers, in new cities or road types, Waymo said its policy is to have two test drivers in the vehicle.") Details on the single driver decision are here.

As to why those two decisions might have been made, it appears that the engineers at Uber may have been trying to rig the system to make their vehicles appear further along in development than they actually were at the time the fatal crash occurred. As the NY Times reported in March, engineers felt "pressure to live up to a goal to offer a driverless car service by the end of the year and to impress top executives. Dara Khosrowshahi, Uber’s chief executive, was expected to visit Arizona in April, and leaders of the company’s development group in the Phoenix area wanted to give him a glitch-free ride in an autonomous car." It's hard not to think that some of the systems that were potentially identifying false positives were switched off because of this, especially since it's been reported that Khosrowshahi had given serious thought to shutting down Uber's self-driving car program in the past.

Finally, one thing that strikes me about all of this is that most of the coverage treats the logic in the car as having a single decision to make: to brake or not to brake. But when I think about what I might do if I were driving my car at night and thought I saw something in the roadway ahead, hard braking isn't the first thing that comes to mind. I would probably take my foot off the gas and maybe do some light braking just to buy myself some more time to figure out what I was seeing. It strikes me as extremely worrying that Uber's system seems to operate under the premise that it will just follow the line and speed it's on unless it needs to take emergency action. That's very unlike human drivers, who subtly react to the environment around them on a pretty frequent basis. I don't know how to interpret this other than to think that Uber's software is at best pretty rudimentary, and it makes me wonder why it was on the road. It clearly should not have been.
posted by HiddenInput at 1:50 PM on May 25, 2018 [10 favorites]
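
(A minimal Python sketch of the graduated-response idea HiddenInput describes, with invented thresholds and deceleration values; it illustrates the concept, not how any real planner works. Instead of a single brake/don't-brake decision, the commanded deceleration scales with confidence and time to collision, so the vehicle is already shedding speed while the classification is still uncertain.)

    MAX_DECEL = 7.0    # m/s^2, roughly full braking on dry pavement (assumed)

    def planned_decel(obstacle_confidence, time_to_collision_s):
        """Graduated response: coast or brake lightly when unsure, brake hard
        when an obstacle is both likely and close. All numbers are invented."""
        if obstacle_confidence < 0.2 or time_to_collision_s > 6.0:
            return 0.0                 # nothing actionable yet
        if obstacle_confidence < 0.6:
            return 1.5                 # lift off the throttle, light braking
        urgency = min(1.0, 2.0 / max(time_to_collision_s, 0.1))
        return min(MAX_DECEL, MAX_DECEL * obstacle_confidence * urgency)

    # A binary planner would do nothing at confidence 0.5 and then demand an
    # emergency stop at 0.9; this one starts slowing down while still unsure.
    print(planned_decel(0.5, 4.0))     # 1.5
    print(planned_decel(0.9, 1.3))     # ~6.3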



About 5-10 years ago, if you had asked me, my prediction would have firmly placed the early adoption of autonomous vehicles in niche applications with more controlled or well-characterized environments: shuttle buses, transports in closed work sites, etc. But this was coming from my perspective in the world of industrial automation, where incremental automation of tasks from controlled to semi-controlled to open-ended is a slow, steady progression as the technology is developed, and individual uses and applications are understood and developed in a semi-custom way for each customer (factory). The Silicon Valley approach is completely different -- to break out a whole new category of consumer product that can be suddenly bought by everyone at once.

Making You Bored For Science: human takeover from automated systems isn't perfect, and I certainly don't claim it will be. The point I want to make is that, unless we understand the limitations on the perceptual processes which underlie such handoffs, we probably shouldn't be asking for them.

I agree, though I wouldn't say it's merely not perfect, but that this is an area that really deserves a huge amount of attention and research (so glad you are involved in just that!)

CHoldredge: Data gathering ... the full sensor suite running, but a human driver in full control.

Exactly what I was thinking. I don't know if Uber is also doing that. I would assume they are, but maybe this incident showed that they could have done more or done it more carefully?? Not sure. (Certainly Google did it, and their early versions [several years ago] relied heavily on extremely detailed maps and scans of the roads that the car would later drive on.)
posted by thefool at 2:17 PM on May 25, 2018 [3 favorites]


I can empathize with the desire to make the robot "not drive like crap"; I've been there exactly. And it's not always easy: getting (even defining) some desired balance between things like safety, desired speed, accuracy, and smooth motion is a bit complex. And I wouldn't be totally surprised to learn that there was some kind of management pressure to solve stuff like that quickly in the interest of overall progress.
posted by thefool at 2:22 PM on May 25, 2018 [1 favorite]


"Management pressure to solve stuff like that quickly" isn't a problem. "Management pressure to declare that these vehicles are road-ready" is the problem.

Attempts to solve, even rushed attempts, would look very different. What we're seeing is attempts to insist that there is no "real" problem, that the current bugs in the code will go away with a bit of tinkering, that this is an entirely safe and reasonable approach to transit.

And this is being said by the people who brought us "those aren't employees; they're contractors," and constant data-mining of our personal details, and coverups of assaults. I'm not confident about their ability to decide when a tech project is good for the surrounding community.
posted by ErisLordFreedom at 2:31 PM on May 25, 2018 [7 favorites]


My comment, re: tech & MeFi, wasn't in response to this incident or Uber--I fucking hate Uber, don't use them, and hate that it's made move-fast-and-break-laws an (even more) viable business strategy (NB: a strategy that exists outside of tech).

It was a response to Laotic's comment, which was made in response to flug's (and similar) comment(s), coupled with my generally increasing impression that in any even somewhat controversial tech-related post (so, e.g., not about Meltdown or new fab tech), the probability approaches one of essentially someone saying Vallis Silicis delenda est and/or shitting on all engineers/STEM workers.

This is already bordering on MetaTalk territory, so I'll leave it at that.
posted by MikeKD at 5:35 PM on May 25, 2018 [2 favorites]


I work for a company that is headquartered in Mountain View. My livelihood comes out of that place, and if it went away tomorrow I'd be in a pretty tough spot. Anyway, all of which is to say:

Vallis Silicis fucking delenda est.

If you disagree, you're probably a sociopath.
posted by tobascodagama at 5:42 PM on May 25, 2018 [4 favorites]


Jesus, will no one regulate these motherfuckers? Any California AG candidate who campaigns on regulating the tech industry will have my vote, donation, and door-knocking skills.

What's so sad is, regulating these capitalist ghouls is the literal minimum we should be doing. I mean, I didn't vote for the nightmare future of millions of autonomous cars circulating the streets while their wealthy owners dine at gourmet grilled cheese restaurants and our cities remain places for cars, not humans. I want pedestrian-only, dense, wheelchair-accessible, child- and senior-friendly green spaces and residential communities, connected by bikeways and zero-emission buses and trains, but I don't seem to get a vote on that.
posted by latkes at 11:24 PM on May 25, 2018 [4 favorites]


I don't understand why saying Silicon Valley must be destroyed, or what was actually said, is threatening, or evidence that Metafilter doesn't do tech well.
Surely there isn't any place in Mountain View so deep in the bubble that it thinks it is beyond criticism?
The quality of unreleased experimental software from Uber seems like very shaky ground on which to build a defense of the valley, which obviously has both positives and negatives.
posted by bystander at 4:09 AM on May 26, 2018


Surely there isn't any place in Mountain View so deep in the bubble that it thinks it is beyond criticism?
Oh God, this is like when someone else's child asks what happens after you die.

Uh. Definitely not. There are definitely not large pockets that have convinced themselves that anything they could imagine doing would be justified and that anyone who doesn't see that is a sheeple. Definitely not.
posted by PMdixon at 5:24 AM on May 26, 2018 [3 favorites]


It will be a financial disaster for Uber if the company has to take delivery of thousands of vehicles that it can't use because its software isn't ready.

I've worked in software, so I can tell you exactly how this will play out.

The engineering team will have been telling management for months that management's arbitrary deadlines have no hope at all of being met, and that this would still be the case even if the software were to require no testing whatsoever, which it obviously will.

The marketing team will have absolutely zero knowledge about how any of the software works or even what it actually does. But they will be aggressively promoting it all the same, and answering all questions about whether it will be ready with bland reassurances that the project is well ahead of schedule and in fact already very close to complete.

Management will require the engineering team to pull continuous all-nighters in order to ship something by the announced release date, regardless of whether it works properly or not, and any engineer who objects will be informed in no uncertain terms that they're free to leave at any time.

All of this is absolutely standard industry practice, and anybody who says it isn't works in Marketing.

Their eyes are fantastic. Their brains are not.

Don't let the pigeon drive the bus!
posted by flabdablet at 9:26 AM on May 26, 2018 [7 favorites]


What flabdablet said. I've worked on many projects where the release date was carved into stone and the software was going to be released no matter what state it was in. You can argue until you're blue in the face that a bug should be marked "stop ship," but if management is determined to ship, they're going to ship. "We'll fix it in a patch after release" or "We'll document that bug under 'known issues' in the release notes."
posted by octothorpe at 11:31 AM on May 26, 2018 [5 favorites]


As far as the "we need autonomous vehicles for safety" thinking goes, we already know how to make using roads safe, and it doesn't require any new technology. Vision zero design principles are well known and well tested in other countries. You just have to design the roads better, and stop making drivers feel safe going as fast as they want. We could do that on every road and street in the country starting now, and it'd take no longer than switching over to all autonomous vehicles. But even in dense cities we're struggling to get more than lip service to the idea.

Autonomous cars are about money and car culture; any safety improvements are a fringe benefit, and they don't even try to solve any of the other problems of car culture.
posted by vibratory manner of working at 12:20 PM on May 26, 2018 [4 favorites]


From last week's Economist: Toyota takes a winding road to autonomous vehicles. About alternate approaches to using AI and sensors in driving.
posted by Nelson at 1:04 PM on May 26, 2018 [1 favorite]


Toyota's approach is the right one. Drop a load of sensors on the vehicle and keep drivers from making mistakes when possible. Emergency braking, following too closely, lane drift, blind spot warnings, cyclist/door warnings, traction/braking control, speed limit warnings, turn signal alerts, A-pillar pedestrian blind spot warnings, driving too long without rest, etc, etc.

Yes, your car should nanny your sorry human ass into being safer on the road. Drop insurance rates accordingly. I don't see as how autonomous driving really does anything for traffic or "taxi" service that can't be solved with floating car share. We really need better transit and active transportation.
posted by seanmpuckett at 4:24 PM on May 26, 2018 [7 favorites]


So, the Toyota approach is almost certainly the right one: the technology and the humans have different failure modes, so let’s use them to support each other. It’s slow, compared to the flashy ones, but it’s much less likely to kill people through tech-bro stupidity.
posted by Making You Bored For Science at 11:45 AM on May 27, 2018 [3 favorites]


Looks like it's Tesla's turn to fuck up and kill somebody.
posted by tobascodagama at 10:00 AM on June 7, 2018 [2 favorites]


Jesus Christ!! Has someone done the math yet on whether electric car batteries vs gas cars are worse for the environment?
posted by latkes at 12:47 PM on June 9, 2018


It's no contest.

Most EV batteries will never catch fire, but all gasoline fuel is burnt. The total quantity of toxic species released from EV battery fires will indeed be nonzero, but will cause a negligible amount of environmental degradation compared to that due to the de-sequestration and incorporation into the global carbon cycle of four cubic kilometres of crude oil every year.
posted by flabdablet at 1:30 AM on June 10, 2018 [4 favorites]


And how about production and disposal of the batteries? (I have a plug in hybrid so I've already committed to an electric car battery fwiw)
posted by latkes at 3:34 PM on June 10, 2018 [1 favorite]


Once EVs are being sold at anything like the scale that combustion engine vehicles are right now, I would expect closed-loop battery manufacturing to have achieved no-brainer profitability.

Unlike fuels, the various elements inside a lithium battery don't disappear when the battery reaches the end of its useful service life; rather, they just contaminate each other - but that leaves them inside the cells, and if there's enough of a feedstock of used cells, it becomes cheaper and easier to extract what's required from those than from newly mined ores.

This is already being done for lead-acid starter batteries, which don't have anything like the turnover volume that EV batteries will.

And all the arguments about battery manufacture, recycling and disposal emitting greenhouse gases are predicated on ongoing use of fossil fuels to derive the energy required for those processes. But the world in general and battery manufacturers in particular are moving off fossil fuels and onto renewables; by the time EV battery remanufacturing is happening at anything like the kind of scale that would have it sucking up more energy than e.g. Bitcoin mining, I would expect greenhouse gas emissions it's responsible for to be already relatively low and trending downward.
posted by flabdablet at 1:07 AM on June 11, 2018 [1 favorite]



