"Morally dubious, technologically limited, and potentially dangerous"
March 28, 2021 7:40 AM   Subscribe

"In a 13-minute video posted to YouTube by user 'AI Addict,' we see a Tesla Model 3 with Full Self Driving Beta 8.2 fumbling its way around Oakland. It appears hapless and utterly confused at all times, never passably imitating a human driver."
posted by clawsoon (144 comments total) 19 users marked this as a favorite
 
It appears hapless and utterly confused at all times, never passably imitating a human driver.

Sounds like a passable imitation to me.
posted by Faint of Butt at 7:43 AM on March 28, 2021 [51 favorites]


Drives exactly like me
posted by The Toad at 7:48 AM on March 28, 2021 [1 favorite]


Tesla has been misleading customers about its cars' capabilities ever since it first called the system "autopilot". As the article says:
They require constant human supervision and split-second intervention.
This kind of hybrid system is more dangerous than a manual system. Because you might be lulled into a sense of safety for a moment, lose attention, and then the car kills you.

Worse: the automatic actions these more complex systems take are hard to understand. I have a car with cruise control, distance pacing, and lane-keeping. I can take my hands off the wheel and gas on a highway and the car will be reasonably safe on its own, slowing down and steering as necessary to stay in my lane. But I'm very clear on exactly what the thing is doing. The systems are very simple to understand, as are their failure modes. These fancier systems are a lot harder to understand without providing the benefit of full autonomy.

I want to be another passenger in the self-driving car. I do not want to be the driver who has to save my life from a reckless AI at any moment.
posted by Nelson at 7:54 AM on March 28, 2021 [48 favorites]


Why do I get the impression the passenger in that video has a huge poster of Elon Musk on his bedroom ceiling?
posted by dirigibleman at 8:05 AM on March 28, 2021 [17 favorites]


Look, I don't want to throw out "That'S whY It'S bEta" or some other silly platitude, but one of the things that AI needs is training data. Every time a driver is forced to intervene, it's basically logged and fed into the model. Getting it into the hands of a lot of people who know what they're getting into is part and parcel of the development of a reliable system. The problem is that a lot of people get to see the sausage being made.

It's not like anything in this world we like to think of as perfect actually is. Most of what we have that comes as close to "perfect" as we can get it is hard-fought, inch-by-inch progress on stupid edge cases that didn't even enter the minds of a team of engineers until they happened, either in testing or in prod.

So yeah, you can't exactly go to sleep and have the car take you where you want to go, but give it a few more years, massive amounts of training data, and you'll have something not perfect but probably better than a human.
posted by Your Childhood Pet Rock at 8:18 AM on March 28, 2021 [12 favorites]


How much is Tesla compensating people for providing the data they need to develop their self-driving system?

How much are they compensating their widows?
posted by Nelson at 8:20 AM on March 28, 2021 [56 favorites]


The problem is that a lot of people get to see the sausage being made.

An apt metaphor, considering how many times people were nearly turned into sausage in this video.
posted by mhoye at 8:23 AM on March 28, 2021 [35 favorites]


80%
posted by joeyh at 8:24 AM on March 28, 2021


I mean, Tesla's doing this on public roads, making all of us their test subjects, and I don't know about you, but I haven't gotten a check from them, yet.
posted by dirigibleman at 8:24 AM on March 28, 2021 [95 favorites]


Sounds like a passable imitation to me.

Yeah but can the car make itself feel really badly about its performance at the same time?
posted by NoThisIsPatrick at 8:27 AM on March 28, 2021 [10 favorites]


Your Childhood Pet Rock: Getting it into the hands of a lot of people who know what they're getting into is part and parcel of the development of a reliable system.

I think that's covered in the last couple of paragraphs of the article:
That leads to videos like this, where early adopters carry out uncontrolled tests on city streets, with pedestrians, cyclists, and other drivers unaware that they're part of the experiment. If even one of those Tesla drivers slips up, the consequences can be deadly.

All of this testing is being carried out on public roads, for the benefit of the world's most valuable automaker, at basically zero cost.
It's a classic case of externalizing the costs onto the rest of us.

but give it a few more years, massive amounts of training data, and you'll have something not perfect but probably better than a human.

One thing I'm curious about, in case anybody knows: Why not use all that free labour to have drivers simply drive and train the model on their actions, without the car doing any self-driving at all? It feels like they're moving on to collecting the validation dataset before they've finished a proper training dataset.
posted by clawsoon at 8:29 AM on March 28, 2021 [57 favorites]


Performing non-consensual experiments on other humans, that could kill them, isn't some morally ambiguous grey area for debate on libertarian tech forums. It's right smack dab in the middle of evil.
posted by SunSnork at 8:31 AM on March 28, 2021 [93 favorites]


So yeah, you can't exactly go to sleep and have the car take you where you want to go, but give it a few more years, massive amounts of training data, and you'll have something not perfect but probably better than a human.

I really want to commend how casually you gloss over the fact that all this training data is amassed by subjecting thousands and thousands of unwilling participants to the risks of injury and possibly death, presumably for just as much money as Tesla's paying all those participants. Very on brand for Tesla fandom.
posted by mhoye at 8:34 AM on March 28, 2021 [52 favorites]


I didn't realize how frequently the Tesla driver has to look at the tablet next to the steering wheel. That, alone, is dangerous!
posted by The corpse in the library at 8:37 AM on March 28, 2021 [34 favorites]


I hate, hate, hate to ever suggest police crackdown as a solution to anything, but this is one of the very few times when it might be appropriate.

The driver is not driving their car. They are distracted and negligent. They're operating an electronic device that is not in hands-free mode.

We have all sorts of laws about what people are expected to do when driving, and this is breaking a lot of them. Tesla should be sued into the ground for encouraging this behavior in its users.

Test this functionality in your own private labs and test tracks. Prove that it works first, then petition the public to allow it to be licensed to drive. When the car passes a driver's test, then it's a certified driver. Before then, this is like letting your 6-year-old behind the wheel.
posted by explosion at 8:43 AM on March 28, 2021 [50 favorites]


My hot take has always been that Waymo is 5 years away from level 5 autonomy, Tesla is 10 years away, and at least Waymo doesn't claim to be there already.

Tesla autopilot v1 is nice though. Really good level 2 driver assist, worlds better than all the other lane-keeping that was available at the time, and still better than most. They should have stuck with that.
posted by allegedly at 8:46 AM on March 28, 2021 [9 favorites]


I'm going to say that I don't think this will work. I don't think Tesla's going to do it, I don't think anyone is going to do it in the medium term, absent some real breakthrough in computing. It's basically like blockchain, but somewhat more likely to throw off useful side developments - a buzzword used to generate publicity and scam people out of money. We are not going to have truly autonomous cars before the boiling seas rise and kill us all.
posted by Frowner at 8:46 AM on March 28, 2021 [11 favorites]


> ...one of the things that AI needs is training data.

Tesla has all the money they need to build their own Potemkin village, populate it with Musk-worshipping bros happy to sign waivers, and perform live experiments 24/7. Musk personally is wealthy enough to found his own educational institution to provide himself with an IRB happy to rubber stamp the research on human subjects.

This way they can get all the data they need without having to get the rest of us involved.
posted by at by at 8:49 AM on March 28, 2021 [42 favorites]


I still feel like over time the odds of who will have a better safety record on the road heavily favor the AI drivers... they never drive drunk, tired, distracted by cellphones, the stereo, etc. They don't speed for fun, refuse to merge properly, get angry and do stupid shit like tailgate or block other drivers. And the more other AI-driven vehicles on the road there are, the more predictable the overall driving environment will be for them.

If even one of those Tesla drivers slips up, the consequences can be deadly.

I'm no Tesla stan, but that's called driving. The 38k auto deaths per year in the USA don't unduly trouble the vast majority of drivers; that's just risk imposed on others that they're 100% fine with. Every new driver is an uncontrolled test on city streets. As long as the AI-driven vehicles are insured properly, I'm not really seeing a big difference between them and letting some 16-year-old loose on the streets.

It's a classic case of externalizing the costs onto the rest of us.

You do that to the world every time you drive a gas-powered car.
posted by lefty lucky cat at 8:49 AM on March 28, 2021 [33 favorites]


lefty lucky cat: You do that to the world every time you drive a gas-powered car.

And we should do something about that, too.
posted by clawsoon at 8:54 AM on March 28, 2021 [41 favorites]


Gathering data, training data... For a few years now, a number of self-driving car companies have had their vehicles cruising all over my neighborhood in San Francisco, a quiet, not-too-much-traffic, grid-street-arrangement sort of place. About three, maybe four years ago I had to track down who was actually officially monitoring these activities because one brand’s cars were behaving really weird. They would just stop in the middle of the block for no reason. It took a while, but then I had someone to complain to. These cars had four people in them riding around, each armed with a computer. Now the cars have one person in them. I still can’t tell if they are driving or not. The only hard stuff is the frequent street closures and construction that have been going on. But I don’t see them around those areas much. The behavior of these cars has been relatively unpredictable. In general I can assume a human won’t just stop in the middle of a block. (Though some do.) Somebody thinks this is a) going to be a money maker and b) what they are doing now will get them there. I don’t believe either.
posted by njohnson23 at 9:09 AM on March 28, 2021 [3 favorites]


Generally I am bullish on the potential for self-driving for the reasons lefty lucky cat mentions, but I don't think the path to get there leads through the "supervised automation" path that Tesla is taking. My suspicion is that Tesla is going to poison the well for "self driving" the way that meltdowns of nuclear power stations have for nuclear power generation.
posted by rustcrumb at 9:13 AM on March 28, 2021 [11 favorites]


Around 5:30 in the video the passenger mentioned that the car is very cautious of "bicyclists and pedestrians" and will get very close to other cars in order to give them space.

I do wonder if this is actually a better strategy for preserving human life than the people in the video think.
posted by TheophileEscargot at 9:23 AM on March 28, 2021 [8 favorites]


Yeah but can the car make itself feel really badly about its performance at the same time?

You know how AI is basically a technology for replicating biases? Well this one replicates Elon Musk's inability to feel shame.
posted by klanawa at 9:35 AM on March 28, 2021 [22 favorites]


A squirrel runs into the street. The car veers headlong into a tree. Score one for the squirrel.
posted by njohnson23 at 9:35 AM on March 28, 2021 [4 favorites]


How soon before one of these runs over someone's kid?
posted by octothorpe at 9:35 AM on March 28, 2021 [2 favorites]


>Somebody thinks this is a) going to be a money maker and b) what they are doing now will get them there

Tesla's P/E is currently 984 (vs e.g. 35 for MSFT) -- this valuation is predicated on Tesla expanding from 2 factories (plus new ones in Austin and Berlin coming online this year) to ~20, which, if they can find the sales to keep them running at 95% capacity, should give them a two-digit P/E at the current price (roughly a 10x expansion, and 984/10 ≈ 98).

But to triple up from here to Apple's $2T market cap will require Tesla taking on and killing Uber with self-driving electric cabs.

Weird thing about Tesla's tech is that they could be 2 months or 20 years from Level 5 autonomy, they've got the smartest people money can buy working for them . . .
posted by Heywood Mogroot III at 9:43 AM on March 28, 2021 [2 favorites]


We've reached out to Tesla for comment on the video, but the company has no press office and does not typically respond to inquiries.

Because of course it doesn't.

Despite the extremely misleading "Full Self-Driving" name, when Tesla talks to regulators it's still only Level 2 autonomy. I don't care what warnings come up on the screen about remaining alert, keeping your hands on the steering wheel, etc: Tesla calling this "Full Self-Driving" is gross negligence. What possible meaning could that phrase have other than 100% autonomous driving?
posted by jedicus at 9:47 AM on March 28, 2021 [25 favorites]


How soon before one of these runs over someone's kid?

How many bitcoins is that child worth? How many accountants in the halls of Tesla are doing that very calculation, I wonder?
posted by They sucked his brains out! at 9:52 AM on March 28, 2021 [6 favorites]


Weirdly tesla is always "six months from full self driving" according to musk. It's weird how six months out works as such a sliding scale.

I have zero hope in self-driving tech working out as anything other than doing what the rise of cars did to pedestrian rights: putting the onus of protecting oneself from ai errors on the people getting hit. The rhetoric on that is already coming from shit heels like musk, telling people they need to learn how to act or respond around ai-driven cars.
posted by Ferreous at 9:55 AM on March 28, 2021 [12 favorites]


Honestly, why do people think that if anyone can crack the nut of ai driving it's an apartheid-money-fueled slimy narcissist like musk? He's Trump for tech fetishists.
posted by Ferreous at 9:57 AM on March 28, 2021 [35 favorites]


I'd love to hear from a product liability lawyer how Tesla's "we'll sell you full autopilot, and you're responsible for watching like a hawk so it doesn't kill third parties" is likely to go for them.

What does a plaintiff need to show against them? Do you need that this product is more dangerous than human drivers in general, or more likely to cause the specific accident that harmed you? Do you need to show that Tesla knew this?

I'm certain Tesla has fine lawyers who would flag any liability threats to the existence of the corporation, but Musk is totally the type to blow them off.
posted by away for regrooving at 9:59 AM on March 28, 2021 [7 favorites]


I liked how they grew increasingly alarmed. As a morality tale it's a striking rendition of the sorcerer's apprentice.
posted by dmh at 10:04 AM on March 28, 2021 [4 favorites]


The amount of time, resources, and human misery that will be incurred to bring us self-driving cars is appalling. It’s also unbelievably dumb compared to just spending that time and money on mass transit and housing development. Hell, we could’ve all had government ebikes years ago if capitalism allowed for anything that makes any sense at all.
posted by sinfony at 10:10 AM on March 28, 2021 [29 favorites]


https://www.gizmodo.com.au/2021/03/tesla-owners-take-to-reddit-asking-what-happens-if-full-self-driving-isnt-real/

There’s so much going on here, and so many questions raised. Is “FSD” a genuinely earnest project with real goals and deliverables, or an elaborate scam to get a lot of money while delivering nothing?

Is it real but just very behind schedule and suffering from Elon’s frequent overhyping and over-promising? Like when he claimed Teslas were appreciating assets because they’d soon be able to earn money for their owners as self-driving robotaxis?

Would buyers of “FSD” pre-orders be able to file a class-action lawsuit if the promised capabilities aren’t delivered? Is Tesla protected from this? Would it cripple the company?

posted by Brian B. at 10:38 AM on March 28, 2021


they never drive drunk, tired, distracted by cellphones, the stereo, etc. They don't speed for fun, refuse to merge properly, get angry and do stupid shit like tailgate or block other drivers

Neither do I, just quietly.

The very first thing I've taught all my kids about driving a car is that of all the decisions all of us make on a daily basis, getting behind the wheel is easily the one most likely to kill us, and this is something that needs to be reflected upon every time before starting the engine.

The physics of two tons of metal at 100km/h are just inherently unforgiving, and an extra gram or two of silicon is never going to change that.
posted by flabdablet at 10:54 AM on March 28, 2021 [11 favorites]


The physics of two tons of metal at 100km/h are just inherently unforgiving, and an extra gram or two of silicon is never going to change that.

ABS? Traction control? Airbags? Seatbelt pretensioning systems?

The number of lives saved by an extra gram or two of silicon in each car is almost impossible to quantify, but I can assure you it is significant.
posted by Your Childhood Pet Rock at 10:59 AM on March 28, 2021 [19 favorites]


Like, my Kia can get close to level 2 automation on the highway, and it drains far less mental stamina from me than my wife's unautomated car, which requires me to stay more alert. Not to mention the safety systems have saved me from a fender bender a few times. AEB isn't full self driving but it's a godsend.
posted by Your Childhood Pet Rock at 11:03 AM on March 28, 2021 [1 favorite]


Neither do I, just quietly.

If everyone were you, there would be almost no car wrecks, but the US kills a Korean War's worth of itself every year.

Whenever it finally arrives, self-driving cars don't need to be perfect or as good as you. They just need to be better than the average human or, God help us, better than the average American for it to make sense and reduce deaths.
posted by GCU Sweet and Full of Grace at 11:24 AM on March 28, 2021 [4 favorites]


“That’s something we noticed. It really favors the safety of bicyclists and pedestrians. And it will get very close to other cars and oncoming traffic in order to give a bicyclist extra room.”

They say that like it’s a bad thing.
posted by TWinbrook8 at 11:25 AM on March 28, 2021 [7 favorites]


"we'll sell you full autopilot, and you're responsible for watching like a hawk so it doesn't kill third parties"

The thing about this tech is that when it improves from 'tries to kill you every 5 minutes' up to 'tries to kill you every 10 hours' it will actually become MORE dangerous, because the average driver just won't pay attention for that long.
posted by Lanark at 11:31 AM on March 28, 2021 [22 favorites]


I'm kind of torn about this tech. ON the one hand I'm not fond of Musk and his particular brand of nonsense. On the other, as a blind person who has never been able to drive and feels extremely limited by the need to depend on others, I'm all for trying to explore the potential of this technology over the next years. I don't doubt that a lot of the safety issues are real, but I hope that with sufficient, real engineering work they can be overcome or at least minimized.
posted by Alensin at 11:51 AM on March 28, 2021 [20 favorites]


I didn't realize how frequently the Tesla driver has to look at the tablet next to the steering wheel. That, alone, is dangerous!
posted by The corpse in the library at 11:37 AM on March 28 [12 favorites]


It absolutely is a problem. A colleague of my wife basically bought a Tesla on impulse a few years ago. When he heard I was a "tech guy" he really wanted to show it off and take me for a drive around San Diego.

I was absolutely frightened by how he was distracted or fiddling with the touchscreen while driving, I had to tell him a few times that he was making me nervous.

In Germany, there was a case last summer where a court ruled that Tesla's touchscreen car controls "should be treated as a distracting electronic device". In particular, the speed of the windshield wipers is automatically adjusted by how much rain is detected, but if you want to override this, you have to dive into the touchscreen controls. What could go wrong? (Especially during a rainstorm. Smh.)
posted by jeremias at 11:52 AM on March 28, 2021 [20 favorites]


If people want to sell self driving as life saving tech then it should be legally mandated for self driving cars to put their passengers at risk over pedestrians. We could also fix a huge amount of the pedestrian deaths with legislation mandating designs for cars that emphasize pedestrian safety, but that's not exciting and sexy like vaporware.
posted by Ferreous at 11:56 AM on March 28, 2021 [6 favorites]


Legislate crossovers/trucks out of being battering rams with shit visibility that funnel people under their wheels? Nah, daddy musk is gonna make the world safe with ai magic, that's how you protect human life.
posted by Ferreous at 12:06 PM on March 28, 2021 [6 favorites]


>Weird thing about Tesla's tech is that they could be 2 months or 20 years from Level 5 autonomy, they've got the smartest people money can buy working for them . . .
I don't think those employees can tell either. And there's a mistaken view that they have to be first to market to be the winner in this AI race -- transforming all cars will take a generation, and in that time you can earn well using tech aids to make all road travel much safer. The "exponential growth" of machine learning for autonomous vehicles needs them to be networked and sharing state between nodes; there are no data links good enough for that yet, nor any way to carry a whole datacenter in a vehicle.

The thing that would make a difference in the next few years is ad-hoc vehicle-to-vehicle networking, so that emergency braking is notified to the train of vehicles behind you within milliseconds and the zip merge is enforced by courteous programming. The goal is to get situational context into the vehicle's processing beyond the line of sight of the driver, used by driver aids to make vehicles safer while preparing the population for the difficult-to-predict behaviour of non-human vehicles.

(I read some snark in 'smartest people money can buy' because some people can't be bought.)
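
A toy sketch of the kind of broadcast the emergency-braking idea needs (everything here is invented for illustration; real V2V work uses standardized radios and message sets such as DSRC/IEEE 802.11p or C-V2X, not JSON):

```python
# Hypothetical vehicle-to-vehicle emergency-brake broadcast.
# The wire format and all names are invented for illustration only.
import json
import time

def emergency_brake_message(vehicle_id: str, lat: float, lon: float,
                            heading_deg: float, decel_mps2: float) -> str:
    """Build a broadcast packet warning the train of vehicles behind us."""
    return json.dumps({
        "type": "EMERGENCY_BRAKE",
        "vehicle": vehicle_id,
        "position": [lat, lon],
        "heading_deg": heading_deg,
        "decel_mps2": decel_mps2,  # how hard we are braking
        "sent_at": time.time(),    # receivers discard stale messages
    })

def should_react(msg: dict, max_age_s: float = 0.5) -> bool:
    """Receiver side: react only to fresh warnings. A real implementation
    would also project the sender's position onto our planned path."""
    fresh = time.time() - msg["sent_at"] < max_age_s
    return fresh and msg["type"] == "EMERGENCY_BRAKE"
```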
posted by k3ninho at 12:11 PM on March 28, 2021 [6 favorites]


while preparing the population for the difficult-to-predict behaviour of non-human vehicles.
aka make streets even less hospitable to humans and carve out more space as dmz for cars.
posted by Ferreous at 12:16 PM on March 28, 2021 [3 favorites]


>>while preparing the population for the difficult-to-predict behaviour of non-human vehicles.
>aka make streets even less hospitable to humans and carve out more space as dmz for cars.
We also get to solve the Trolley Problem by choice. Do we take our civilisation in the direction which enshrines the powerful in their safe mobile death-boxes, or do we take our civilisation in the direction that protects the less-powerful outside the death box? Or do we go one better and make the deal for having the mobile armoured box one where its user will be sacrificed to aid the learning machines if a trolley decision arises -- with the design of its problem-solving such that it avoids the situation escalating to the point that a trolley-problem decision has to be made.
posted by k3ninho at 12:28 PM on March 28, 2021 [3 favorites]


The "go pedal"?? Is that what the kids are calling it these days?
posted by basalganglia at 12:30 PM on March 28, 2021 [10 favorites]


I drove a new-ish Nissan with autopilot recently. For me, the lane-keeping feature is distracting, since I'm always trying to predict where it'll get confused, e.g. around sharp curves or exits. The distance-keeping feature is much more predictable, I just wish there was a more prominent on/off indicator.

We've known about the issues with hybrid human-computer control systems since at least Apollo; let's use some of that research.
posted by RobotVoodooPower at 12:44 PM on March 28, 2021 [3 favorites]


Our Model S allows you to adjust the windshield wipers by turning a knob or punching a button on the stalk. Our Model 3, I think (I don't drive it much), requires fiddling with the touchscreen. A lot of things have changed between the S and the 3, and not for the better IMO.

The 3 having no dash HUD other than the touchscreen in the middle, requiring one to look away from traffic to even see how fast you are going, bugs the shit out of me.
posted by Windopaene at 12:47 PM on March 28, 2021 [5 favorites]


Jesus, we’ll do anything to avoid mass transit. Fucking doomed, man.
posted by Don.Kinsayder at 1:12 PM on March 28, 2021 [21 favorites]


Tesla: We will make the world safer through self-driving, which will be better, safer, more responsible and careful than any human driver

Also Tesla: LOL LUDICROUS MODE IT GOES TO 100MPH IN 3 SECONDS BRRRRRRRR
posted by parm at 1:31 PM on March 28, 2021 [8 favorites]


It's funny: if the lawyers make us settle the Trolley Problem to avoid lawsuits, then the insurance companies will need to factor in bugs-per-line-of-code, which will drive them to spread the risk across as many miles and as many bums on seats as possible: the metric is something like customers per mile traveled per bug per line of code.

The insurance actuaries will ask the autonomous vehicle companies to reinvent mass transit.
posted by k3ninho at 1:37 PM on March 28, 2021 [4 favorites]


Ferreous: it should be legally mandated for self driving cars to put their passengers at risk over pedestrians

A nice idea in theory, but in practice it's just going to be some opaque AI model driving the car, and nobody will understand why it chooses to do anything. It's not going to know what the Trolley Problem is, let alone have a cogent approach to it.

My pedestrian survival strategy is going to be: wear a t-shirt with Elon Musk's face. Nearby Teslas will identify me as their god-king, and will do my bidding.
posted by qxntpqbbbqxl at 1:52 PM on March 28, 2021 [20 favorites]


In particular, the speed of the windshield wipers is automatically adjusted by how much rain is detected, but if you want to override this, you have to dive into the touchscreen controls.
Yeah, that was a really terrible decision. It took them until mid 2019 to get the auto-adjust right, and I haven't had to override it since, but it drove me bonkers before that. Now the worst thing about owning a Model 3 is El*n's twitter account.
posted by rhamphorhynchus at 1:56 PM on March 28, 2021 [2 favorites]


I am tertiarily involved in this, and the general consensus I've seen is that Tesla is not a serious player in the autonomous vehicle space. They continue to hemorrhage talent and are making questionable decisions with just about every aspect of their program. Meanwhile, its competitors are making faster progress using significantly safer approaches (sometimes publicly so, sometimes not).

My take is that in the next 2-5 years most people in the US will have a chance to try out an AV if they're so inclined (i.e., you'll need to visit certain cities, but it won't be astronomically expensive nor have a long waiting list, etc.). I would bet money that it won't be in a Tesla.
posted by matrixclown at 2:22 PM on March 28, 2021 [9 favorites]


Whenever it finally arrives, self-driving cars don't need to be perfect or as good as you. They just need to be better than the average human or, God help us, better than the average American for it to make sense and reduce deaths.

I'm copying much of this from a previous thread, because I want to say it again:

The above viewpoint is both true, and completely irrelevant in the context of the real world.

It's true that safety will be improved when the software is better than the average human! But we're not there, we're not even close to there, and the software is being put on the roads today.

Once it's already there, and making money for the people who put it there, there is then little incentive for them to develop it any further at all.

For them, the definition of "sufficiently developed" is that they can get away with putting it on the roads and making money off doing so. That is the only standard that will actually be hit in practice, and the efforts to do so will involve far more lobbying, bribes and shenanigans to lower the standards than they will engineering to meet them.

Furthermore, once it's out there, and "certified" - whatever that comes to mean - then the incentive will then gradually become to not change it, because to do so will come to require lots of expensive testing and paperwork and the risk of being the guy who broke it.

So the point at which we as a society first accept the technology as "good enough" is critical, because once it gets out there, progress will slow down drastically and the processes of ossification and cost disease will set in against further improvement.

This isn't it. It's not even close. It shouldn't be on the roads, period.

And it shouldn't be on the roads soon either, because there's a whole gaping gulf between here and there in which it's even more dangerous in practice for the reasons people have pointed out already:
  • "This kind of hybrid system is more dangerous than a manual system. Because you might be lulled into a sense of safety for a moment, lose attention, and then the car kills you."
  • "The thing about this tech is that when it improves from 'tries to kill you every 5 minutes' up to 'tries to kill you every 10 hours' it will actually become MORE dangerous, because the average driver just won't pay attention for that long."
Did you know that this is an increasing problem in aviation? We have autopilots that can do more and more of the work, but the pilots are getting out of practice, causing accidents when they do have to take over manually. And when things go wrong, they now have to understand not only what's happening to the aircraft's systems, but how the automation is interacting with the problem - which is often completely unexpected and counter-intuitive.

Here's a talk about this issue, the title of which captures the problem perfectly: "What's it doing now?".
posted by automatronic at 2:41 PM on March 28, 2021 [59 favorites]


automatronic

Username is on the button, I'd say.
posted by clawsoon at 2:46 PM on March 28, 2021 [3 favorites]


> Why not use all that free labour to have drivers simply drive and train the model on their actions, without the car doing any self-driving at all? It feels like they're moving on to collecting the validation dataset before they've finished a proper training dataset.

This is a well-known challenge in imitation learning. The expert (human) almost always makes good decisions, so the dataset does not contain enough examples of recovering from bad states. Yet, the learned policy will inevitably wind up in a bad state due to some combination of its own mistakes and never-before-seen environments. A classic paper on this topic is the DAgger algorithm (Ross, Gordon, and Bagnell, AISTATS 2011), although it's unlikely Tesla is using that particular method.

When the human driver grabs the wheel of the Tesla, they provide immensely valuable information: 1) we now know for sure that the learned policy's actions in the past few seconds were wrong, and 2) we know how to recover from this particular bad situation.

To my knowledge, there is no way around this. Developing autonomous driving will require deploying an imperfect policy in the real world at some point. The big problem here, IMO, is using unwitting customers as "experts" instead of a well-trained, well-paid group of employees with more than one person in the car paying attention at all times.
posted by scose at 2:53 PM on March 28, 2021 [14 favorites]


I would be more concerned about what Tesla's half-baked "AI" is doing and the ways in which it fails if I had literally ever seen a failure mode that wasn't something that happens to humans trying to drive themselves regularly enough to not be newsworthy when they do it.

At least when a Tesla on Autopilot runs down a pedestrian it'll make the news and there might even be some consequences, unlike the usual case when a human pilots their vehicle into a pedestrian and everyone just throws up their hands and says "whatcha gonna do..stupid pedestrians amirite" unless the driver is obviously drunk. Uber tried blaming the victim when they did it, but their effort ultimately failed.

I'd probably feel very differently about ADAS in general if we lived in a world where human drivers gave the task the attention it actually deserves and hadn't personally known a classroom's worth of people who have been killed in cars. People don't need an excuse to not pay attention, they're already failing miserably even without these systems.
posted by wierdo at 2:56 PM on March 28, 2021 [4 favorites]


we're not even close to there, and the software is being put on the roads today.

Absolutely one billion percent these should not be on the road, and shouldn't be on the road unless and until they can demonstrate increased aggregate safety in virtual environments, and then again in large-ish phase-2 trials on their own pet roads, and then again in small-scale phase-3 trials in the real world.
posted by GCU Sweet and Full of Grace at 3:52 PM on March 28, 2021 [9 favorites]


The "go pedal"?? Is that what the kids are calling it these days?

It is electric, so that makes more sense than “gas pedal”
posted by shesdeadimalive at 4:08 PM on March 28, 2021 [3 favorites]


It is electric, so that makes more sense than “gas pedal”

Purists call them the velocitator and the deceleratrix.
posted by Your Childhood Pet Rock at 4:26 PM on March 28, 2021 [29 favorites]


automatronic: Username is on the button, I'd say.

Sure, but GCU Sweet and Full of Grace is the one who's literally named after a fictional artificial intelligence here.
posted by automatronic at 4:34 PM on March 28, 2021 [4 favorites]


After watching a couple of minutes of the video, I feel pretty confident that there are humans that can drive better than that after knocking back a six pack. (I don't mean this as a pro-drunk-driving sentiment, but as an anti-current-state-of-AI sentiment.)
posted by Larry David Syndrome at 4:35 PM on March 28, 2021 [5 favorites]


If people want to sell self driving as life saving tech then it should be legally mandated for self driving cars to put their passengers at risk over pedestrians.

The entire point of all automotive safety engineering of the last sixty years, and arguably urban design in North America over the same period, has been to preserve the life of the driver at the cost of literally everything else in its path. Zero humans will buy a car, or indeed willingly deploy any safety mechanism, that picks somebody else's life over theirs.
posted by mhoye at 5:55 PM on March 28, 2021 [5 favorites]


they've got the smartest people money can buy working for them . . .

Tesla really doesn't. Tesla has the smartest people that can be convinced to work for Elon Musk in exchange for less Tesla stock than they could get by working somewhere else and just plowing their paychecks into $TSLA.

Also, in general in the self-drive world, the people cutting the paychecks aren't at all good at figuring out which engineers have a realistic shot at cracking the problem, as opposed to scammers and deadweight. There is a stupid amount of self-drive investment money in search of anyone who can appear competent in front of a VC for half an hour.
posted by reventlov at 6:14 PM on March 28, 2021 [9 favorites]


Just watched the video and holy shit is that scary. That's so far from anything that should be called "beta" that it's seriously criminal to call it that.
posted by octothorpe at 6:16 PM on March 28, 2021 [2 favorites]


they've got the smartest people money can buy working for them . . .

Tesla really doesn't. Tesla has the smartest people that can be convinced to work for Elon Musk in exchange for less Tesla stock than they could get by working somewhere else and just plowing their paychecks into $TSLA.
I've been slinging bits professionally for nigh on fifteen years now, during which time I have learned many things. Chief amongst those is "the 10x engineer is a folktale, and even if he existed, I'd refuse to work with him because he's almost certainly a cowboy who won't communicate with anyone or write tests for his goddamn code."

Hiring "smart" people is yet another tale the big players in tech like to spin. After all, who better than smart people to solve your really tough problems? Unfortunately for Silicon Valley, "smart" is an overloaded term that means many things to many people, and all the interview questions I've seen from the big guys tell me that they're optimizing for "has reviewed their CS301 textbook recently." If they'd hired someone who was ACTUALLY smart, that person would have told them to keep their beta tests off of public roads before a rogue Tesla kills a pedestrian and the state of California swats them like a horsefly.
posted by Mayor West at 6:29 PM on March 28, 2021 [25 favorites]


but give it a few more years, massive amounts of training data, and you'll have something not perfect

I bike and walk these exact roads they're driving on every single day RIGHT NOW (actually the video goes right past like 3 buildings I've lived in) and all I can say is no fucking thank you.
posted by bradbane at 7:59 PM on March 28, 2021 [10 favorites]


Leaked Tesla documents show a heated discussion about the wisdom of using BMW driver data as the training dataset, but it was buried by the marketing department who realized that they would make a fortune charging the existing market of BMW drivers hefty software upgrade fees to drive like even more of an asshole.
posted by loquacious at 8:23 PM on March 28, 2021 [1 favorite]


The basic problem with AI seems to be they're not afraid of dying.
posted by polymodus at 8:46 PM on March 28, 2021 [6 favorites]


they've got the smartest people money can buy working for them . . .

Fuck no they don’t. Musk locks the bathrooms on certain floors when he gets a whiff that anyone, including the technical people, might be organizing. Anyone still there is getting paid less money to be part of Elon’s horseshit because they believe in it.
posted by sideshow at 9:39 PM on March 28, 2021 [3 favorites]


The “but, but, but we’ll have soooo much training data!” bullshit has been said since the minute FSD was announced in 2013.

At first it was the reason human drivers would be obsolete by 2018 or whatever, but for most of the time it was the self-deception that thousands of weird nerds would tell themselves when some other company came out with stuff that was not only real (a big difference from most of the Tesla stuff), but that worked much better.
posted by sideshow at 9:53 PM on March 28, 2021


Whenever it finally arrives, self-driving cars don't need to be perfect or as good as you.

They do if they want me to buy one. Why would I plunk down a stack of dollars for a machine that's even more likely to kill me than the one I have already?
posted by flabdablet at 10:30 PM on March 28, 2021 [3 favorites]


Weirdly tesla is always "six months from full self driving" according to musk.

That's just stock manipulation.
posted by rhizome at 1:18 AM on March 29, 2021 [2 favorites]


I think real & imaginary safety concerns will become an important wedge issue for the industry lobby to justify regulations that accommodate their product, re: jaywalking. Huge swaths of public space will be death-zones where no living thing is allowed, off-limits to unlicensed/unregulated transportation – after all, that's not unlike how things are now. Interests are best protected by chiseling them right into the logic of the law & the organization of public space.
posted by dmh at 4:39 AM on March 29, 2021 [6 favorites]


How is driving computationally harder than Go? If machine learning can tackle the state space of Go, there's really no fundamental barrier to solving most human tasks. It's just the extension of the Church-Turing thesis to the real world. And interestingly, it's been reported that the Go AI tends to play much more safely than human players, who tend to take riskier moves.
posted by polymodus at 5:27 AM on March 29, 2021


How is driving computationally harder than Go? If machine learning can tackle the state space of Go, there's really no fundamental barrier to solving most human tasks

It's harder because go has stones in two colors on a board of some finite size, whereas in the real world, it's not given what constitutes a "stone" or a "board" or even a "color".
posted by dmh at 5:49 AM on March 29, 2021 [12 favorites]


How is driving computationally harder than Go?
Because the rules don't get changed every 30 seconds by a bunch of irrational humans getting in the way. If pedestrians, cyclists and human-driven cars could all be completely outlawed then AI would become a whole lot easier.
posted by Lanark at 5:52 AM on March 29, 2021 [2 favorites]


How is driving computationally harder than Go? If machine learning can tackle the state space of Go, there's really no fundamental barrier to solving most human tasks.
Go is orders of magnitude less complicated: you have simple rules and a single, perfect board which is the same in every game.

Driving is a classic problem which seems easy because so many of us do it every day and it’s easy to forget how often we’re relying on humans doing the right thing in ambiguous circumstances to avoid problems. Think about how hard it’d be to come up with rules for how to handle inconsistent lane markings (thinking of that block in Somerville which was painted as 3 lanes going in, 2 coming out and … something in the middle), malfunctioning or incorrectly placed signals, construction, handling other drivers making mistakes or choosing to break the rules, etc. Driving isn’t one activity like Go; it’s dozens of them which you have to do all of the time.

This hits the other challenge of building ML systems: you don’t know why they do what they do without a lot more hard academic research. You give them inputs and see the outputs, but you don’t get reviewable rules out, so there’s always the likelihood of a catastrophic failure when they get an input which wasn’t in your training data and you learn that they picked up the wrong trait. That’s amusing when Google tags your dog as a cat but potentially life-threatening if your self-driving car misinterprets a construction site, railroad crossing, jumbled city intersection, etc. – and unlike the common observation that human drivers make errors, too, this behavior can look random. You can assume that the Uber driver is being Taylorized into running red lights and plan accordingly, but your Tesla might decide something isn’t there because of dust on the camera or unusual lighting, graffiti on a sign, etc.
posted by adamsc at 6:01 AM on March 29, 2021 [4 favorites]


Moravec's paradox: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".
posted by octothorpe at 6:34 AM on March 29, 2021 [7 favorites]


Police here have a catch-all law, "driving without due care and attention", that they hand out whenever you have an incident without anything else obviously being an infraction. So if you slip off the road because of black ice over a crest, "you know, _hypothetically_", and the Mounties show up, they'll write you a DWDC&A ticket and there is jack all you can do about it. Try to go to court and all the judge asks is "Did you fall off the road?" and then, unless you want to argue you intentionally drove into the ditch, you say "Yes" and the ticket is confirmed. Hard to see how Tesla, or anyone else advertising true automated driving, isn't on the hook for those tickets plus court costs, claims of false advertising, and anything else a lawyer can think up. And those tickets are going to show up because, regardless of how superhuman the AI is, our infrastructure isn't set up to provide perfect information.

Even if Tesla manages to limit their liability through regulatory capture, who is going to buy an autonomous car that gets a reputation for generating tickets for the passenger?
posted by Mitheral at 6:53 AM on March 29, 2021 [2 favorites]


I'm a software developer. My understanding of the technology of driving is that the black-box AI part is mostly in the "perception" area. To a computer, a video stream is just a bunch of numbers. So is everything that the computer perceives - lidar, radar, etc. Human programmers write software whose input is "all the lidar points measured in the last 16 milliseconds" and whose output is "A list of rectangles, the estimated bounding boxes of other vehicles, bicycles, pedestrians, and other obstacles in the environment". Similarly with vision - input: millions of numbers, output: "There's a UPS truck double parked across this lane. The road is marked with a double-yellow line...." etc. Of course, the system often combines input from various sensors to help them out, like if you know that this part of your lidar point cloud represents that part of your camera image, that can help you decide, "yes that is a car on the road, it's not a small picture of a car held up close to the camera or a large billboard of a car far away"

The task of recognizing objects in the streams of data gathered by the cameras, lidars, radars, etc is currently where a lot of AI training data is used. You show the system millions of photographs and you say, "these have traffic lights, these do not" and some AI magic probability stuff happens, and you can test it out on some other corpus of data and see that indeed, it seems to be able to tell "has a traffic light" correctly 98% of the time (and similarly, where in the image the light is).

The task of controlling the vehicle is I think somewhat less dependent on AI probability magic. Like, you can tell the car pretty explicitly how its own outputs should map to real world change in position, it's not a very messy data classification problem. You do need to write lots of rules about how to handle various situations on the road, and people worry about what the system will do in confusing or ambiguous situations, but I think you can provide some general guidelines that can take precedence no matter what the signage or lane markings tell you - e.g. "Don't drive into pedestrians, cyclists, other vehicles, or obstacles. Almost always, follow lane markings if they are there, but not if doing so would make you drive into a person." etc. etc.

So at least as the technology exists today, if a self-driving car crashes, the analysis is going to look at the "black box" data from the vehicle and say "this sensor failed, the system did not recognize that the sensor had failed because it continued to emit data that was wrong but not obviously wrong, this led to the system failing to identify the vehicle approaching from that direction, and since it did not perceive the possibility of a collision, the control system piloted the vehicle into the space where the collision occurred" or perhaps, "the image recognition system misclassified this obstacle as safe to drive through, so the car tried to drive through it, and thus struck it". The un-understandable part is most likely to be "why was this combination of camera images, lidar, and radar readings not recognized correctly", not "why, despite having a perfectly accurate interpretation of the environment, did the car decide to drive into a pedestrian?"
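
To put that in concrete terms, the interface might look roughly like this (a sketch only; every name is invented for illustration, not any vendor's actual stack):

```python
# Hypothetical sketch of the perception/planning split described above.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # "vehicle", "pedestrian", "cyclist", "traffic_light", ...
    box: tuple         # estimated bounding rectangle in the vehicle's frame
    confidence: float  # e.g. 0.98 = "98% sure there's a traffic light here"

def perceive(camera_frames, lidar_points, radar_returns) -> list[Detection]:
    """The trained, black-box part: millions of raw numbers in,
    a short list of labeled rectangles out."""
    ...

def plan(detections: list[Detection], lane_markings, route) -> "Controls":
    """The mostly hand-written part, with explicit precedence rules:
    almost always follow lane markings, but never into a detected person."""
    ...
```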
posted by rustcrumb at 7:28 AM on March 29, 2021 [2 favorites]


If machine learning can tackle the state space of Go, there's really no fundamental barrier to solving most human tasks

In order to have a state space, one needs to start with identifiable states.

In order to have identifiable states, one needs to start with identifiable objects to have the states.

My skull contains a finely tuned neural machine that has been applying itself to the task of identifying objects and modelling their likely behaviours and interactions and meanings for longer than I've been alive (because for damn sure some of that stuff has been evolved into the basic hardware layout). It works better than anything likely to come out of a lab in my remaining lifetime. It has to, because my life literally depends on its ability to do so even when I'm not behind the wheel of a car.

What computers are better at than me is reacting fast in circumstances that are well defined, like the sudden approach of an obstacle in front of the car as revealed by forward-facing range detectors. What I'm way better at than any computer that's ever been put on the road is continuously modelling myself and the terrain I'm travelling through while running a continuous obstacles vs scenery classifier on all elements of that model (some of which occasionally consist of temporary signage carrying an arbitrarily large range of suggestions, prohibitions or other terrain condition information expressed in bad, abbreviated and occasionally hand lettered natural language).

To my way of thinking, the right way to use tech on cars is to have it improve the connections between driver and vehicle both on the outbound control path (which is where stuff like ABS and ESC comes in) and on the inbound sensory path (better mirrors and reversing cameras seems to be the extent of this at present). The way to improve driver performance and therefore driving safety is to make vehicle behaviour more predictable and controllable, not less, and autonomous control strikes me as a deliberate step in exactly the wrong direction here.

Driving safely is hard. It's way more than an aggregation of cute technical hacks with a bit of ML glue to hold them together. It's so hard that I really don't expect to see properly autonomous road vehicles exist before I cease to. It's much much harder than flying safely because the density of potential obstacles is both orders of magnitude higher and subject to a combinatorial explosion of potential interactions.

A properly safe autonomous driving machine is essentially a strong AI. Like us, it's embodied, and I would expect any autonomous driving tech that fails to achieve sentience also to fail to achieve safety.
posted by flabdablet at 7:41 AM on March 29, 2021 [11 favorites]


Oh, and just to make clear where my true Scotsman boundaries are here: to me, a "properly safe autonomous driving machine" is one that works at least as well as I do in any environment to which the vehicle has access, not only those amenable to having the majority of the obstacle vs. scenery classification pre-canned via Google Maps or similar.
posted by flabdablet at 8:01 AM on March 29, 2021 [1 favorite]


The thing that would make a difference in the next few years is ad-hoc vehicle-to-vehicle networking
The biggest hurdles to the most useful and feasible kinds of autonomy (and this is one) are not technical but logistical and political.

The biggest benefit we could implement right now would be autonomous-only lanes on long highway stretches that are instrumented with machine-readable traffic control devices combined with automated car-to-car negotiation. There’d be a breakdown lane where any car experiencing any sort of problem would automatically just stop, including things like “I signaled my driver that his exit is coming up but he’s asleep or dead or something, I dunno.”

None of the barriers to implementing a system like this are technical, but it’s still impossible because we cannot muster enough human cooperation to get it off the ground. The only reason to obsess over general-purpose automated driving is Engineer’s Disease. Fully automated driving can be approached as a purely technical problem. Less ambitious but more feasible forms of vehicular autonomy cannot emerge from the techno-libertarian paradigm because there’s too much human involved.
posted by gelfin at 8:15 AM on March 29, 2021 [8 favorites]


autonomous-only lanes on long highway stretches that are instrumented with machine-readable traffic control devices combined with automated car-to-car negotiation

Within this restricted an environment, hands-off driving becomes only about as difficult to implement as hands-off flight. Except that mooses are bigger than gooses.
posted by flabdablet at 9:14 AM on March 29, 2021 [2 favorites]


> automated car-to-car negotiation

I thought this was out due to a) potential for misuse and b) the complexity of figuring out a negotiation protocol with random boxes produced by who knows who. I think there's also a bit of c) it's unclear what it would actually solve: the real problems have more to do with interacting with the (uncommunicative) environment than with other cars.

Overall, I agree, though: Highway usage is much more attainable. Automated semi-truck convoy with a single driver/coordinator is an idea I was hearing a lot about a few years ago, but haven't seen much on lately.

> Driving safely is hard. It's way more than an aggregation of cute technical hacks with a bit of ML glue to hold them together.

Go was impossible until it wasn't. Driving is very different from Go, but it isn't necessarily harder in any theoretical or even practical sense. Millions of 16-year-olds learn to drive, and only a bare handful would make it to Go master status if they were to dedicate their lives to that instead of driving. Driving is a harder technical problem given what we know right now, but it's entirely possible for that to change.

In the big picture, this is the largest effort in everyday robotics ever attempted. For perspective, the runner-up might be the Roomba. I, for one, am seriously rooting for success, but also deeply unhappy about the shitshows that Tesla and Uber have put on the roads. It's a disservice to all of the people honestly trying to get this stuff working.
posted by kaibutsu at 9:48 AM on March 29, 2021 [3 favorites]


There’d definitely be some situational awareness you’d want, like the mooses, road obstructions or non-compliant people who end up in the automated lane anyway, but in general, yeah, that’s the idea: you don’t have to crack strong AI to get hands off steering wheels, and I agree that’s what a general-purpose driving AI unavoidably entails. Once you’ve got the requisite networking, the exceptions can be addressed much more safely by, say, implementing automated traffic breaks as soon as they’re detected.
posted by gelfin at 9:48 AM on March 29, 2021


I can't wait to start seeing defensive hacks against self driving cars.

How will they react to a big STOP sign printed on my back? Could I hang some lightweight traffic-cone-looking outriggers from my bicycle to get more passing distance? Are they being clever with their lidar signal, or could a bunch of near-infrared LEDs or lasers throw it off?

I knew some people who cheated in a final project: they painted signs in infrared-reflecting varnish on the walls of the maze for their "self-driving, maze-solving" robot rat. It did some self-driving, but the maze solution was written on the walls. They got caught because at just the right angle the varnish also reflected visible light. Could someone organize some invisible IR "road work, detour -->" graffiti to have Teslas going around in circles?

Anyone else remember that science fiction short story that was linked here a while ago? About a parent suing a self-driving car company and getting a look at their models and training data? It was a great read.
posted by Dr. Curare at 10:09 AM on March 29, 2021 [4 favorites]


There are very few terrorists, griefers, foreign governments, competitive companies, resentful former professional game pieces, bored teenagers, etc. using Go to try to kill, inconvenience, damage, subvert, or spy on the game pieces.
posted by Spathe Cadet at 10:21 AM on March 29, 2021 [4 favorites]


Millions of 16-year-olds learn to drive, and only a bare handful would make it to Go master status if they dedicated their lives to Go instead of driving.

The fact that the two things being compared here are playing Go at master level and driving as badly as millions of 16-year-olds is quite revealing.

Playing Go at master level is hard but really only involves finding good heuristics to deal with one problem - one easily specified problem - the combinatorial explosion of a relatively small and well defined collection of items. Playing chess at master level likewise.

Driving is a completely different kind of problem. With driving, the vast bulk of the problem space is completely outside the driver's control, so the neat hack that Alpha Zero and Alpha Go both use to generate ML training data - extensively iterated adversarial trial and error from an endlessly regenerable starting position - doesn't work. And because it is a different kind of problem, I don't think trying to find a spot for it on a relative difficulty scale is even reasonable.
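To make the contrast concrete, here's a skeleton of that self-play data loop - the environment interface is invented for illustration:

    # AlphaZero-style systems lean on a perfectly resettable environment
    # that can generate unlimited labeled games against itself.
    def self_play(env, policy, n_games):
        data = []
        for _ in range(n_games):
            state = env.reset()              # the endlessly regenerable start
            trajectory = []
            while not env.is_terminal(state):
                move = policy(state)
                trajectory.append((state, move))
                state = env.apply(state, move)
            outcome = env.winner(state)      # the outcome labels every position
            data.extend((s, m, outcome) for s, m in trajectory)
        return data

Driving has no reset button, no well-defined winner, and no adversary obligingly generating the hard cases.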
posted by flabdablet at 10:25 AM on March 29, 2021 [8 favorites]


deeply unhappy about the shitshows that Tesla and Uber have put on the roads

Full agreement. Anybody who isn't deeply unhappy about these things is not paying enough attention to be safe behind a wheel.
posted by flabdablet at 10:28 AM on March 29, 2021 [1 favorite]


> automated car-to-car negotiation

I love it when my car doesn’t move when the light turns green because a Tesla engineer made a subtle bug in their Paxos implementation
posted by qxntpqbbbqxl at 10:45 AM on March 29, 2021 [2 favorites]


I can't wait to start seeing defensive hacks against self-driving cars.

Attacks against machine vision neural networks already exist, and they’re awesome! Imagine printing out a sticker that looks like noise, affixing it to a stop sign, and now all of a sudden cars recognize the stop sign as a speed limit 50 sign.

The difficulty as I understand it is that to generate these you need a captive instance of the AI that you can experiment on, in the same way that casino hackers buy their own slot machine to vivisect.
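A rough sketch of the white-box version - the fast gradient sign method, with the model and inputs as stand-ins:

    # FGSM: nudge every pixel in the direction that increases the
    # classifier's loss. Needs gradient access, hence the captive copy.
    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.03):
        # image: float tensor in [0, 1]; label: LongTensor of shape (N,)
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()  # tiny, structured noise
        return adv.clamp(0, 1).detach()

Black-box variants exist too (transfer attacks, query-based attacks), which is part of why shipping millions of identical models to customers makes the captive-instance requirement easy to meet.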

(I’m on mobile so too lazy to link)
posted by qxntpqbbbqxl at 10:57 AM on March 29, 2021 [2 favorites]


> different kinds of problem

Still, not necessarily impossible. Want multi-agent problems with partial information? There's very good progress being made on ML for StarCraft. Go was one obvious, incredible success story, and there are many other problems being worked on which share features with what's needed for driving.

All I'm saying is that there's no theorem that places driving outside the realm of the possible.

The best approaches to driving right now take lots of real-world data, then find the interesting cases and turn them into millions more related simulated situations. (There's some argument that this is what dreaming does: replay variations of waking input to provide more learning from a few examples.) Replaying simulations of driving situations is absolutely a good strategy. Let's also keep in mind that the best we can currently do is not what's in this video!
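As a cartoon of that fan-out step - every field name and range below is invented:

    # Take one logged "interesting" case and fan it out into many simulated
    # variants by jittering the parameters that mattered.
    import random

    def fan_out(case, n=1000):
        variants = []
        for _ in range(n):
            v = dict(case)
            v["ego_speed_mps"] *= random.uniform(0.8, 1.2)
            v["pedestrian_offset_m"] += random.uniform(-1.5, 1.5)
            v["road_friction"] = random.uniform(0.4, 1.0)  # wet to dry
            variants.append(v)
        return variants

    near_miss = {"ego_speed_mps": 12.0, "pedestrian_offset_m": 0.5,
                 "road_friction": 0.9}
    sim_cases = fan_out(near_miss)  # feed to the simulator / training run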
posted by kaibutsu at 11:23 AM on March 29, 2021


Just jumping in to say that my grandfather was a Go master and such a horrible driver that he lost his license. I do wonder, if all 16-year-olds learned to play Go competitively, how many would become masters (and maybe it would keep them off the streets - I'm horrified by what a bad driver I was at that age).
posted by jessamyn at 2:17 PM on March 29, 2021 [3 favorites]


The difficulty as I understand it is that to generate these you need a captive instance of the AI that you can experiment on, in the same way that casino hackers buy their own slot machine to vivisect.

They are restricted now but Tesla does plan to sell millions to all comers.

There is an obvious exploit against unattended autonomous vehicles. Get it to come to a stop by walking in front of it, and then place cardboard boxes in front of and behind it. The car isn't going anywhere without manual intervention. This is exactly the sort of thing bored teenagers would delight in.
posted by Mitheral at 2:29 PM on March 29, 2021 [1 favorite]


As someone who has driven, walked, and unicycled around this area, I would say it's roughly 80th percentile in terms of difficulty for a human driver. Not as hard as the notorious cities with Byzantine intersections and a median driver who is a sociopath, but much more challenging than your typical mile in the U.S. The frequency of double-parked vehicles, one-way streets that become two-way, pedestrians crossing outside the light, and general changes of direction to get somewhere is high; by contrast, I can drive around my suburban town for hours without encountering a car, construction, or other obstacle blocking the travel lane. A blocked lane is a "category" problem where multifactorial reasoning is required. Are they just delivering a pizza, so I should go around once the opposing lane is clear, even if that means grazing the double yellow briefly? Or is it a lineup because of construction and I just have to wait? A very difficult problem to solve.
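To make that category problem concrete, here's a deliberately naive sketch - all features and thresholds invented - of the stopped-vehicle decision; the point is how brittle any fixed rule is:

    # Is the stopped vehicle ahead a quick double-park (go around) or a
    # queue/construction (wait)? Features and thresholds are made up.
    def should_go_around(hazards_on, cones_visible, cars_queued_ahead,
                         seconds_stopped):
        if cones_visible or cars_queued_ahead > 1:
            return False   # construction or a lineup: wait
        if hazards_on and seconds_stopped > 10:
            return True    # likely a delivery double-park
        return False       # when unsure, wait

Every clause above has obvious counterexamples, which is exactly the problem.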

That said, this is clearly alpha-quality software. I wouldn't feel comfortable in this unless there was a safety driver and a feedback passenger. Terrifying. When you're measuring the number of interventions per mile, rather than the other way around, a system has no business being on public roads. And presumably it would still attempt to operate at night in the rain, which would be even worse.
posted by wnissen at 3:25 PM on March 29, 2021 [1 favorite]


I didn't expect my question to prompt a series of responses, but as someone with a (small) background in CS theory I think a lot of those are based on misconceptions about computing. What theorists care about is computational complexity. For example, Go is in, roughly, the complexity class PSPACE. Since we have an AI that can solve Go, we now have an AI that can solve a PSPACE problem of human interest.

Now if autonomous driving is in some related computational complexity class, then there's really nothing stopping human progress from finding an AI that can drive. The biological barriers, the data barriers, all those other implementation or performance issues become secondary and subsumed by the extension of the Church-Turing thesis, which is just that our universe is governed by computation; computation is a law of nature. Now I've not Googled the answer, but I imagine robotics systems, or any human task from cooking to surgery, are generally not that far removed from something like PSPACE. It's because our own brains are finite.

It might take humans 100 years or 1000 years to figure out the AI to do it. But there's nothing in principle making it impossible. There are things that are provably impossible for computers, such as perfect virus detection, because it reduces to the Halting Problem, and yet (!) we have working virus checkers! That's the sort of issue I'm getting at.
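For reference, the standard containments are $\mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXPTIME}$; each single inclusion is believed but not known to be strict, although $\mathrm{P} \neq \mathrm{EXPTIME}$ is known. Generalized Go is PSPACE-hard, and EXPTIME-complete under some rule sets (Robson, 1983), so "roughly PSPACE" is a fair gloss.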
posted by polymodus at 3:42 PM on March 29, 2021 [2 favorites]


I am reminded of a science fiction story which my father was fond of. Passengers load into the world's first fully self-driving rocket. The computer system welcomes them at the start and reassures them that everything will work out. In fact, "nothing can possibly go wrong... go wrong... go wrong... go wrong..."

'50s science fiction writers: predicting Tesla.
posted by Joey Michaels at 3:54 PM on March 29, 2021 [1 favorite]


The biological barriers, the data barriers, all those other implementation or performance issues become secondary and subsumed by the extension of the Church-Turing thesis, which is just that our universe is governed by computation; computation is a law of nature. Now I've not Googled the answer,

As a pure statement of fact, I choked on a mouthful of beer laughing at this. "The Church-Turing thesis says our universe is computable but I haven't googled it" is an almost magically perfect sentence, something that a poet would pen for inclusion in the DSM-5 for diagnosing engineer's disease. Amazing.
posted by mhoye at 4:09 PM on March 29, 2021 [14 favorites]


That's not what I said. Most people who have studied CS theory accept the Church-Turing thesis broadly. What I didn't google was whether robotics is in PSPACE or whatever complexity class it is in (complexity class being synonymous with hardness). That's two sentences.

My doctoral studies minor was in CS theory; I put in the hours to learn the material.
posted by polymodus at 4:12 PM on March 29, 2021 [1 favorite]


Indeed, a simple Google search turns up top results like Demaine's paper showing that motion planning problems are PSPACE-complete. It's not my area of expertise, but keeping the CS fundamentals in mind is important for cutting through both questions of the ethics of AI driving and the tractability or feasibility of AI driving.
posted by polymodus at 4:40 PM on March 29, 2021 [1 favorite]


It's not an issue of computational complexity. It's an issue of the task not being clearly defined.

If you could formally specify the calculations required to result in correct driving behaviour - i.e. what the outputs must be as a function of the input data (which is a bunch of noisy camera images, lidar data, sensor inputs etc) then you'd be decades ahead of the state of the art and able to sell your work for billions.

The problem isn't the computational cost of computing the answers in time. The problem is that nobody has even meaningfully defined the questions.

You're a theorist asking "why is this difficult when the problem isn't NP-hard?", but what you're saying is about as relevant as a mechanic asking "why is this difficult when the self-driving car's engine has enough horsepower?".
posted by automatronic at 4:46 PM on March 29, 2021 [11 favorites]


Driving is also heavily time constrained, an aspect of technical difficulty about which theoretical considerations of computational complexity equivalence have nothing to say whatsoever.
posted by flabdablet at 7:56 PM on March 29, 2021 [1 favorite]


Factoring large numbers is in PSPACE, way way inside PSPACE. Yet we are not cracking RSA by reduction to AlphaGo.

Complexity theory is anti-relevant here. We have not even solved Go in the formal sense at all, nor do you have any hope of posing Driving as a formal function problem. That's why it's software engineering.
posted by away for regrooving at 8:28 PM on March 29, 2021 [5 favorites]


Watching the video, it struck me that these grid streets, with several lanes at each junction, all controlled by lights, are actually quite a straightforward environment. Try this in London, Paris, Rome, or Lisbon and that software wouldn't last 30 seconds.
posted by el_presidente at 9:27 PM on March 29, 2021 [3 favorites]


Could anyone tell me why we need self-driving cars?

The worst traffic I've ever seen was rush hour in Istanbul, Turkey. Six or eight lanes of cars, packed bumper to bumper, standing still. A cop car next to my cab turned its sirens on; that didn't help them much.

Now if we replaced each of these cars with a self-driving car, what have we gained? We'd still be stuck in the same traffic jam. And if these self-driving cars still run on gasoline, what exactly have we improved?

If Musk and the other tech giants want to improve society, they should spend their money and creativity to get us away from oil. That should be their first priority. Are they doing that? And then they could perhaps google the phrase "public transport".
posted by Termite at 1:23 AM on March 30, 2021 [4 favorites]


Ah, but that would be on the slippery slope to Godless Communism. Sitting in the hermetic womblike bubble of your private, personally owned car in a traffic jam, rather than jostling elbows with other serfs in a train carriage like in Communist France or somewhere, is a sacred cornerstone of Freedom and Liberty.
posted by acb at 3:35 AM on March 30, 2021 [4 favorites]


And none of this even broaches the discussion of what will happen to all of this self-driving stuff when it starts to fail. I work on my own cars, and the most common - and earliest - failure mode is the electronics. I shudder to think about these discussions in another 10 years when these systems start to fail, possibly unexpectedly.

Which leads me to the Air France flight that went down in the Atlantic. When the plane gave control back to the pilot during a storm, he stalled the plane and kept it stalled, puzzled as to why they were continuing to fall out of the sky like a rock. By the time he realized what he was doing, it was too late to nose down and regain enough airspeed to recover from the stall. Investigators attributed this to a lack of hand-flying time, thanks to the autopilot. Now imagine full self-driving cars with Mrs. Lemonjello behind the wheel. Thanks to AI, she hasn't actually driven in months, but during inclement conditions (a freak storm) the car gives control back to her at a time when her skills have eroded to rusty, at best. And she's not a highly trained pilot with many hours in a simulator.
posted by drstrangelove at 5:16 AM on March 30, 2021 [4 favorites]


Ah, but that would be on the slippery slope to Godless Communism. Sitting in the hermetic womblike bubble of your private, personally owned car in a traffic jam, rather than jostling elbows with other serfs in a train carriage like in Communist France or somewhere, is a sacred cornerstone of Freedom and Liberty

A few years ago I came across some photos on the internet depicting a future of full self-driving vehicles. The seats faced backwards, and the occupants, a super-hip couple who were obviously part of the Beautiful People, both had large-screen LCDs to play games or watch movies. It hit me that this very well could end up being our future - a place where the highly privileged are coddled even more than they are now in self-driving cocoons while the rest of us fight over dwindling resources. That's when I realized Soylent Green was less a campy '70s dystopian film and more a documentary about our future.
posted by drstrangelove at 5:22 AM on March 30, 2021 [2 favorites]


Could anyone tell me why we need self-driving cars?

Because in North America at least we've failed as a culture at urban planning and public transit, and, between zoning laws and social norms, we permit employers to ignore and externalize the costs of employee travel time.
posted by mhoye at 6:02 AM on March 30, 2021 [7 favorites]


Because in North America at least we've failed as a culture at urban planning and public transit.
I’d go stronger: in the United States we let those be coded brown/poor to the point where people actively vote against less miserable commutes or even multi-use trails because it might “attract criminals”, as if burglars aren’t bringing a pickup truck for your Peloton.

We aren't going to do a good job on climate change until we change that mentality, and it's really hard in much of the country, which has so much infrastructure deficit. The best thing that could happen would be $4/gallon gas - unlike in 2008, we're far enough down the EV path for that to actually help.
posted by adamsc at 6:13 AM on March 30, 2021 [3 favorites]


I’d go stronger: in the United States we let those be coded brown/poor to the point where people actively vote against less miserable commutes or even multi-use trails because it might “attract criminals”, as if burglars aren’t bringing a pickup truck for your Peloton.

I had a discussion with a friend from Phoenix who told me about the fierce opposition to the expansion of their light rail system, largely due to the perception that it will bring criminals to their neighborhoods. They seem blissfully unaware of the reality that a car is a superior tool for committing crimes out in the 'burbs because, for one, there is no shortage of roads leading there, which also offer multiple paths of escape. For another, a car can be purchased from a private party for as little as $400, and they don't even need to know your name in most states (and if they do, you can always make something up). After the commission of the crime the car can just be ditched or torched or something. Try that with a train, where every platform has CCTV and only a few places to get on or off.
posted by drstrangelove at 6:43 AM on March 30, 2021 [1 favorite]


people actively vote against less miserable commutes or even multi-use trails because it might “attract criminals

If you limit the definition of "less miserable commute" to mean "transit," sure, some people do shoot themselves in the foot as drstrangelove mentions. Even then, the majority (at least a small majority) of people aren't quite that stupid, as evidenced by the fact that Phoenix's light rail is getting built.

In any other sense, people constantly vote in overwhelming majorities for things that they believe, wrongly, will improve their commute. Unfortunately, in most cities most of the time that means more and wider roads. Thankfully, people are finally beginning to grasp that more and wider roads merely generate more traffic. They're still often skeptical of transit, but not because of racism - because they either don't believe it will be useful to them or because they've been burned more than once in the past by billions of dollars in bonds producing little to no visible progress, as happened in Miami (again) not long before I moved here.

I'd be interested to know where you've heard people claiming MUTs will bring criminals to the neighborhood, though. I've lived in some pretty overtly racist places that managed to build a bunch of trails without that particular canard coming up.
posted by wierdo at 7:16 AM on March 30, 2021


I've heard that sort of thing for years and from a variety of different parts of the country. My friend in Phoenix actually linked to an online news story about an expansion and there were multiple comments from the ambulatory head wound demographic claiming it would bring crime to their community. A friend in Minneapolis said she'd heard similar things about their light rail system when it was being built (which she uses every day.)

Speaking of adding lanes, a friend who lived out in LA said the biggest mistake they made was not building the freeways with more lanes. As if 12-lane highways weren't enough. Interestingly, one time when I was out there visiting and was stuck on the 210 going east to my friends' place (with them somewhere out in front of me in traffic), I got fed up, got off to use the restroom, and noticed that despite the total gridlock on the 210 (and the 10, based on the traffic reports), Foothills Blvd was moving along pretty nicely. We continued on that way and beat our friends by 10-15 minutes.
posted by drstrangelove at 8:28 AM on March 30, 2021 [1 favorite]


I forgot another one. Kansas City's light rail system is very much in its infancy but it is popular and there are plans to expand it. I was told by a haughty suburbanite that she's very much opposed to it ever extending to her comfortable suburb out in Johnson County (on the Kansas side and a very wealthy county) because it would bring "undesirables" and crime. All of this is hilarious given that the active core of the existing system is in Hipster Central. The only thing they'll bring out to her sterile suburb is better coffee, man buns and horn-rimmed glasses. (Actually they wouldn't go there at all as the heart of the city is a much nicer place than it was when I lived there in the 90s when it was a dead zone.)
posted by drstrangelove at 8:38 AM on March 30, 2021


I can't wait to start seeing defensive hacks against self-driving cars.

We've seen that self-driving cars can be trapped and fooled in a number of ways so far, including holding up a slightly modified sign. I didn't want to interrupt the thoroughly justified rebutting of the computational complexity question above to raise the adversarial input question, but it's a huge issue.

Myself, I'm looking forward to figuring out how lidar reacts to a cyclist wearing a mylar cape. Should be exciting.
posted by mhoye at 9:35 AM on March 30, 2021


I've heard that sort of thing for years and from a variety of different parts of the country

To be clear, I've heard that bullshit about all forms of mass transit, just never in the context of trails, bike lanes, and such.
posted by wierdo at 12:26 PM on March 30, 2021 [1 favorite]


wierdo:

If you limit the definition of "less miserable commute" to mean "transit," sure, some people do shoot themselves in the foot as drstrangelove mentions. Even then, the majority (at least a small majority) of people aren't quite that stupid, as evidenced by the fact that Phoenix's light rail is getting built.

Transit is definitely what I had in mind, but it was in the context of living in DC, where lots of people have miserable car commutes and any time you talk about things like express buses, bike lanes, etc., there are always people who say the solution is to add more car lanes, even though there is literally no more room to add capacity - not to mention the last century disproving the idea that you could conceivably add enough capacity to meet demand with such an inefficient mode of transportation. Even if you ignore traffic, we get the same dynamic for everything: people complain about every stop sign or signal requested by the neighbors, even though all any change ever means is that people sit stopped in the middle of the intersection rather than behind the stop line, because the average vehicle speed at rush hour is 2-3 miles per hour.

I'd be interested to know where you've heard people claiming MUTs will bring criminals to the neighborhood, though. I've lived in some pretty overtly racist places that managed to build a bunch of trails without that particular canard coming up.

I don't have a link handy, but it was one of the suburbs outside of Baltimore - I believe a rails-to-trails conversion - and it was just so absurd how people thought they were being subtle talking about a horde of "thugs" coming by bike. We've had similar concerns closer to us with Maryland's Purple Line for the same reason: all of the ritzy white-flight neighborhoods are Very Concerned about the risks, when the crimes in the news tend to be people stealing cars in one jurisdiction and using them to rob someone across state lines, so the police investigating the second victim waste time chasing down the first one.
posted by adamsc at 1:08 PM on March 30, 2021 [1 favorite]


This is crazy. What even counts as success here? Driving, especially in crowded cities like Oakland, is a mishmash of following certain rules, skirting others, reading other drivers' faces, guessing what pedestrians might do based on their demeanor, and a million other factors based on unspoken norms of the road. You can't program a consistent rule for what to do in a crowded unprotected left turn at rush hour, with buses and cars and pedestrians who themselves are not always following the rules. This Tesla junk should be reserved for rural roads only.
posted by wibari at 2:16 PM on March 30, 2021


So FSD is basically "drunken cruise control with an index finger on the wheel"? Got it. Good investment.
posted by turbid dahlia at 3:46 PM on March 30, 2021 [2 favorites]


It's basically like blockchain

"It's blockchain, but instead of mining for Bitcoins, it steers your car!"
posted by turbid dahlia at 3:51 PM on March 30, 2021 [2 favorites]


guessing what pedestrians might do based on their demeanor

Eye contact with pedestrians to acknowledge that (1) I see you and (2) I acknowledge your right-of-way is a wonderful thing.
posted by mikelieman at 4:22 PM on March 30, 2021 [6 favorites]


If you're not really a fan of Musk or of Tesla's plan for vehicle autonomy - but you ARE interested in the technology - then life can be tough, as many of the sources are curated by Tesla fans and Tesla stock investment fans. Nevertheless, I would recommend Dave Lee's conversations with machine learning specialist James Douma on the technicalities of how Tesla has been developing FSD. I'd also recommend AI Driver and Dirty Tesla if you want to see many hours of drivers taking their "FSD beta" cars on drives over the last 6 months.

I'd also recommend Nikki from Transport Evolved - who has asked Where are the Autonomous Cars that Elon Musk Promised us? - as part of a general review of the state of the technology.
posted by rongorongo at 5:14 AM on March 31, 2021 [1 favorite]


Now that I'm home from work and was able to watch the video: "Holy Crap that was terrifying". The driver's hope that 8.3 will fix all the 8.2 problems also seems irrationally optimistic.

One thing that strikes me is that the Tesla seems to have real problems parsing anything but the simplest road lines. I can't imagine it would be any better someplace like here, where road markings haven't been renewed for 9 months and have spent 5 months getting scraped off by plows, so they are more ghost suggestions of a line than the clear, well-delineated markings seen in the video.

I also wonder how the Tesla would handle the road grit that covers everything but travel lanes at this time of the year. Bet it would be as confused as it was by that double curbcut sidewalk.
posted by Mitheral at 2:17 PM on March 31, 2021 [1 favorite]


Mitheral, a guy on YouTube with a channel called 'savagegeese' talked about this. He said these systems will likely be rendered unusable during those winter months when road grime covers everything. He also went on to say that the FSD stuff is being pushed hard by big technology companies like Google, which has a financial interest in people being glued to their screens for even more hours of the day.

Bottom line, what I said earlier is still what bothers me most. Even if many of these problems are ironed out, how well will these systems still be working in several years? I know a few people whose cars' lane-keeping feature has already failed due to a bad sensor. These things are 6-7 years old. When these cars hit the secondary market in earnest, DIYers will start attempting to repair these systems themselves. Obviously the manufacturers don't want that, and it's going to be the ultimate justification against the right-to-repair movement. As someone who works on my own stuff, I'd ultimately rather drive my own car that I fix myself than be required to go to a stealer for service.
posted by drstrangelove at 3:46 AM on April 1, 2021 [3 favorites]


There is an obvious exploit against unattended autonomous vehicles. Get it to come to a stop by walking in front of it, and then place cardboard boxes in front of and behind it. The car isn't going anywhere without manual intervention.

Teslas will just laugh at your feeble cardboard boxes.
posted by flabdablet at 6:11 AM on April 1, 2021


So maybe some tin foil? My exploit did require functional autonomy, so it's filed for future reference.

That review was actually pretty good and exposed some things I wasn't aware of, like the self-drive feature being a 20% / $10k add-on.

Also, could you imagine the outrage if GM released a car that required rebooting twice during the filming of a review? You could power the car on the heat given off. That free ride, in a nutshell, is how Tesla was able to be the success that it is and not be considered a modern Yugo.
posted by Mitheral at 11:32 AM on April 1, 2021


That review was actually pretty good

Mehdi Sadaghdar is almost always good value.
posted by flabdablet at 12:32 PM on April 1, 2021


> If Musk and the other tech giants want to improve society, they should spend their money and creativity to get us away from oil. That should be their first priority. Are they doing that?

It is my understanding that all of the vehicles Tesla makes are electric and not powered by internal combustion engines, but please correct me if that's incorrect.
posted by fragmede at 1:54 PM on April 1, 2021


It is my understanding that all of the vehicles Tesla makes are electric and not powered by internal combustion engines, but please correct me if that's incorrect.

That is correct! As are some other things:

Tesla has also invested $1.5 billion in Bitcoin, which is currently burning through more power than Argentina while doing no useful work whatsoever.

Elon Musk's other main venture, SpaceX, burns hundreds of tons of high-carbon fuel per launch. If they achieve their goal of launches every two weeks, they'll be emitting 4,000 tons of CO2 per year from launches alone. A lot of that will be from space tourism for the 0.001%. They're also polluting Earth orbit, and the night sky, with thousands of Starlink satellites whose main market will be high-frequency traders and rich people's yachts.
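For scale, the arithmetic implied there is $26\ \text{launches/yr} \times \sim 150\ \mathrm{t\,CO_2/launch} \approx 4{,}000\ \mathrm{t\,CO_2/yr}$; the per-launch figure is the one implied by the totals, not a measured number.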

Oh but let's not forget Musk's contribution to public transit: Hyperloop, which... I'm sorry, I can't even.
posted by automatronic at 8:10 AM on April 2, 2021 [7 favorites]


I really want my next car to be fully electric, and Tesla's battery and motor tech is really better than anyone else's right now (although lots of other companies are catching up), but I'd never buy from them because Musk is such a wanker and I hate that they keep pushing this half-baked autonomous driving software.

I'll probably end up going with a Hyundai or VW.
posted by octothorpe at 8:23 AM on April 2, 2021


Also, let's be honest with ourselves: if "rolling coal" was the hip new tech for cars, Musk would have just called his company something like Centralia and would be selling monster trucks powered by Civil War-era steam engines.
posted by Glegrinof the Pig-Man at 8:23 AM on April 2, 2021


> Elon Musk's other main venture, SpaceX, burns hundreds of tons of high-carbon fuel per launch

...and showers the surface with trash.
posted by Bangaioh at 12:08 PM on April 2, 2021


Here's an old tell-all Twitter thread by a former Tesla engineer.

I’d like to think their tech has improved but I wouldn’t put money on it.
posted by bendy at 10:04 PM on April 2, 2021 [5 favorites]


Ugh, none of those web-framework technologies are robust enough for the real-time reliability that a car running at 70 MPH needs.
posted by octothorpe at 11:41 AM on April 3, 2021 [1 favorite]


"Move fast and break things" is not a development model I would bet my life on.
posted by inpHilltr8r at 1:55 AM on April 4, 2021 [1 favorite]


"Move fast and break things" is not a development model I would bet my life on.

If a company is to survive without advertising, it depends on people talking about its products a lot. My view is that Tesla's autonomy developments help considerably toward this goal. Current owners - and their passengers - may not be willing to shell out bucketloads for software, but they can see their car's improvements in rendering the reality around it as part of the vehicle's standard operation and built-in "auto-pilot" capability. The key technology here is not so much autonomy as over-the-air software updates, which allow things to evolve and people's interest to be maintained. Contrast that with most cars, which are exciting enough to talk about only when new (or broken). The trick in moving fast is not so much to break things as to capture people's attention.

For current Tesla drivers, I am not sure the question of whether their cars ever reach level 5 autonomy is all that important. They bought the car so that they could enjoy driving it themselves, and they are affluent enough not to have any financial need for the car to be moonlighting as a robo-taxi. Tesla's proposal to rent FSD capability for a limited time - rather than buy it - might be appealing for those taking a longer journey, but as a driver-assist function and a curiosity.

The people who might want to buy real autonomous vehicles are, I'd argue, a completely different group from current Tesla drivers: they are aspiring fleet owners who would want vehicles that make them money operating in a particular area. Tesla's current promise of "full autonomy just so long as there is somebody behind the steering wheel ready to take over" is useless to them; they could just hire drivers to do that. Equally, they are going to want assurance that the cars are licensed to operate in pretty much any part of their area under pretty much all conditions. Finally, they will want to assure their lawyers, their insurers, and their investors that they are not going to be bankrupted if one of their cars gets into an accident.

So Tesla's full self-drive developments can serve the company - and its drivers - quite well in the short term as a gizmo to talk about. That will get them through the next couple of years. The point where they start serving fleet operators (or setting themselves up as such) is very much trickier.
posted by rongorongo at 2:29 AM on April 5, 2021 [2 favorites]


Don't forget the other big scam with Tesla and FSD.
The $10,000 FSD license is worse than DLC for a cell phone game.
It is tied to the specific person and the specific car, so if either of those changes it needs to be purchased again.
Sell the car to someone else? They need to buy FSD again for the same car.
Your car's battery spontaneously sets itself on fire? You need to re-buy FSD for your new car.
posted by Iax at 12:22 PM on April 5, 2021 [3 favorites]


Minus the non-transferability, a friend owned a pickup like this. It wasn't sold with remote locks or power windows. He bought the power window motors and door skins, but then the dealership charged him a couple hundred dollars to provide an encrypted code to enable the windows in the ECU and activate the key fob.
posted by Mitheral at 12:29 PM on April 6, 2021




This thread has been archived and is closed to new comments