I think I can scratch a self-driving car off my Christmas list for this year…and for every year. I can always use more socks, anyway. The Washington Post (owned by another tech billionaire) has a detailed exposé of the catastrophic history of so-called autonomous vehicles.
Teslas guided by Autopilot have slammed on the brakes at high speeds without clear cause, accelerated or lurched from the road without warning and crashed into parked emergency vehicles displaying flashing lights, according to investigation and police reports obtained by The Post.
In February, a Tesla on Autopilot smashed into a firetruck in Walnut Creek, Calif., killing the driver. The Tesla driver was under the influence of alcohol during the crash, according to the police report.
In July, a Tesla rammed into a Subaru Impreza in South Lake Tahoe, Calif. “It was, like, head on,” according to a 911 call from the incident obtained by The Post. “Someone is definitely hurt.” The Subaru driver later died of his injuries, as did a baby in the back seat of the Tesla, according to the California Highway Patrol.
Tesla did not respond to multiple requests for comment. In its response to the Banner family’s complaint, Tesla said, “The record does not reveal anything that went awry with Mr. Banner’s vehicle, except that it, like all other automotive vehicles, was susceptible to crashing into another vehicle when that other vehicle suddenly drives directly across its path.”
Right. Like that ever happens. So all we have to do is clear the roads of all those other surprising vehicles, and these self-driving cars might be usable. That’s probably Elon Musk’s end goal, to commandeer the entirety of the world’s network of roads so that he can drive alone.
Speaking of Musk, he has a long history of lying about the capabilities of his Autopilot system.
Tesla CEO Elon Musk has painted a different reality, arguing that his technology is making the roads safer: “It’s probably better than a person right now,” Musk said of Autopilot during a 2016 conference call with reporters.
Musk made a similar assertion about a more sophisticated form of Autopilot called Full Self-Driving on an earnings call in July. “Now, I know I’m the boy who cried FSD,” he said. “But man, I think we’ll be better than human by the end of this year.”
Lies. Lies, lies, lies, that’s all that comes out of that freak’s mouth. If you want more, Cody has a new video that explains all the problems with this technology. I know, it’s over an hour long, but the first couple of minutes contains a delightful montage of Musk making promises over the years, all of which have totally failed.
Can we just stop this nonsense and appreciate that human brains are pretty darned complex and there isn’t any AI that is anywhere near having the flexibility of a person? Right now we’re subject to the whims of non-scientist billionaires who are drunk on the science-fantasies they read as teenagers.
microraptor says
Even if self-driving cars could deliver as promised (and they can’t), you’d need to have the majority of vehicles on the roads be self-driving for it to be effective. And that still wouldn’t solve the real problem of too much traffic, which has no actual solution beyond “get fewer vehicles on the road.”
PZ Myers says
Trains. Trains are the answer.
beholder says
Ah yes, I’m sure that sentiment will age well and won’t be disproven within the decade. Or will we just classify everything artificial intelligence does as “No, that’s not AI anymore, even a machine can do that”?
Self-driving cars don’t have to be perfect, they just have to be better than humans at driving. I’m still optimistic about the technology reaching that level pretty soon, even if I’m less optimistic about lawyers tying things up in court indefinitely and software devs closing their proprietary tech to any outside analysis.
@1 microraptor
Yes, depopulating the Earth will reduce our dependency on cars, but I don’t think humans will accomplish that except by accident. In the meantime, any tech that results in far fewer road deaths is welcome.
robro says
You’re right, PZ. The answer is trains. Human-driven cars also slam on the brakes at high speed for no apparent reason, etc. It’s not surprising that AI-driven cars aren’t so great, since they are approximating human behavior. It is possible that AI-driven cars can “learn” to not act like humans…unlike humans, they have the potential to learn from mistakes…but that depends on the amount of engineering effort and investment put into it. Frankly, I wouldn’t trust a Musk car with a human at the wheel.
jimf says
@3 Beholder: Self-driving cars don’t have to be perfect, they just have to be better than humans at driving.
This. And once this happens, keep your eyes on the car insurance companies.
Personally, I look forward to when self-driving cars exceed the performance of human drivers, because, quite simply, I am sick of people who do not use turn signals, who do not come to a stop before “right on red”, who tailgate, who exceed the speed limit by 20 MPH or more, who do not turn on their lights in the rain (a law in NY), etc. Many drivers have pretty much no courtesy on the roads.
robro says
jimf @ #5 — You set a low bar, of course.
microraptor says
beholder @3: We don’t need a population crash, we just need to stop building our infrastructure to be hostile to everything but privately-owned motor vehicles and provide robust networks of public transportation so that car ownership is no longer a necessity for the majority of the population.
And, as PZ said, trains. Honestly, it’s almost mind-boggling how efficient trains are when it comes to long-distance transportation.
raven says
Don’t trust Elon Musk at all, ever.
Ask the US government and US military, which paid SpaceX $15 billion in grants and contracts, only to have Elon Musk side with our enemy, the Russian Federation.
SpaceX wouldn’t exist without NASA and the US government.
One of Musk’s breeding stock, Grimes, is now suing him.
“Grimes claimed in a since-deleted tweet in September that Elon wouldn’t let her see one of her sons.”
It’s not too clear what is going on here but it looks pretty weird.
I think this third son was born by surrogacy and Grimes doesn’t even have possession of the baby. She might not have even known it was born if there is a collection of frozen embryos somewhere that Musk can access.
Each to their own but I would put Musk as close to the last person in the universe I would want to have children with.
I quite literally am incapable of imagining what Grimes or his new baby mom Shivon were thinking when they got together with Musk.
I’d ask for my gametes back but it doesn’t work that way.
numerobis says
You should be proud of yourself for coming up with an argument every bit as cogent as the “look, Darwin was wrong about a specific thing, therefore evolution is disproved” argument you like to lampoon.
Tesla is not the only car manufacturer. Musk is not the only car executive in the world. Dr Raoult shilling hydroxychloroquine doesn’t invalidate the entire field of medicine any more than Musk being a fabulist invalidates the field of autonomous driving.
Lots of companies have SAE level 2 autonomy systems, aka ADAS. Tesla was among the first, adding it as a luxury item, but it’s increasingly becoming a standard item on all cars. Tesla calls it Autopilot (TM), Nissan calls it ProPILOT, Honda calls it Honda Sensing, VW calls it Travel Assist. They all basically offer the same thing.
The experience with early systems like Tesla’s showed that they’re too good at the routine stuff, so humans tune out. That forces the manufacturer to check and make sure the driver is paying attention, and if not, switch the system off. Most manufacturers learned that lesson, so you don’t hear about their systems being involved in crashes all that much. Tesla built an early version of such a system but it’s not very good, and they steadfastly refuse to improve it.
That’s not a general problem with the entire technological concept, it’s a problem with one manufacturer and the lack of effective regulation that’s let them get away with it for years after the lessons were already learned.
Indeed, the technology is likely to be increasingly mandated “soon” (on regulatory timescales). SAE Level 0 features — lane departure warnings and automated emergency braking — are proven to reduce crashes and crash severity.
Other examples you don’t hear much about:
Mercedes has an SAE Level 3 system in cars that people can buy (if they have lots of money). That means drivers actually can tune out safely in some circumstances, then get adequate warning that it’s time to wake up and take the wheel again.
Waymo operates an SAE Level 4 taxi service. They’ve been giving rides to the public for 6 years, without a safety driver for 3 years. Level 4 meaning it has to stay within the cities it operates in and can’t handle conditions that are out of bounds, but within those bounds, there’s no driver at all (and if the car starts to leave the bounds, it stops safely). Not much news about that one, because they tend not to even have minor accidents. In the same city where Waymo launched, Uber did get in the news for turning off all the safeties and, after a lot of fender-benders, ended up killing someone.
lochaber says
Recently, one of the autonomous cabs in San Francisco ran over a fallen pedestrian, and then parked on top of her leg.
I think some day, autonomous vehicles will be safer than human drivers, but the testing process needs more oversight and regulation. Corporations shouldn’t be allowed to test them in the open, on the general public.
birgerjohansson says
Numerobis @ 9
How do the systems make sure the drivers are paying attention? I would certainly need something like that in a normal, not-self-driving car on a three-hour drive. A loud buzzer that sounds when I pay too much attention to the radio, for instance.
birgerjohansson says
Lochaber @ 10
The AIs are not supposed to be trained on “Death Race 2000”.
HidariMak says
numerobis @ 9
You beat me to it. Musk has a tendency to treat humans as lab rats. Tesla owners are the lab rats for his supposed self-driving cars, just like those signing up for Musk’s brain implants are his lab rats for brain surgery. I’m not sure of the stats on the failure rates among the companies involved in self-driving cars, but whenever a failure makes the news (at least from what I’ve seen), it’s the latest failure in a Tesla.
tacitus says
I can’t watch more than a couple of minutes of this guy, but obviously self-driving technology isn’t remotely close to ready yet, and won’t be for a good number of years, but like some others in the comments, I can’t see how they won’t become a reality eventually. AI is still in its infancy, and “never” is an extremely long time. I am still hoping they become a reality in time for when I need them — about 20 years with luck — but I’m not as optimistic as I once was.
Around 42,000 Americans die on the roads each year, and if self-driving vehicles ever reach the point where they can take a significant bite out of that number, then we’re going to have to face up to the fact that even though self-driving technology will have proved itself safer than human drivers, the news will still regularly include headlines reporting people dying in accidents caused by self-driving vehicles.
The question is, how much safer than human drivers will self-driving vehicles have to be before people will be willing to share the roads with them?
Nathaniel Hellerstein says
Self-driving cars are superior to humans… in foreseeable situations. But of course accidents are by definition unforeseen. Self-driving cars are programmed to be perfect, so when things go wrong, their response is perfectly wrong.
This is a special case of the 90-10 Rule: humans are 90% useful, and 10% useless, 100% of the time; but robots are 100% useful 90% of the time, and 100% useless 10% of the time.
Don’t forget the deep-pockets factor. If a human plows into a crowd, then that’s only one person to sue penniless; but if a robot plows into a crowd, then that’s an entire corporation to sue penniless.
tacitus says
I don’t disagree, but how realistic is that answer in the USA today (or in the future)? Even in better-served, more compact countries, like the UK, the rail networks have been “streamlined” (i.e. cut) since the 1960s.
After all, vegetarians argue, with significant merit, that ending the consumption of meat would be extremely beneficial in our battle to prevent global warming…
Nathaniel Hellerstein says
… therefore I predict Mass Litigation Events.
tacitus says
That one is easily solved. Legislation will protect the manufacturers from being sued into oblivion.
Cynical, but true.
Trickster Goddess says
Even better than just trains: self-driving trains. They have been running in Vancouver for almost 40 years now. Imagine what we could have if they took the hundreds of billions of dollars spent on developing self-driving cars and invested it in public transit instead.
microraptor says
Trickster Goddess @19: That’s Muskrat’s nightmare, right there. Which is why he’s spent so much time and money killing things like the proposed high-speed rail connection between Vegas and LA.
cartomancer says
Now hold on a moment, it might be possible to solve the problem of too much traffic by tweaking cars a bit. We don’t have to explore funny, unpatriotic, godless non-car solutions!
How about we up the passenger numbers in cars? Like, instead of four people, make cars that fit 20 or 30. If more people are going the same route, you could string several together to save on engine power. Then one person could drive hundreds! Just think, strings and strings of big, long cars where you don’t have to do the driving yourself. We could even make special buildings that the new strings of long cars stop at regularly, so you know there will be one along when you need it. If they take off, we could even make special narrow roads just for these things, to make the whole process more efficient.
jo1storm says
Numerobis @ #9, did you even watch the darn video? It specifically mentions Waymo.
So many falsehoods. First, it doesn’t stop safely, and then it doesn’t start safely after it stops. In one example, it blocked a whole lane in San Fran for over twenty minutes with journalists in the back seat. Second, that level 4 touted so much? A blatant fucking lie, because guess what?! There’s no certifying body to check those claims. Third, they are abusing the laws and reporting shit that happens (as mentioned in the video). About the only thing they actually have to report is if they killed somebody. Which they did, once. Fourth, their only warning that this dangerous machine might act like a crazy human is a “This car might brake suddenly” sticker stuck on the trunk of the car. Fifth, they had to halve their fleet and shouldn’t have gotten permission to test it in the first place. Because it is damn dangerous. Just the fact that ambulances couldn’t get somewhere in time thanks to a damn robo car is, for me, proof that they indirectly caused some deaths.
Seriously, watch the video.
jo1storm says
Should be “abusing laws when reporting shit that happens”.
Alan G. Humphrey says
It’s been about 35 years since I first heard the joke about how optimists predicted AI would take over all jobs in 30 years, while pessimists predicted it would be in 20 years. It takes a human about 5 years of continual interaction with their environment to achieve the most basic competence in walking around, and we expect a similar competence to be programmed into vehicles in a lab. The reason it hasn’t worked yet is that the problems have not yet been correctly defined. Seeing a cat run across the yard, a bird fly away as you run toward it, grocery carts colliding in the store as you accompany a parent shopping, and all the experiences absorbed by being a passenger in a car during the first 15 years of life are directly useful in driving safely. The problem isn’t algorithms; a ChatGPT-level system can probably be trained with billions of driving samples. It’s building a computing device a tenth the size and complexity of a human brain that is efficient enough to run on only a few kW and that only doubles the cost of a vehicle. Then there will be competent self-driving vehicles.
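To put rough numbers on that efficiency gap, here’s a back-of-envelope sketch; the wattages are approximate public figures I’m assuming, not anyone’s spec sheet:

```python
# Back-of-envelope power comparison (rough assumed figures,
# nothing vendor-specific): the efficiency gap in question.
brain_watts = 20       # human brain's approximate power draw
gpu_watts = 700        # one datacenter-class GPU, roughly
budget_watts = 2_000   # "a few kW" of onboard compute budget

print(f"~{budget_watts / gpu_watts:.0f} big GPUs fit in that onboard budget,")
print(f"vs. a brain doing the whole job on ~{brain_watts} W:")
print(f"a {budget_watts / brain_watts:.0f}x larger power budget for far less capability.")
```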
Artor says
To be fair, humans can be pretty shitty drivers too, but if you’re going to build a robot to replace them, you get to take responsibility for that robot. “Better than human” is a pretty low bar, and won’t protect Tesla or Musk against the fully-justified lawsuits.
david says
All I’ve seen are inadequate statistics. I’ve never seen a comparison of accidents per mile driven for self-driving cars (in self-driving mode) compared to the same figure for human-driven cars, adjusted for relevant covariates (car type, trip distance, urban/highway setting, etc). But, without data, enthusiasts say “it’s safer” or “it will soon be safer”. Making those statements is almost as irresponsible as unleashing untested technology on streets where children walk.
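To illustrate the kind of comparison I mean, here’s a minimal sketch with placeholder numbers; every figure below is made up, because the real exposure-adjusted data is exactly what we never see:

```python
# Hypothetical sketch of an exposure-adjusted crash-rate comparison.
# Every number below is a placeholder, NOT real data.
import math

av_miles, av_crashes = 5e7, 120               # self-driving mode (placeholder)
human_miles, human_crashes = 3e12, 5_250_000  # human-driven baseline (placeholder)

av_rate = av_crashes / av_miles
human_rate = human_crashes / human_miles
print(f"AV:    {av_rate * 1e6:.2f} crashes per million miles")
print(f"Human: {human_rate * 1e6:.2f} crashes per million miles")

# Crude z-test on the log rate ratio (normal approximation).
# A real analysis would use Poisson regression with log(miles)
# as an exposure offset, adjusting for the covariates above
# (car type, trip distance, urban vs. highway).
log_rr = math.log(av_rate / human_rate)
se = math.sqrt(1 / av_crashes + 1 / human_crashes)
print(f"rate ratio = {math.exp(log_rr):.2f}, z = {log_rr / se:.1f}")
```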
wzrd1 says
It would probably have helped had they abandoned the early software used for their autopilot: the software for antiaircraft missiles.
Seriously though, the software is now highly advanced, equal to a heavily intoxicated human driver. It’s a shame we never passed laws against drunk drivers.
But, corporate types keep promising self driving cars and flying cars. Because, if they can’t drive worth a fuck on the ground, they’ll do even better over our rooftops.
Mental note: invest heavily in roofing businesses.
And watch out for cars with Pep Boys driver’s licenses, acquired from the Helen Keller School of Driving.
They shoot horses, don’t they? Where does one sign up…
All, in the holy name of Jesus’ scrotal contents or something.
Ramen.*
*OK, stew that, had an extra chicken leg quarter I didn’t have freezer space for, so I made a pot of stew with it. Fresh veggies, tons of garlic and onion, figure that was 1/3 of the pot, then tossed the dead bird parts into it, filled with water and some barley, added some noodles in the last 20 minutes. Scalded tongue and stuffed belly with the stew and some rye bread, ready for my food coma.
And thankful I’m on the 8th floor, where no self-driving menace can reach. In a building that already withstood a drunk driver’s attempt to drive through in an SUV.
Reginald Selkirk says
Electric car ‘kidnaps’ owner in terrifying ordeal: ‘It won’t stop’
Jaws says
What’s worse is that virtually all of the development is being done within 100km of Palo Alto, California.
Meaning that they’re putting AI California drivers behind the wheel… trained in the most illogical part of California’s roadways, around a serious literacy problem (can’t read eight-sided red signs at the side of the road).
This is parallel to one of the errors made when autolanding systems were being developed for aircraft: Almost all of the work was done in places that were at/near sea level. This proved problematic (for complex aerodynamic/airflow and thrust-to-lift relationships) at airports more than 1100 meters above sea level… like Denver.
The above is only partly tongue in cheek.
gijoel says
As I’ve commented elsewhere, a lot of tech-bros’ disruptive technology seems to be powered by their deep antipathy to sharing space with other human beings.
John Morales says
Um, this all seems sorta pointless. I honestly don’t see the problem.
Not like one has to concede to the car and not be able to drive manually, right?
Obs, Teslas can be driven manually. Nobody is forcing owners to go hands-free.
(BTW, it’s a truism in the software industry that 90% of users only use 10% of the features)
I don’t even have a car!
So, no. We are not, given values of ‘we’ that include me, and if this is the basis for that claim.
Silentbob says
@ ^
‘We’ meant the non-hyperliteralists.
gijoel says
@31
Tell that to the guy who died watching Harry Potter in his Tesla. A lot of people don’t like driving and would be ecstatic to have a car that they don’t even have to share with a driver, much less another passenger.
Besides, the cult of Elon seems to believe everything he says will be true. No matter how unlikely, impracticable or unfeasible his ideas are. Remember Elon’s subway for cars.
John Morales says
Silentbob, how many times now have I asked you what you imagine the difference to be between mere literalism and your supposed hyperliteralism?
At least a few, no?
Pray tell, how do you imagine non-hyperliteralists are subject to the whims of non-scientist billionaires whereas non-non-hyperliteralists are otherwise?
—
Of course, as ever, you do not dispute anything I’ve written.
(You can label me however you want, but people can read for themselves what we’ve each written and how those writings relate to each other)
John Morales says
gijoel,
Um, first of all, if he’s dead, I can’t actually tell him that.
More substantively, do you imagine he actually had “to concede to the car and not be able to drive manually”?
(Heh. Dedicated to ObsessiveBub)
They would, would they? Fair enough.
Point being, Teslas are still able to be driven manually.
This is a fact.
That was the very point!
Well, you’re not part of “the cult of Elon”, I’m not part of “the cult of Elon”, and so forth.
It’s only a problem for the cultists.
I think Tesla cars are hardly unlikely, impracticable or unfeasible.
(Want some stats on that?)
Wow. You’ve sure debouched way way away from what you quoted.
I know you used it as a springboard, but you’ve sprung beyond the universe of discourse.
(And I like how you acknowledge this is about Elon Musk in particular, whatever obfuscations and vagaries have been applied)
—
Perhaps I was too allusive and indirect; let me rectify that.
Point is, those cars can be driven manually.
No-one is beholden to automated driving.
wzrd1 says
@Jaws, yeah, totally autoland was autocrash, early on.
And that’s exactly what this shitware is offering.
John keeps shilling for it regardless.
The fucking shit acts like an AIM-9 aimed at emergency vehicles.
As I said, total shitware.
Makes TMI look like a cupcake.
And I live less than three miles from that shut down plant.
erik333 says
@35 John Morales
Until such time as the button can be pressed safely, the button shouldn’t exist. You wouldn’t make TV remotes where one button sets off the hand grenade inside; simply labeling a button “don’t press!” isn’t good enough.
As to how dangerous (or not) the “self driving” is, is there even any reliable data?
John Morales says
Um.
“Tesla’s $15,000 Full-Self Driving software has always been somewhat controversial. Critics have been quick to point out that regular Autopilot offers virtually all the driver assistance features you’ll need on a daily basis. And, if you ever wanted to summon your Tesla (which would be rare) or have it change lanes for you (a bit more useful) Enhanced Autopilot can do so for an extra $6,000.
[…]
Tesla has sold 1.5 million vehicles in North America, meaning roughly 19% of customers opted for FSD. As mentioned above, the system used to cost much less – initially $5k, prices incrementally rose to $10k in 2020 and $15k in 2022. Despite the extortionate price hikes, FSD has only made minor improvements since 2018 and is far from the “full-self-driving system” its name suggests.”
(https://insideevs.com/news/629094/tesla-how-many-buy-fsd/)
hemidactylus says
Yeah the self-driving stuff is scary, but there is an absolutely insane aspect to the Tesla Plaid that is way underappreciated. I prefer Hayabusa on the Autobahn porn myself, but Plaid acceleration porn is interesting too:
Hayabusa 0-60 in the 2.8 sec range. There are quicker bikes.
Tesla Plaid: around 2 sec!
Hayabusa porn is still much more fun to watch than vanilla Plaid porn:
Teslas don’t raise their front wheels or growl. Still, it’s insanely impressive, setting aside the downside of self-driving. That sort of acceleration might make self-driving even scarier!
Howard Brazee says
Setting aside that Musk has always lied, I do like self-driving features.
I remember when science fiction had robots sitting behind the wheel and driving cars. It didn’t predict that we would have incrementally smart cars. Cars can parallel park. Cars can let us out and then park in a tight garage. Cars can watch to see if we are backing over a tricycle that a child is riding. Cars can stop us from changing lanes into an occupied blind spot.
They will continue to get better and more useful — incrementally.
Kagehi says
To be clear, the guys who originated the idea, i.e. DARPA, don’t consider their own systems road-ready except under specific conditions and with user supervision, and those systems have ten times as many sensors and do not rely completely on freaking cameras to detect obstacles and movement around them (as current consumer self-driving cars do). Someone somewhere had to have either bribed someone, or ignored reality, to allow “general, non-specific use” of self-driving vehicles. Probably, though sadly not necessarily (we have pseudo-leftist pro-corporate “libertarians” on the other side too), a Rethuglican, ironically, since they tend to be the ones who go, “Experts? Bah. What do experts know?”
Autobot Silverwynde says
Unless that self driving car comes from Cybertron, I refuse to trust it.
tacitus says
@22:
That’s more warning than you get when you’re getting into a taxi or drive-share vehicle. You don’t know who you’re going to get, what type of driver they are, or what state of mind they’re in. It boils down to trust either way.
chesapeake says
The recent book “Ludicrous” shows Musk as responsible for deaths from self-driving Teslas, and shows him to be a bad guy.
robro says
Self-driving is not ready for the streets without a human watching and ready to drive. The systems need a lot more training. Fortunately, self-driving has limited availability, restricted to a few companies, such as Waymo, and a few people with more money than good sense. And if they want to test out fully-autonomous driving, there are safer venues than urban streets, such as designated parking lots. I’m confident that fully-autonomous vehicles will be here…some day. I think Waymo’s program was doing OK, but they jumped the gun trying to monetize too soon.
Raging Bee says
Self-driving cars don’t have to be perfect, they just have to be better than humans at driving.
The problem with that logic is that even if a self-driving car consistently does dangerous stupid thing X, advocates of the technology can always say “yes, but it doesn’t do dangerous stupid things Y and Z that human drivers are prone to do, therefore it’s safer than human operators.” You’ll never be able to plausibly claim self-driving cars are safer than human-driven cars unless/until you can come up with a meaningful metric for “safety” and use it consistently in all instances of real-world operation.
Another thing to consider here is that organic human brains have been evolving to move about, maneuver and respond to outside events, much longer than they’ve been evolving for all the stuff we call “higher cognition.” Computers are mastering all the higher-cognition stuff, from chess to accounting to medical diagnostics, but at this time at least, we really have no grounds to think they’ll master the more animal functions of moving about while perceiving and adjusting to nearby events.
Furthermore, a machine that masters moving about may well have to be wired more like a human brain — in which case we’d have no reason to trust it to work better than a human brain.
garydargan says
And what happens when people get his dodgy brain implant and computers start driving humans? Self-driving cars cause enough carnage as it is.
beholder says
@46 Raging Bee
I’ve got bad news for you: putting that much momentum in a big rolling metal container is always going to be dangerous. It’s equally reasonable to compare tradeoffs in automated control systems as it is to make the observation that you’re far less likely to die in an airplane than you are driving to the airport.
Insurance companies do this all the time. Your point?
Humans have not had nearly enough time to evolve in response to driving automobiles. I don’t think that point even needs to be addressed, other than to point out that we’re terrible at driving and that, like almost all other terrestrial species, we too get killed all the time by human drivers when we cross the road.
I have no reason to assume a computer theoretically optimized for driving resembles a human as a whole. I have no idea how it would operate in detail. You’re confusing something humans do with something humans are both naturally and functionally well-suited to doing, when all available evidence indicates neither is the case for driving cars.
Raging Bee says
I’ve got bad news for you: putting that much momentum in a big rolling metal container is always going to be dangerous.
So what? That doesn’t invalidate anyone’s objections to self-driving cars.
It’s equally reasonable to compare tradeoffs in automated control systems as it is to make the observation that you’re far less likely to die in an airplane than you are driving to the airport.
In the former case, are you actually comparing the tradeoffs? Or are you just making an assertion? The latter claim seems to be backed up by actual statistics; I have yet to see stats supporting the former claim.
birgerjohansson says
AGIs may first come into existence because humans are sick of the traffic.
And AGI robots may first come into existence for the sex industry.
Various late SF authors that specialised in satire will laugh in their graves.
DanDare says
I went to an AI conference in Brisbane a few weeks ago.
One presenter was talking about how irrational people were in their fear of tech.
For self-driving cars, he pointed out how many people die in car accidents each year, about 50k, and how many die due to AI drivers, about 2 or 3.
Sound legit? No.
He didn’t take into account the billions of safe car trips by manual drivers vs the hundreds only by AI. The AI comes out on top as a killer.
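To make the base-rate problem concrete, here’s a toy calculation; the figures are rough illustrations I’m assuming, not the presenter’s actual numbers:

```python
# Toy base-rate check: raw death counts mean nothing without exposure.
# All figures below are illustrative placeholders.
human_deaths = 50_000
human_miles = 3e12   # roughly, US vehicle-miles travelled per year

ai_deaths = 2
ai_miles = 5e7       # a tiny fleet means tiny exposure (placeholder)

human_rate = human_deaths / human_miles
ai_rate = ai_deaths / ai_miles
print(f"human: {human_rate * 1e8:.1f} deaths per 100M miles")
print(f"AI:    {ai_rate * 1e8:.1f} deaths per 100M miles")
# With these made-up numbers the AI rate is more than double the
# human rate, even though its absolute death count is 25,000x smaller.
```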
Tabby Lavalamp says
The goal absolutely should be fewer cars period, but between manufacturer lobbying and individuals crying “my freedom!” there isn’t much hope of that happening.
Anyway, maybe once people start writing bug-free software…
jo1storm says
@43 “That’s more warning that you get when you’re getting into a taxi or drive-share vehicle. You don’t know who you’re going to get, what type of driver they are, or what state of mind they’re in. It boils down to trust either way.”
Yup, but AI doesn’t have to pass the test to get a driver’s license. Or “taxi test”, to be considered a cab driver (I live in Europe, not everyone can be a cab driver here). Thus, I don’t trust it. I also don’t use drive-share apps or vehicles, because I don’t trust “the stranger on the street”. Yet, I still trust them more than AI just because AI wasn’t forced to pass a driving test before being allowed on the road behind two tons of machinery.
numerobis says
jo1storm: you claimed Waymo has killed someone. I can’t find any evidence of that. Uber did, as I mentioned. Everyone in the industry knew that Uber was reckless.
As for your claim of the Waymo car not safely stopping, everything you described is a car that safely stopped. It didn’t crash into anything. The occupants and other people were inconvenienced but not hurt, and there was no property damage.
No doubt having too many cars on the roads means there’s going to be traffic jams that slow down emergency response. That’s not something particularly addressed by self-driving.
As for whether I watched the interminable video, no, I can’t stand his style.
numerobis says
As for “AI doesn’t have to pass the test” — regulators everywhere, even in the US, disagree. I mean sure individual AI units don’t have to pass the usual driver’s test, but what would that show anyway? Anyone selling a self-driving system, or even a dumb car, needs regulatory approval.
jo1storm says
First of all, there were people driving behind that car. It “safely” suddenly stopped and “safely” blocked a whole lane, thus endangering every other driver trying to overtake it. I don’t call that safe. If I tried to switch lanes and then failed so catastrophically that I had to brake suddenly, stop, and block the lane for 20 minutes, you’d call me crazy at worst and a horribly incompetent driver at best. I did say they INDIRECTLY killed someone by blocking the road. Yes, there will be traffic jams, but in the video example you refused to watch we have at least 6 Waymo cars stopping at the same intersection for no reason and causing a traffic jam all on their own. So the correct question to ask is: would there be more traffic jams with Waymo cars on the street, or fewer? And the answer at this point in time is many more. More traffic jams = slower emergency response = more people dead. Causation is clear.
I dare say there is a difference between selling a car that won’t explode or catch fire for no good reason and selling a self-driving robot car that will aim for a pedestrian if you give it a chance. One is very well regulated, the other is not. They really should make those cars pass a driver’s license test and not just trust the car manufacturers’ word.
numerobis says
Human drivers sometimes double-park. That causes traffic to snarl up. By your logic, we can immediately conclude that Waymo saves lives by allowing emergency services to operate faster, because humans double-park.
That would, obviously, be a pretty stupid argument. You need to compare two alternatives to make a determination of which is better, you can’t just say that one has a problem therefore the other is better. But it’s equivalent to the argument you’re making.
jo1storm says
No, it is not. You do realize that humans double-parking has no influence on traffic flow, while parking in the left (fast) lane of the street for twenty minutes does? Isn’t it your argument that AI is better than humans at this driving thing?
Watch the goddamn video. You can compare two situations, with Waymo on the street and without Waymo on the street. You would literally have one less traffic jam and gridlock, because Waymo caused it for no reason except “slow internet in the area”.
https://m.youtube.com/watch?v=_bECwMbG2wo&pp=ygUJV2F5bW8gamFt
jo1storm says
And another one: https://www.youtube.com/watch?v=iVQL99P7ru0