WaPo journalist verifies that robotaxis fail to stop for pedestrians in a marked crosswalk 7 out of 10 times. Waymo admitted that it follows “social norms” rather than laws.
The reason is likely to compete with Uber, 🤦
Wapo article: https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/
Cross-posted from: https://mastodon.uno/users/rivoluzioneurbanamobilita/statuses/113746178244368036
this is not on Waymo. it’s on the absolute sold out pieces of shit that allow Waymo and other cunts like Elon to experiment with human lives for money.
I work in a field related to this, so I can try to guess at what’s happening behind the scenes. Initially, most companies had very complicated non-machine-learning algorithms (rule-based/hand-engineered) that solved the motion planning problem, i.e. how should a car move given its surroundings and its goal. This essentially means writing what is comparable to either a bunch of if-else statements, or a sort of weighted graph search (there are other ways, of course). This works well for, say, 95% of cases, but becomes exponentially harder to make work for the remaining 5% of cases (think drunk driver or similar rare or unusual events).
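To make the “bunch of if-else statements” idea concrete, here is a minimal, purely illustrative sketch of a rule-based planning layer. The scene fields, thresholds, and manoeuvre names are all invented for the example, not anything a real AV stack (Waymo’s or otherwise) actually uses:

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified scene description; real planners
# consume far richer state (maps, tracked agents, predictions, etc.).
@dataclass
class Scene:
    ego_speed: float               # m/s
    dist_to_crosswalk: float       # metres to the nearest marked crosswalk
    pedestrian_in_crosswalk: bool
    dist_to_stop_sign: float       # metres, or float("inf") if none ahead
    lead_vehicle_gap: float        # metres to the vehicle ahead

def rule_based_plan(scene: Scene) -> str:
    """Pick a high-level manoeuvre from hand-written rules.

    Each branch encodes one traffic rule explicitly, which is why this
    style is easy to audit but hard to scale to rare, ambiguous scenes.
    """
    if scene.pedestrian_in_crosswalk and scene.dist_to_crosswalk < 30.0:
        return "STOP_BEFORE_CROSSWALK"
    if scene.dist_to_stop_sign < 20.0:
        return "STOP_AT_SIGN"
    if scene.lead_vehicle_gap < 10.0:
        return "FOLLOW_AND_KEEP_GAP"
    return "PROCEED_AT_SPEED_LIMIT"
```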
Solving the final 5% was where most turned to machine learning - they were already collecting driving data for training their perception and prediction models, so it wasn’t difficult at all to repurpose that data for motion planning.
So when you look at the two kinds of approaches, they have quite distinct advantages over each other. Hand-engineered algorithms are very good at obeying rules - if you tell one to wait at a crosswalk or obey precedence at a stop sign, it will do that no matter what. They are not, however, great at situations with higher uncertainty/ambiguity. For example, a pedestrian starts crossing the road outside a crosswalk and waits at the median to let you pass before continuing on - it’s quite difficult to come up with a one-size-fits-all rule to cover these kinds of situations. Driving is a highly interactive behaviour (lane changes, yielding to pedestrians, etc.), and rule-based methods don’t do so well with this because there is little structure to the problem. Some machine-learning-based methods, on the other hand, are quite good at handling these kinds of uncertain situations, and Waymo has invested heavily in building them up. I’m guessing they’re trained with a mixture of human data + self-play (imitation learning and reinforcement learning), so they may learn some odd/undesirable behaviours. The problem with machine learning models is that they are ultimately a strong heuristic that cannot be trusted to produce a 100% correct answer.
I’m guessing that the way Waymo trains its motion planning model, or a bias in the training data, allows it to find some sort of exploit that makes it drive through crosswalks. Usually this kind of thing is solved by creating a hybrid system - a machine learning system underneath, with a rule-based system on top as a guard rail.
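In code, that guard-rail idea is roughly: let the learned model propose an action, then have a rule layer veto anything that breaks a hard constraint. A toy sketch, reusing the hypothetical Scene fields from the example above (the learned planner itself is just assumed to exist):

```python
def guard_rail(scene, proposed_action: str) -> str:
    """Overrule the learned planner whenever it violates a hard rule.

    The learned model is treated as a strong heuristic; this layer
    guarantees the behaviours we refuse to leave up to the data.
    """
    # Hard constraint: never drive through an occupied crosswalk,
    # no matter what the model suggests.
    if scene.pedestrian_in_crosswalk and scene.dist_to_crosswalk < 30.0:
        return "STOP_BEFORE_CROSSWALK"
    return proposed_action

# Usage (learned_planner is a stand-in for the ML model described above):
# action = guard_rail(scene, learned_planner(scene))
```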
(Apologies for the very long comment, probably the longest one I’ve ever left)
And again… If I break the law, I get a large fine or go to jail. If companies break the law, they will at worst get a small fine.
Why does this disconnect exist?
Am I so crazy to demand that companies are not only treated the same, but held to a higher standard? If I don’t stop at a zebra crossing, that is me breaking the law once. Waymo programming their cars not to do that is multiple violations per day, every day. It’s a company deciding they’re above the law because they want more money. It’s a company deciding to risk the lives of others to earn more money.
For me, all managers and engineers who signed off on this and worked on it should be jailed, the company should be restricted from doing business for a month and required to immediately ensure all laws are followed, or else…
This is the only way we get companies to follow the rules.
Instead, though, we just ask companies to treat laws as suggestions, sometimes requiring small payments if they cross the line too far.
Funny that you don’t mention the company owners or directors, who are supposed to oversee what happens, who in practice are the ones putting on the pressure to make it happen, and who are the ones liable before the law.
I remember seeing a video from inside a waymo waiting to make a left against traffic.
It turned the wheel before moving, in anticipation of the turn. Which is normal for most drivers I see on the road.
It’s also the exact opposite of what you should do for safety and legality.
Keep the wheel straight until you’re ready to move. Turning the wheel before the turn means that if someone rear-ends you, you get pushed into oncoming traffic rather than along your current lane.
It’s the social norm, not the proper move.
I was involved in a crash many years ago where this resulted in the car in front of us getting pushed into an oncoming car. We were stopped behind a car indicating to turn, hit from behind by a bus (going quite fast), pushed hard into the car in front and they ended up getting smashed from behind and in front.
Don’t turn your wheel until you’re ready to move, folks.
I’m sure a strong legal case can be made here.
An individual driver breaking the law is bad enough, but the legal system can be “flexible” because it’s hard to enforce the law against a generalized (bad) social norm, and each individual lawbreaker can argue an individual case, etc.
But a company systematically breaking the law on purpose is different. Scale here matters. There are no individualized circumstances and no crying at a judge that the fine will put this single mother in a position to not pay rent this month. This is systematic and premeditated. Inexcusable in every way.
Like, a single cook forgetting to wash hands once after going to the bathroom is gross, but a franchise chain building a business model around adding small quantities of poop to food is insupportable.
I really want to agree, but conservative Florida ruled that people don’t have the right to clean water, so I doubt the conservative Supreme Court will think we have the right to safe crosswalks.
I am not intimately familiar with your country’s legal conventions, but there is already a law (pedestrians having priority in crosswalks) that is being broken here, right?
People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren’t Asimov’s robots we’re dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing on self-driving cars, working the trolley problem. “Should a car veer into oncoming traffic to avoid hitting a child crossing the road?” The authors imagine that the creators of these machines hand-code every scenario, like a long series of if statements.
But that’s just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, they are given a bunch of video footage and radar records, along with the driver inputs made in response to those conditions, and the model is trained to map the camera and radar inputs to whatever the human drivers did.
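The core of that training loop (behavioural cloning) is conceptually tiny. A toy sketch in PyTorch, where the data, network shapes, and control targets are all invented for illustration and have nothing to do with Waymo’s actual stack:

```python
import torch
import torch.nn as nn

# Hypothetical setup: sensor data already encoded into a flat feature
# vector, and driver behaviour reduced to (steering, acceleration) targets.
model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 2),            # predicted steering and acceleration
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(sensor_features: torch.Tensor, driver_controls: torch.Tensor) -> float:
    """One behavioural-cloning step: imitate whatever the human driver did."""
    optimizer.zero_grad()
    predicted = model(sensor_features)
    loss = loss_fn(predicted, driver_controls)  # penalise deviation from the human
    loss.backward()
    optimizer.step()
    return loss.item()

# If the humans in the logs rarely stop at crosswalks, minimising this loss
# reproduces exactly that behaviour -- there is no "traffic law" term in it.
```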
This behavior isn’t at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained off us, off our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don’t stop at crosswalks is that human drivers don’t stop at crosswalks. The machine is simply copying us.
All of which takes you back to the headline, “Waymo trains its cars to not stop at crosswalks”. The company controls the input; it needs to be responsible for the results.
Some of these self-driving car companies have successfully lobbied to stop cities from ticketing their vehicles for traffic infractions. Here they are stating these cars are so much better than human drivers, yet they won’t stand behind that statement; instead they are demanding special rules for themselves and no consequences.