Does Tesla's "completely autonomous" approach include a "saw problem"?

Does Tesla’s “completely autonomous” approach include a “whack-a-mole” problem?

Tesla fans are by now well aware of Tesla’s approach to “full self-driving,” but I’ll give a very quick summary here just to make sure all readers are on the same page. Basically, right now, Tesla drivers in North America who have purchased the “Full Self-Driving” package and passed the safety score test have a beta version of Tesla Autopilot/Full Self-Driving in their cars. If I put a destination into my Tesla Model 3’s navigation system as I’m leaving my driveway, my car will drive there on its own, in theory. It’s nowhere near perfect, and drivers must vigilantly monitor the car while it’s driving in order to intervene when necessary, but it now has ample capability to drive “anywhere.” When we’re driving with the Full Self-Driving (FSD) Beta, if there’s a problem (either a disengagement or the driver tapping a little video icon to send a clip of the last stretch of driving to Tesla HQ), members of the Tesla Autopilot team look at the clip. If necessary, they recreate the scenario in a simulator and work out the correct response in order to teach the Tesla software how to handle that situation.

Tesla FSD in action. © Zachary Shahan / CleanTechnica

I got access to the FSD Beta several months ago (early October 2021). When I got it, I was quite surprised at how badly it performed in my area. I was surprised because 1) I’d seen a lot of hype about how good it is (including from Elon Musk and other people I generally trust when it comes to Tesla matters) and 2) I live in a really easy-to-drive area (a Florida suburb). When I first started using the FSD Beta, I wasn’t expecting it to have as many problems with basic driving tasks in an easy driving environment as it does. However, I held onto some hope that it would learn from its mistakes and from the feedback I was sending to Tesla HQ. Surely, some of the glaring problems wouldn’t be hard to rectify, and each update would get better and better.

I’ve seen some improvements since then. However, updates have also brought new problems! I wasn’t expecting that, at least not to the degree I’ve seen. I’ve thought about this for a while, basically trying to understand why Tesla FSD is not as good as I wish it were by now, and why it sometimes gets worse. One potential problem is what I call the “whack-a-mole problem.” If my theory is correct to any appreciable degree, it could be a fatal flaw in Tesla’s broad, generalized approach to self-driving.

My concern is that as Tesla patches reported problems and pushes new software to Tesla customers’ cars, those patches create problems elsewhere. In other words, Tesla is just playing software whack-a-mole. I’m not saying that’s definitely what’s happening, but if it is, Tesla’s approach to AI may not be suitable for this purpose without major changes.
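To make that whack-a-mole idea concrete, here’s a purely illustrative toy in Python. It has nothing to do with Tesla’s actual software, and every name and number in it is made up: a single “sensitivity” threshold gets tuned to fix one reported scenario, and that same tweak regresses another scenario that previously behaved correctly.

```python
# Toy illustration only (not Tesla's actual pipeline): a single "sensitivity"
# threshold tuned to fix one reported scenario regresses another scenario
# that previously behaved correctly.

def should_brake(obstacle_confidence: float, sensitivity: float) -> bool:
    """Brake when perception confidence clears the tuned threshold."""
    return obstacle_confidence >= sensitivity

# Two hypothetical logged scenarios (all values invented for illustration):
#   scenario_a: a real obstacle the old build failed to brake for
#   scenario_b: a shadow on the road the old build correctly ignored
scenarios = {
    "scenario_a": {"obstacle_confidence": 0.55, "should_brake": True},
    "scenario_b": {"obstacle_confidence": 0.50, "should_brake": False},
}

for build, sensitivity in [("old build", 0.60), ("patched build", 0.45)]:
    results = {
        name: should_brake(s["obstacle_confidence"], sensitivity) == s["should_brake"]
        for name, s in scenarios.items()
    }
    print(build, results)

# old build     -> {'scenario_a': False, 'scenario_b': True}   (missed brake)
# patched build -> {'scenario_a': True, 'scenario_b': False}   (phantom brake)
```

In a real neural-network system, the “knob” is millions of learned weights rather than one threshold, but the trade-off sketched here (fixing a missed response in one place and creating a false response in another) is the dynamic I suspect I’m seeing.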

Having driven for months thinking about what the car sees and how the FSD software responds, I’ve come to appreciate how much more nuanced driving is than we usually realize. There are all kinds of little cues, differences in the route, differences in traffic flow and visibility, animal activity, and human behavior that we observe and then choose to either ignore or respond to, sometimes watching closely for a while as we decide. We choose between those two options because we know that small differences in a situation can change how we should respond. The factors that lead us to respond or not are so varied that it can be really hard to put them into boxes. Or, to put it another way: if you put something in a box (“act like this here”) based on how a person responded in one drive, it’s inevitable that the rule derived from it will not apply correctly in a similar but slightly different scenario, and it will cause the car to do something it shouldn’t (e.g., respond instead of ignore).

Let me try to put this in more concrete terms. The route I drive most often is a 10-minute drive from my home to my kids’ school. It’s simple driving on mostly residential roads with wide lanes and moderate traffic. Back before I had the FSD Beta, I could use regular Tesla Autopilot (adaptive cruise control, lane keeping, automatic lane changes) on most of this route and it would do its job flawlessly. The only reason I didn’t use it for almost the entire drive was potholes and some particularly bumpy sections where you need to drive off-center in the lane so you don’t make everyone’s teeth chatter (only a slight exaggeration). In fact, aside from comfort and tire-protection issues, the only reason it couldn’t do the full drive is that it couldn’t make turns.

When I passed the safety score test and got the FSD Beta, that also meant giving up radar and relying on “vision only.” The new and improved FSD software is supposed to do the same job but also handle those turns. However, the vision-only (no-radar) FSD Beta had problems, primarily a lot of phantom braking. Whenever a new version of the FSD Beta comes out and some Tesla fans get excited about how much better it is, I eagerly upgrade and give it a try. Sometimes it gets a little better. Other times it gets much worse. Lately, it has been engaging in some crazy phantom swerving and more phantom braking, and it seems to be responding to different cues than it responded to in previous drives. This is the kind of thing that gave me the hunch that patches for issues identified elsewhere by other Tesla FSD Beta users have led to overreactions in some of my driving scenarios.

Tesla FSD on a residential road. © Zachary Shahan / CleanTechnica

In short, my hunch is that a very generalized system, at least a vision-only one, cannot respond adequately to the many different scenarios drivers experience every day, and that resolving each small trigger or false trigger in the right way involves a lot of nuance. Teaching the software to brake for “ABCDEFGY” but not for “ABCDEFGH” might be easy enough, but teaching it to respond correctly to 100,000 subtle variations of that is impractical and unrealistic.
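As a back-of-the-envelope illustration of that point, here’s another small Python sketch. Again, it’s purely hypothetical; the feature names and the lookup-table framing are mine and are not a description of how Tesla’s neural networks actually work. Even eight coarse scenario features with four possible values each already yield 65,536 combinations, and a rule learned from one exact combination silently misfires on a near-identical one.

```python
# A purely hypothetical sketch of the "ABCDEFGY" vs. "ABCDEFGH" point above.
# Feature names and the lookup-table framing are illustrative only; this is
# not how Tesla's neural networks actually work.

FEATURES = ["lane_width", "curb_color", "parked_cars", "sun_glare",
            "pedestrian_near", "crosswalk", "road_surface", "trailing_traffic"]

# Even if each feature took only 4 coarse values, the scenario space is huge:
print(4 ** len(FEATURES))  # 65536 distinct coarse scenarios

# A response "learned" from one specific drive...
learned_rules = {
    ("wide", "grey", "none", "low", "no", "no", "dry", "light"): "ignore",
}

def respond(fingerprint, default="brake"):
    # Anything not explicitly covered falls back to the cautious default,
    # which, in this toy model, is exactly where phantom braking comes from.
    return learned_rules.get(fingerprint, default)

# ...misfires on a near-identical drive that differs by a single feature:
print(respond(("wide", "grey", "none", "low", "no", "no", "damp", "light")))  # brake
```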

Perhaps Tesla FSD can reach an acceptable level of safety with this approach anyway. (I’m skeptical at this point.) However, as several users have pointed out, the goal should be for the drives to be smooth and pleasant. With this approach, it’s hard to imagine Tesla cutting phantom braking and phantom swerving enough to make the riding experience “pleasant.” If it can, I’ll be happily surprised, and I’ll be one of the first to celebrate it.

Tesla FSD in a shopping center parking lot. © Zachary Shahan / CleanTechnica

I know this is a very simplistic analysis, and the “whack-a-mole problem” is just a theory based on user experience and a very limited understanding of what the Tesla AI team is doing, so I’m not at all saying it’s a certainty. However, it makes more sense to me at this point than assuming that Tesla will adequately teach the AI to drive well through the many slightly different environments and scenarios where the FSD Beta has been deployed. If I’m missing something or my theory here is clearly wrong, feel free to roast me in the comments below.





