Tesla FSD Beta is actually really great, but until it can think like an actual human to drive through complicated scenarios like this, it is never going to be ready to be fully autonomous.
We have "No Right Turn on Red" signs at some intersections. FSD Beta can't read them, so the car stops, looks to the left, and turns on the red light anyway. I've talked about it until I'm blue in the face and I'm ignored by the YouTubers in my city who post videos, but coincidentally they've never shown this problem on their channels. I believe too many people are more concerned with Tesla's stock value, so they don't show many of the common problems. It also drives in BUS ONLY lanes, which are all over the place. FSD Beta works great in suburban towns with wide intersections and flat roads.
There is a neighborhood near me with a sign that says "transponder lane," and I shouldn't get in that lane because I don't have one. I have to show ID to the guard. How will Tesla FSD figure stuff like that out? It takes even more "common sense," like a human has.
Great video. Thank you. I got into a construction area where northbound cars are routed into the oncoming southbound traffic lane. There are normally two lanes going southbound, but now one southbound lane keeps going southbound while the other is used for northbound cars. FSD kept stopping at each cone, trying to squeeze back into the northbound lanes, which were under construction. 😂 I had to drive manually. I do have confidence that FSD will understand this kind of situation within 2 years.
Were the traffic cones too skinny for fsd to recognize? I just got a Model Y and I've been thinking about subscribing to FSD beta and would love your input and suggestions!
Good example. The first part it would probably have handled, just not elegantly. But the improvised traffic cone layouts will need handling. I'll see what I can do :)
If I'm not mistaken, AI is currently only used for environment understanding, not for controlling the car. But it should be eventually, I hope. Thanks for this great example of a current limitation.
It will never manage completely new situations that require really understanding the world: the purposes of arrangements and rare objects, and variations of situations that depend on the surroundings, human interactions, and so on. Take a human waving: that can mean so much depending on many, many factors (who, where, how, the surroundings, etc.). Even now it doesn't yet manage trains and trams, the various types of rails and rail crossings and the specific, dangerous surroundings there, or emergency vehicles, or flat but dangerous objects on the street. Etc., etc.
Just out of interest, why does America have these stop signs? Is it to reduce crashes i.e. you're forced to stop and give more thinking time as to the next move? In the UK, you can just pull straight out at junctions assuming it's clear, no need to come to a complete stop at all if it's not necessary.
They’re set up very weirdly. In NYC you see stop signs when merging onto parkways all the time. You literally can’t see anything until you’re past the line. And if you think the stop signs are dumb, consider that very few people even know how to use a roundabout, even though 90% of it is just common sense.
Even 5-year-old children could drive on these roads. In my opinion, the only way to accelerate FSD's learning is to have it drive in the city of Rome, Italy. Who knows how many disengagements or interventions there would be every 10 minutes hahahaha
That's what I said two years ago: right now, when Beta makes a mistake every 2 to 5 minutes, every driver is super concentrated all the time. But what will happen when Beta gets so good (though still not perfect) that it makes only one unexpected mistake per hour or so? Which driver will switch back quickly enough to take over if he had relaxed, chatted, or wasn't looking where necessary? The dangerous time and the accidents will come in the future, when Beta is much better.
@@mattesrocket I agree. I like that I must pay close attention; I don't want to ever get so comfortable that I do something silly just because I can.
@@borntowim There are already fully autonomous vehicles without a steering wheel in service. Just look it up on the interwebs... This is going to happen sooner than people think.
Tesla may one day automate this. But for the foreseeable future (10 years to forever, depending on what happens with AI breakthroughs) there will always be edge cases that humans need to be available to help solve. A tougher but very common challenge to solve is being hit by another car, even if not the automated vehicle's fault. Humans are going to need to be nearby to come and handle these types of situations. It's one of the reasons we should expect true driverless autonomy to only be offered gradually, one area at a time. The human support infrastructure will need to keep up.
@@chrisfrye8607 For a vehicle to operate everywhere a human can without a driver - the definition of Level 5 autonomy - it will need to be able to handle every situation a human can. It's not acceptable for people to die simply because the car can't handle a situation that a human can. That includes understanding what to do when you see a tornado. It means that if you're caught in a forest fire with burning trees on a narrow road, your car is smart enough to figure out that it needs to drive in reverse for a mile back to the waterfront, like a father and son did when they encountered this situation. It's a car that's smart enough to know that it needs to not stop when a kidnapper is trying to get to the child inside the car. It needs to know what to do with a coming blizzard or flood conditions.

When you don't have human infrastructure in place to support the vehicles and prevent them from operating in places and at times they can't be safe, absolutely every single thing a human can do spontaneously is something the vehicle will need to be able to handle. There is simply no way Level 5 can ever happen unless someone solves artificial general intelligence. And the incredibly strong odds are that this will not be managed by a car company at all. The first company to invent a technology that might enable Level 5 driving will most likely be a research company. They'll invent an AI that can do everything a human can, including driving. If they can do that, they won't need cameras on the cars.

Tesla has no serious goal for Level 5 autonomy. That's marketing. The same is true of the talk about these Hardware 3.0 cars eventually being rentable robotaxis. You can tell this is obviously true because of the timelines Elon gives. He's been claiming Level 4 or 5 operation will happen within a year, and yet we can clearly see many things that the vehicles aren't even _attempting_ to do. The takeaway is clear. Elon doesn't believe what he's saying.
He is not so optimistic as to believe that goals he hasn't even started on will be finished in a few months. For now, Tesla is very content to focus on making a Level 2 product. It's making them a lot of money, and the marketing claims of higher levels of autonomy have helped in that.

I have no doubt that some people would argue it's acceptable for a car to fail in extreme situations like the ones I described if it means fewer accidents in more normal driving scenarios. The problem with that argument is that it presents a false choice. We can do both. By adding human infrastructure to support these vehicles and limit their operation to the situations they can handle, we can benefit from their safety and also allow humans to solve or prevent the challenges autonomy can't handle.

People present arguments for why releasing one region at a time doesn't scale well. They talk about maps, for example, which are a simple and quick step. The regional approach scales fine, especially when we consider that a relatively small number of cities represent the most lucrative markets for robotaxis. And there's no reason for expansion to be linear. We should have expected from the beginning that rollout would be slow initially as problems are solved. But this doesn't mean companies are starting over with each new city.

It's very likely we won't see driverless cars that operate everywhere in our lifetimes. But some very smart AI people believe that AGI will be accomplished, and if that happens, then perhaps it's possible. I'm just not hearing many of those experts predict it will happen within 10 years.