• TrackinDaKraken@lemmy.world · 89 points · 3 days ago

    Not that vehicles shipped after 2023 will be able to either.

    Waymo, with lidar and all the tech, still uses remote human drivers to deal with harder situations.

    I don’t think we’re going to see FSD anytime soon. Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.
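
The "95% isn't good enough" point can be made with back-of-the-envelope arithmetic. A minimal sketch (illustrative numbers only, not real reliability figures for any vehicle):

```python
# Illustrative only: assumes each "tricky moment" on a trip is an
# independent event with a fixed per-moment success rate. Real driving
# risk is not this simple, but the compounding effect is the point.
def trip_success(per_event_rate: float, events: int) -> float:
    """Probability of handling every tricky moment on one trip."""
    return per_event_rate ** events

# A system that handles 95% of tricky moments, over 20 such moments:
print(round(trip_success(0.95, 20), 2))    # 0.36 -- most trips still need a human
# The same trip at 99.99% per-moment reliability:
print(round(trip_success(0.9999, 20), 3))  # 0.998
```

Per-event reliability has to climb orders of magnitude before whole trips become dependable, which is why the last few percent dominate the effort.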

    • grue@lemmy.world · 73 points · 3 days ago

      Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.

      The funny part is that every CS grad student studying AI understood that perfectly well a decade ago.

      • qprimed@lemmy.ml · 31 points · 3 days ago

        exactly. I have at least a dozen projects that prove the fucking point.

        one can hope there will be an avalanche of class-action suits that crush this nazi and his swasticar.

        • Corkyskog@sh.itjust.works · 7 points · 3 days ago

          It would be funny if damages got astronomical because he promised everyone that they would be able to rent out their car as a taxi service for extra income.

          • finallymadeanaccount@lemmy.world · 3 points · 3 days ago

            Nothing I’d like more than people sitting in my car doing God knows what as it drives around unsupervised.

            “Some guy came in the glovebox! What the actual fuck!”

            • anomnom@sh.itjust.works · 4 points · 3 days ago

              Maybe that’s why they hid the glove box button in the screen.

              No it’s because they are fucking tech morons who love screens.

    • ChicoSuave@lemmy.world · 23 points · 3 days ago

      Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.

      Techbros are learning this right now with AI too. Who could have guessed that getting mostly there isn’t the same as being there?

    • BillyClark@piefed.social · 17 points · 3 days ago

      95% isn’t good enough, and 100% is really fucking hard.

      This is a more extreme case of the famous 90-90 rule in programming:

      “The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.”
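
Taken literally, the rule's arithmetic says the schedule sums to 180% of the estimate. A tongue-in-cheek sketch (the 10-week figure is made up for illustration):

```python
def ninety_ninety_total(planned_weeks: float) -> float:
    """Total time if each 'half' of the 90-90 rule eats 90% of the estimate."""
    first_90_of_code = 0.9 * planned_weeks  # "the first 90 percent of the code"
    last_10_of_code = 0.9 * planned_weeks   # "the remaining 10 percent"
    return first_90_of_code + last_10_of_code

print(ninety_ninety_total(10))  # 18.0 -- a 10-week estimate becomes 18 weeks
```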

    • Rhaedas@fedia.io · 4 up / 23 down · 3 days ago

      Humans aren’t better at driving in bad situations; they’re just better at ignoring most of the input and focusing on one thing, and, more importantly, at taking risks, which a computer isn’t going to be programmed to do. If a human navigates through a bad rainstorm, barely able to see anything, and makes it out fine, they claim they’re better than a self-driving car that would shut down. Or more simply: a route is very tight and risky, but a human will YOLO it and make it through. They’re not better, they’re just lucky a lot.

      • DomeGuy@lemmy.world · 14 points · 3 days ago

        Humans are better in bad situations because humans drive like humans, and they expect all the other cars on the road to also drive like a human.

        The worst thing on the road is to be unpredictable, and an AI encountering a situation not in its training set is unpredictably unpredictable.

        • Rhaedas@fedia.io · 2 up / 7 down · 3 days ago

          You’re correct on AI. But I laughed at you saying humans are predictable. Seen any dashcam footage?

          • ikidd@lemmy.dbzer0.com · 7 points · 3 days ago

            I just imagine the dumbest thing someone can do in a situation and model for that.

            Icy roads? Well, the logical thing to do is make sudden moves at the last possible second without leaving a buffer. Stand on the throttle at every intersection, and start braking when you normally would in the middle of summer, of course.

            Honestly, you learn to predict unpredictability. Slight movements will tell you when someone is going to change lanes without a shoulder check, or cross three lanes of traffic to make an exit they could just as easily have skipped, going on to the next interchange without endangering themselves and others. Hell, I watch people’s eyes in their side mirrors as they incorrectly judge how much space they have to insert themselves in front of me, when there’s a kilometer of space behind me they could use instead.

      • surewhynotlem@lemmy.world · 12 points · 3 days ago

        Humans aren’t better at driving in bad situations,

        Gives reasons why humans are better at driving in bad situations

        Seriously?

        • Rhaedas@fedia.io · 2 up / 9 down · 3 days ago

          Yes, if you read them. If you consider doing a Hail Mary in bad weather and managing to not hit anything better, then I guess they are better… at taking risks.

          • surewhynotlem@lemmy.world · 2 points · 2 days ago

            Every time you get in a car you’re taking a risk. Being a good driver is about knowing which risks are acceptable to take. That’s why Waymo offloads complex situations to humans.

            • Rhaedas@fedia.io · 1 up / 3 down · 2 days ago

              What are the risks taken when you get in a car? Oh, right, accidents. Caused by all the AI on the road. Not the humans.

              If we had developed automated transportation first and then tried to introduce human driving, people would say that’s insane. It’s the human element that breaks things, every time. I don’t care about all the downvotes, but I know each of them is from someone who thinks they’re the best driver out there. It’s a human thing to do.

              • surewhynotlem@lemmy.world · 1 point · 2 days ago

                Accidents are caused by plenty of things that aren’t humans. I had a distant relative die from a tree falling on their car at a stop sign. The world is random and unpredictable.

                This entire conversation is about the small percentage of time that AI can’t handle the situation. And you haven’t addressed that point. And neither have AI companies. And that’s why they aren’t succeeding yet.

                And you can’t just say those situations don’t exist. They clearly do. And it’s not because human drivers are out there. Road hazards, shut down roads, sink holes, extreme weather, these things all exist.

                I’m starting to think you don’t have much experience on the road. How long have you been driving for? Have you really never come across a unique situation that you don’t think an AI could handle? Have you never driven in a city?

                • Rhaedas@fedia.io · 1 point · 2 days ago

                  My 40+ years of driving experience with various vehicles and equipment and in all sorts of conditions has nothing to do with this. So let’s not use that fallacy.

                  My first post on this was exactly about the times when AI can’t handle the situation, and was made to point out that sometimes humans just drive through situations they shouldn’t have, and get lucky. By design a machine isn’t going to do that, and if it did it would be banned, because that’s reckless; yet people drive like that daily and usually manage to avoid incidents. If you read back, the whole point I was making is that humans aren’t necessarily superior, they’re just different in how they approach things. Lots of automation for safety is a great thing and has saved lives, because it fills in the gaps where humans tend to fail. Full automation just can’t do some of the things humans do, and part of that is taking a risk and getting lucky. One thing they’ve tried to simulate is our ability to filter out the noise, and that is a difficult task.

                  My answer that involved accidents was to point out that humans tend to cause the accidents that hurt other people. Other things happen too, but they can probably be traced back to a human making a mistake somewhere, even if it’s as simple as following too close or being distracted (sometimes our noise filter doesn’t work well).

                  I don’t think any of my posts have implied that AI is better or even up to the task; they were more about how humans aren’t as great as they’re made out to be in these AI arguments. We just accept the level of problems humans cause and use technology to try to counter their deadliness, or avoid them where possible.