• GamingChairModel@lemmy.world · 13 days ago

    Also, the main problem with LIDAR is that it really doesn’t see much more than cameras do. It uses visible or near-infrared light, so it gets blocked by basically the same things that block a camera. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.

    The spinning lidar sensors also mechanically shed occlusions like raindrops and dust. And one important difference is that lidar actively emits laser pulses, so it’s a two-way operation, like driving with headlights, not just passive sensing, like driving with sunlight.
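
    Because lidar times its own emitted pulses, range falls straight out of the round-trip time. A minimal sketch of that time-of-flight relation (my own illustration, not any vendor's code; the function name is hypothetical):

    ```python
    # Lidar is an active sensor: it measures the round-trip time of a laser
    # pulse it emitted itself, so range d = c * t / 2.

    C = 299_792_458.0  # speed of light in m/s

    def lidar_range_m(round_trip_s: float) -> float:
        """Distance implied by a pulse's round-trip time, in meters."""
        return C * round_trip_s / 2.0

    # A return arriving ~200 ns after emission corresponds to ~30 m.
    print(lidar_range_m(200e-9))  # ~29.98
    ```

    This is also why fog hurts lidar: the pulse scatters off water droplets before it ever reaches the obstacle, just as ambient light scatters before reaching a camera.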

    Waymo’s approach appears to differ in a few key ways:

    • Lidar, as we’ve already been discussing
    • Radar
    • Sensor number and placement: the ugly spinning sensors on the roof get a vantage point that Tesla simply doesn’t have on its vehicles now, and every Waymo vehicle does seem to carry a lot more sensor coverage overall (probably including more cameras)
    • Collecting and consulting high resolution 3D mapping data
    • Human staff on standby for interventions as needed
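
    The point of stacking modalities is that they fail in different conditions, so the system can fall back on whichever sensors still agree. A toy sketch of that idea as a majority vote (the function and sensor names are my own hypothetical illustration, not Waymo's actual fusion pipeline):

    ```python
    # Hypothetical redundancy sketch: keep a detection only if most of the
    # independent sensor modalities agree that an obstacle is present.

    def fuse_detections(votes: dict[str, bool]) -> bool:
        """Majority vote over per-sensor 'obstacle present' flags."""
        agree = sum(votes.values())
        return agree * 2 > len(votes)

    # Fog blinds the camera, but radar penetrates fog and lidar still gets
    # a partial return, so the fused decision survives one failed modality:
    print(fuse_detections({"camera": False, "lidar": True, "radar": True}))  # True
    ```

    Real pipelines fuse at the feature or track level rather than voting on booleans, but the redundancy argument is the same: a camera-only stack has nothing to outvote a blinded camera.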

    There’s a school of thought that because many of these would need to be eliminated for true level 5 autonomous driving, Waymo is in danger of walking down a dead end that never gets them to the destination. But another take is that this is akin to scaffolding during construction: it serves an important function while the permanent structure goes up, and can be taken down afterward.

    I suspect that the lidar/radar/ultrasonic/extra cameras will be most useful for training the models needed to reduce reliance on human intervention, and maybe eventually to reduce the sensor count itself. Not just by adding to the quantity of training data, but by acting as a filtering/screening function that improves the quality of the data fed into training.
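
    One concrete form that screening could take: treat lidar depth as a reference and keep only the frames where a camera-based depth estimate agrees with it, so the camera model trains on cleaner data. A minimal sketch under those assumptions (function names and the 10% tolerance are my own hypothetical choices):

    ```python
    # Hypothetical data-screening sketch: use lidar depth as a reference to
    # filter camera depth estimates before they enter the training set.

    def keep_for_training(camera_depth_m: float, lidar_depth_m: float,
                          rel_tol: float = 0.10) -> bool:
        """Keep a sample only if camera depth is within rel_tol of lidar depth."""
        return abs(camera_depth_m - lidar_depth_m) <= rel_tol * lidar_depth_m

    # (camera_estimate, lidar_reference) pairs in meters:
    samples = [(30.1, 30.0), (45.0, 30.0), (29.0, 30.0)]
    kept = [s for s in samples if keep_for_training(*s)]
    print(kept)  # [(30.1, 30.0), (29.0, 30.0)]
    ```

    The wildly off camera estimate gets dropped instead of teaching the model bad depth, which is the quality-over-quantity point above.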