It just kinda makes no sense to me. How can predicting the next frame improve the framerate? Surely the prediction adds overhead on top of what it already takes to render the scene normally, instead of reducing it? Even the simplified explanations of it sound like pure magic. And yet… it's real.

  • Bananskal@nord.pub · 2 days ago

    If the only inputs are the source image and motion vectors, so you're essentially predicting instead of interpolating, surely that introduces some kind of stuttering now and then when a correction eventually becomes necessary? Or am I misunderstanding it?

    • Natanael@slrpnk.net · 1 day ago

      They don’t ever do more than 4 predicted frames per 1 full frame, and usually just 1:1
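
      A toy sketch of that pacing cap, if it helps: for every real frame the presenter emits at most `ratio` generated frames before it must wait for the next real one. The function name and the tuple format are made up for illustration; this is not any vendor's actual API.

      ```python
      # Hypothetical sketch of the generated:real pacing constraint.
      # Real implementations (e.g. DLSS frame generation) do this on
      # the GPU inside the presentation layer; names here are invented.

      def presented_sequence(real_frames, ratio):
          """Interleave up to `ratio` generated frames between real ones."""
          assert 1 <= ratio <= 4, "generated:real ratio is capped at 4:1"
          out = []
          for i, f in enumerate(real_frames):
              out.append(("real", f))
              # Generated frames only go between two real frames,
              # so nothing is emitted after the last real frame.
              if i < len(real_frames) - 1:
                  for k in range(ratio):
                      out.append(("gen", f, k))
          return out

      print(presented_sequence(["A", "B"], 1))
      # [('real', 'A'), ('gen', 'A', 0), ('real', 'B')]
      ```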

      • fulg@lemmy.world · 1 day ago

        That and the game can flag frames that are too different (camera cuts) to mitigate this problem.

        What the game supplies is the current frame + motion vectors, but the framegen bits take over how the frames are displayed onscreen. This is where the extra latency comes from: at worst you are seeing one true frame behind what the game is rendering, while the presentation layer generates the intermediate frame(s).
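
        Very roughly, "frame + motion vectors" means something like the toy warp below: each pixel is shifted partway along its motion vector to synthesize the in-between frame. Real frame generation uses optical-flow hardware and a network to fill disocclusion holes; this sketch just leaves a hole behind the moved pixel, and all names are invented for illustration.

        ```python
        # Toy midpoint warp: synthesize an intermediate frame by moving
        # each pixel half a step along its per-pixel motion vector.
        # This is a simplification, not how DLSS/FSR actually do it.

        def generate_midpoint_frame(frame, motion):
            """frame: 2D grid of pixel values; motion: per-pixel (dx, dy)
            in pixels per full frame. Moving pixels are shifted by half
            their vector (the midpoint in time); vacated spots become 0,
            standing in for the disocclusion holes a real network fills."""
            h, w = len(frame), len(frame[0])
            out = [row[:] for row in frame]          # start from the source frame
            for y in range(h):
                for x in range(w):
                    dx, dy = motion[y][x]
                    if (dx, dy) == (0, 0):
                        continue                     # static pixel stays put
                    out[y][x] = 0                    # leave a hole behind it
                    nx, ny = x + dx // 2, y + dy // 2
                    if 0 <= nx < w and 0 <= ny < h:
                        out[ny][nx] = frame[y][x]    # moving pixel wins
            return out

        # A 1x4 "frame" with one bright pixel moving right at 2 px/frame:
        frame = [[9, 0, 0, 0]]
        motion = [[(2, 0), (0, 0), (0, 0), (0, 0)]]
        print(generate_midpoint_frame(frame, motion))
        # [[0, 9, 0, 0]]  -- the pixel has advanced half its motion
        ```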