Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 GPUs to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s push toward artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. AI and computing investments are a key part of Meta’s 2024 budget, with AI described as its largest investment area.

  • NotMyOldRedditName@lemmy.world
    11 months ago

    Would that be diminishing returns on quality, or training speed?

    If I could tweak a model and test it in an hour vs 4 hours, that could really speed up development time?

    • 31337@sh.itjust.worksOP
      11 months ago

      Quality. Yeah, using the extra compute to increase speed of development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best model to use or use them all as an ensemble or something.
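The parallel-training-then-combine idea above can be sketched minimally. This is a hypothetical illustration, not anything Meta has described: the three "models" are stand-in probability outputs, and the ensemble is a simple element-wise average of class probabilities.

```python
def average_ensemble(predictions):
    """Average a list of probability vectors element-wise.

    Each entry in `predictions` is one model's predicted class
    probabilities for the same input; the ensemble prediction is
    their mean.
    """
    n = len(predictions)
    length = len(predictions[0])
    return [sum(p[i] for p in predictions) / n for i in range(length)]

# Stand-in outputs from three independently trained models
# (3 classes, same input):
model_outputs = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.3, 0.2],
]

ensembled = average_ensemble(model_outputs)
# Pick the class with the highest averaged probability
predicted_class = max(range(len(ensembled)), key=lambda i: ensembled[i])
```

In practice people also ensemble by majority vote or by averaging logits before the softmax; averaging probabilities is just the simplest variant to show.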

      My guess is that the main reason for all the GPUs is they’re going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as “open” then trying to entice people into their cloud ecosystem. Or, maybe they really are trying to achieve AGI as they state in the article. I don’t really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).