• kadu
    9 hours ago

    Why isn’t OpenAI working more modularly, whereby the LLM calls up specialized algorithms once it has identified the nature of the question?

    Precisely because this is an LLM. It doesn’t know the difference between writing out a maths problem, a recipe for a cake, or a haiku. It transforms everything into the same token domain and does fancy statistics to come up with a reply. It wouldn’t know that it needs to invoke a “Calculator” feature unless you hard-code that in, which is what ChatGPT and Gemini do, but that approach is also easy to break.
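    To make the “hard code that in” point concrete, here is a minimal sketch of what such routing can look like: a regex, not the model, decides when to hand the input to a real calculator. All names (`route`, `fake_llm`) are hypothetical, not any vendor’s actual implementation.

    ```python
    import ast
    import operator
    import re

    # Supported arithmetic operators for the toy calculator.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expr: str):
        """Evaluate +-*/ arithmetic by walking the AST (no eval())."""
        def walk(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    def fake_llm(prompt: str) -> str:
        # Stand-in for the language model: fluent text, no guaranteed arithmetic.
        return "Some fluent but possibly wrong text about: " + prompt

    def route(prompt: str) -> str:
        # The hard-coded part: a pattern match, outside the model,
        # decides whether the calculator gets invoked at all.
        m = re.fullmatch(r"\s*([\d+\-*/. ()]+)\s*", prompt)
        if m:
            return str(safe_eval(m.group(1)))
        return fake_llm(prompt)
    ```

    This is also why such systems are easy to break: anything the pattern match misses (word problems, unusual notation) falls through to the model’s statistics.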

      • zalgotext@sh.itjust.works
        8 hours ago

        Sort of. There’s a relatively new type of LLM called a “tool-aware” LLM, which you can instruct to use tools like a calculator or some other external program. As far as I know, though, the LLM has to be told to go out and use that external thing; it can’t make that decision itself.

      • kadu
        8 hours ago

        Can the model itself be trained to recognize mathematical input, invoke an external app, parse the result, and feed that back into the reply? No.

        Can you create a multi-layered system that uses some trickery to achieve this effect most of the time? Yes. That’s what OpenAI and Google are already doing: they recognize certain features of the user’s input and change the system prompt to force the model to output Python code or Markdown notation, which your browser then renders using a different tool.
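        A toy version of that multi-layered trick, under the assumption that a system prompt has already steered the model into always answering with a fenced code block (`steered_model` and `render` are made-up names for the sketch):

        ```python
        import re

        def steered_model(question: str) -> str:
            # Pretend a hidden system prompt forced the reply to contain
            # runnable code instead of the model doing the maths itself.
            return "Sure! Here you go:\n```python\nresult = 17 * 23\n```"

        def render(reply: str) -> str:
            # The "different tool": extract the code block and execute it,
            # so the arithmetic is done by Python, not by the model.
            m = re.search(r"```python\n(.*?)```", reply, re.S)
            if not m:
                return reply
            ns = {}
            exec(m.group(1), ns)
            return str(ns["result"])
        ```

        The model never computes anything here; it only emits code, and the correctness of the answer comes from the interpreter that runs it.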