It’s probably their own search/RAG backend, or at least their configuration of some open source project.
And that’s the important part. Get the article retrieval right, and the LLM performance isn’t that important; they could self-host Qwen 27B or something and it’d work fine.
I would be surprised if it were something they trained themselves, rather than an off-the-shelf model hooked up to a search backend.
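The point about retrieval mattering more than the model is the standard retrieval-augmented generation (RAG) pattern: rank your articles against the query first, then hand only the best matches to whatever off-the-shelf LLM you have. A minimal sketch, with hypothetical articles, a toy term-overlap scorer standing in for a real search backend, and the final prompt standing in for the model call:

```python
# Minimal RAG sketch: rank articles by simple term overlap,
# then build a prompt for any off-the-shelf LLM.
# Articles and query are hypothetical stand-ins; a real system
# would use a proper search index (BM25, embeddings, etc.).

def score(query: str, doc: str) -> int:
    """Count how many query terms appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, articles: list[str], k: int = 2) -> list[str]:
    """Return the top-k articles by term overlap with the query."""
    return sorted(articles, key=lambda a: score(query, a), reverse=True)[:k]

articles = [
    "How to reset your account password in the settings page",
    "Billing cycles and invoice downloads explained",
    "Troubleshooting login failures and two-factor prompts",
]

query = "login failing with two-factor enabled"
context = retrieve(query, articles)

# If retrieval surfaced the right articles, almost any decent model
# can produce a good answer from this prompt.
prompt = (
    "Answer using only these articles:\n"
    + "\n".join(context)
    + f"\n\nQuestion: {query}"
)
print(prompt)
```

The design choice this illustrates: all the product-specific work lives in `retrieve`; the model only ever sees a small, pre-filtered context, which is why the exact LLM is nearly interchangeable.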