We don’t have the same health insurance system as the US, but I really feel for people anywhere in the world who fall ill and can’t access affordable healthcare.
Private health insurance is available in the UK if you want it, but most people use the National Health Service, which is free at the point of use. I am always grateful for this.
I like the idea, but I have no clue how feasible something like that is.
I currently run LLMs locally using ollama, through the terminal - I know most people would either not want or not be able to do this, but I’m glad I have an alternative to the current popular AI services.
I don’t disagree. If there is no alternative then this is better than nothing for most people.
Personally speaking, knowing the background of some of the investors involved (Andreessen), and their active role in what’s happening in the US, I have cooled on using Mistral somewhat. I’ll still use it if I have to, but I plan to ultimately self-host - I’ll have to make do with a stripped-back AI experience.
I haven’t done any proper validation - it was just a very basic PoC, using resources available to anyone (a list of companies and their subsidiary brands plus a free LLM) to see if it would work in principle.
The file itself is sourced from Wikipedia, so it’s probably accurate enough if you just want to Ctrl+F and search manually.
Hi there. I really like your thinking. I’m not a professional developer, but I suspect it’s only a matter of time before apps that do something like this are available.
Here’s a post I saw on a related community which describes something like what would make you happy :)
Also, I tested my own very basic idea by taking a picture and uploading it to the Mistral chat - it identified the products accurately, which was promising.
LLMs can hallucinate, but I agree with you that RAG accuracy can be made reliable enough - by refining prompts, adjusting temperature, improving data structure, etc. Tolerance for potential errors depends on the use case, of course, but for something like this I wasn’t too worried. Also, this is a very simple PoC, just using the chat box of a free LLM. Building a proper RAG pipeline on top of this would improve accuracy significantly. I know you know this as you also develop RAG applications, but others less familiar may not.
All the very best to all of you who stand for what is right. You have my respect.