Out of curiosity how are you doing a local LLM? I’ve been trying recently and the results I’ve gotten have been really subpar compared to what the big boys offer. Been using LM Studio.
Thank you, very insightful.
Really the big distinguishing factor is VRAM. We consumers just don't have enough. If I had a 192GB VRAM system I could probably run a local model comparable to what OpenAI and others offer, but here I am with a lowly 12GB.
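To make the VRAM point concrete, here's a rough back-of-envelope sketch of how much memory a model's weights need at a given quantization level. The 20% overhead factor for KV cache and runtime buffers is an assumption, not a measured number; real usage depends on context length and the inference backend.

```python
def vram_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Estimate VRAM (GiB) needed to load a model's weights.

    params_billions: parameter count in billions (e.g. 13 for a 13B model)
    bits: quantization bit width (e.g. 4 for Q4, 16 for fp16)
    overhead: assumed multiplier for KV cache and runtime buffers
    """
    weight_bytes = params_billions * 1e9 * (bits / 8)
    return weight_bytes * overhead / (1024 ** 3)

# A 13B model at 4-bit quantization fits in a 12GB card...
print(f"13B @ Q4:  {vram_gb(13, 4):.1f} GiB")
# ...but a 70B model at the same quantization does not.
print(f"70B @ Q4:  {vram_gb(70, 4):.1f} GiB")
```

By this estimate, a 12GB card tops out around 13B-class models at 4-bit, while the larger models people compare against hosted services would need the kind of 100GB+ setups mentioned above.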