FLOSS virtualization hacker, occasional brewer

  • 11 Posts
  • 428 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • I think the article is overcomplicating things. I work on a project that is heavily forked for a variety of reasons. While it’s academically interesting to look at the reasons for those downstream forks, we have no interest in going to the considerable effort of tracking them all.

    If you can take a project and use an LLM to enable your niche use case then more power to you. FLOSS was never about ensuring all patches flow upstream.

  • My kids are growing up in this environment and they already have an eye for AI slop. I suspect it’s the same thing that led to OpenAI’s TikSlop “product” getting canned. Once society has gotten over the sugar-rush excitement of new and shiny toys, I suspect the interest will fade and people will crave the connection you get from real art made by real people.

    At least I hope that is what will happen. We might have to do something to hold the tech companies accountable for their dopamine trigger machines though.

  • It’s not entirely unexpected; all the AI companies have been heavily subsidising inference to get customers.

    I don’t use Codex, but I’ve been experimenting with ECA, and I can track my token API costs across Gemini and Anthropic. I’m mostly using Gemini, and a heavy day’s usage would be £1.50 in API costs; I’m certainly not doing that every day. I have to wonder whether these Codex users are conscious of how many tokens they are burning underneath, or are just YOLOing everything until the computer says no.

    ECA allows you to mix and match models across sub-agents, and I could certainly see myself offloading tasks like code exploration to a locally hosted model and saving the expensive reasoning tokens for planning.
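
    The back-of-the-envelope cost tracking mentioned above can be sketched like this. Note the per-million-token prices are purely hypothetical placeholders I’ve made up for illustration, not any provider’s real published rates:

    ```python
    # Hypothetical per-million-token prices in GBP (illustrative only,
    # NOT real Gemini/Anthropic rates) for estimating a day's spend.
    PRICES_GBP_PER_MTOK = {
        "input": 0.10,   # assumed price per 1M input tokens
        "output": 0.40,  # assumed price per 1M output tokens
    }

    def daily_cost_gbp(input_tokens: int, output_tokens: int) -> float:
        """Estimate one day's API cost in GBP from raw token counts."""
        return (input_tokens / 1_000_000 * PRICES_GBP_PER_MTOK["input"]
                + output_tokens / 1_000_000 * PRICES_GBP_PER_MTOK["output"])

    # A heavy day at these made-up rates: 10M input + 1.25M output tokens
    print(f"£{daily_cost_gbp(10_000_000, 1_250_000):.2f}")  # → £1.50
    ```

    Even crude numbers like these make it obvious when a session is burning tokens faster than it is producing value.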