I do think the concept of Recall is very interesting; I want to explore a FOSS version where you have complete ownership of your data in a secure manner
I'm using LLMs to parse and organize information in my file directory: turning bank receipts into JSON files, automatically renaming downloaded movies into a more legible format I prefer, and summarizing clickbaity YouTube videos. I use Copilot in VS Code to code much faster, ChatGPT all the time to discover new libraries and cut fast through boilerplate, and I have a personal assistant with access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) that helps me make decisions...
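As a rough illustration of the movie-renaming part, here is a minimal sketch of the deterministic piece of such a pipeline. The function name and the exact filename pattern are my own assumptions; in a real setup the messier, irregular names are where an LLM call would take over:

```python
import re

def tidy_movie_name(raw: str) -> str:
    """Turn a scene-style release name into 'Title (Year).ext'.

    Only handles the common 'Dots.As.Spaces.YEAR.quality-GROUP.ext'
    pattern; anything messier would be handed to an LLM instead.
    """
    stem, _, ext = raw.rpartition(".")
    m = re.match(r"(?P<title>.+?)[. _](?P<year>(19|20)\d{2})\b", stem)
    if not m:
        return raw  # leave unrecognized names untouched
    title = m.group("title").replace(".", " ").replace("_", " ").strip()
    return f"{title} ({m.group('year')}).{ext}"

print(tidy_movie_name("The.Matrix.1999.1080p.BluRay.x264-GROUP.mkv"))
# → The Matrix (1999).mkv
```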
I don't know about you but I feel like I'm living in an age of wonder
I'm not sure what to say about the prompts. I feel like I'm integrating AI into my systems to automate mundane stuff and oversee more information, and I think one should be paid for the work and value produced
I feel like "most people" only learn one technology per category. They know one operating system, one browser, one app to mindlessly scroll, one program to edit text. As a developer it shocks me a little because I'm always eager to try new programming languages, technologies, and ways to interact with things. I guess most people only know about Edge/Safari because they come pre-installed
It's weird that I've been on Firefox for the vast majority of my life and always had this perception that "everyone" was using it. Here on Lemmy you hear about it all the time, my friends use it, I see it in my newsfeeds, etc.
I have decreased my meat consumption in recent years to about a third of what it used to be. I'm not qualified to do an in-depth study of all the ramifications of CO2 emissions, but with agriculture being only about 11.2% of all emissions, it sounds like eating less cow won't cut it to "save ourselves"
I have a hunch that shit will hit the fan and there will be a massive reduction in CO2 emissions because of supply chain failure. Third-world countries produce the vast majority of "low manufacturing complexity" products, which will become even more unsustainable if those regions turn into scorched earth. That, coupled with less incentive to travel due to an adverse climate and a trend of population decrease from an overall degradation in quality of life, will really be the reason we reduce emissions: simply because things stop working and become unsustainable
Either way, I don't think it's possible to really predict the future, even less so in such a complex society where technology might be a game changer all of a sudden, so my opinion is not really that valid. Even educated estimates using proper statistics/data cannot guess the implications of new wars, AI, new scientific breakthroughs, etc.
That argument is fallacious and reductionist. I'm not denying the situation is messed up, but objectively speaking we all have no idea who's making what decisions or how this Google Search shitstorm was caused
People get very confused about this. Pre-training "ChatGPT" (or any transformer model) on "internet shitposting text" doesn't cause it to reply with garbage comments; bad alignment does. Google seems to have implemented no frameworks to prevent hallucinations whatsoever, and the RLHF/DPO applied seems to be lacking. But this is not a "problem with training on the entire web". You could pre-train a model exclusively on a 4chan database and, with the right fine-tuning, get a perfectly healthy and harmless model. Actually, it's not bad to have "shitposting" or "toxic" text in the pre-training, because that gives the model the ability to identify and understand it
If anything, the "problem with training on the entire web" is that we would be drinking from a poisoned well: AI-generated text has a very different statistical distribution from the text users produce, which would degrade the quality of subsequent models. Evidence of how much data quality matters can be seen in the SlimPajama dataset, a cleaned and deduplicated version of RedPajama, which improves the scores of trained models simply because it has less duplicated information and is a denser dataset: https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
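To make the deduplication point concrete, here is a toy sketch of exact-duplicate removal. This is only an illustration of the idea; SlimPajama's actual pipeline uses MinHash-based near-duplicate detection, which this does not attempt:

```python
def dedup_exact(docs):
    """Drop exact duplicate documents, keeping the first occurrence.

    Duplicates are matched case-insensitively after stripping whitespace,
    a deliberately crude stand-in for real near-duplicate detection.
    """
    seen = set()
    out = []
    for doc in docs:
        key = doc.strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(doc)
    return out

corpus = ["The cat sat.", "the cat sat.", "A new sentence."]
print(dedup_exact(corpus))
# → ['The cat sat.', 'A new sentence.']
```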
Lemmy seems to be very near-sighted when it comes to the exponential curve of AI progress; I think this is partly because the community is very anti-corp
How did this clickbaity headline get so many upvotes? Are we really cherry-picking some outlier example of a hallucination and using it to say "haha, Google dumb"? I think there is plenty of valid criticism out there against Google that we can stick to instead of paying attention to stupid and provocative articles