
Posts: 1 · Comments: 44 · Joined: 2 yr. ago

  • I like that as well, thank you! Yeah, the "Daily AI Habit" in the MIT article was described as...

    Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.

    Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.

    You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.

    As a daily AI user, I almost never use image or video generation; my usage is basically all text (mostly in the form of code). So I don't think this "daily habit" fits most people who use AI daily, but that was their metric.

    The MIT article also mentions that we shouldn't try to reverse-engineer energy-usage numbers, and that we should instead encourage companies to release data, because outside estimates are invariably going to be off. Google's technical report affirms this: it shows that non-production estimates of AI energy usage run high, because a production system achieves economies of scale that outside estimates miss.

    Edit: more context: on the extremely high end, let's say my daily AI usage is 1,000 median text prompts to a production-level AI provider (code editor, chat window, document editing). That's equivalent to watching TV for about 36 minutes; the average daily TV consumption in the US is around 3 hours.
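As a sanity check on the article's 2.9 kWh figure, here is a back-of-the-envelope calculation using the rough per-query energy estimates the MIT Tech Review piece reports (approximately 6,700 J per large-model text prompt, 2,300 J per image, and 3.4 MJ per five-second video; these are the article's rough figures, not exact measurements):

```python
# Back-of-the-envelope check of the "daily AI habit" scenario:
# 15 text prompts + 10 images + 3 five-second videos,
# using approximate per-query figures reported by MIT Tech Review.

JOULES_PER_KWH = 3_600_000

text_j = 6_700        # ~ one prompt to a large open text model
image_j = 2_300       # ~ one image generation
video_j = 3_400_000   # ~ one 5-second video generation

total_j = 15 * text_j + 10 * image_j + 3 * video_j
total_kwh = total_j / JOULES_PER_KWH

print(round(total_kwh, 1))  # 2.9
```

Note that the three videos account for nearly 99% of the total, which supports the point that a text-only habit is far cheaper than this scenario suggests.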

  • Do you have a source for this claim? I see this report by Google and this MIT Tech Review article, both of which say that image/video generation uses a lot of energy compared to text generation.

    Taking the data from those articles, we get this table:

  • Could you explain further?

  • This feels like it would make people buy it more, because it's such a rad sticker to have on a box. It's like the Parental Advisory notice on CDs: it just made them way cooler and was like a badge of honor.

  • This further points to the solution being smaller models that know less and are trained for narrower tasks, instead of gargantuan models that require an insane amount of resources to answer easy questions. Route each query to a smaller, more specialized model based on what it's asking. This was the motivation behind MoE models, but I think there are other architectures and paradigms to explore.
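As a toy illustration of the routing idea (not any particular production system; the keyword rules and model names here are made up):

```python
# Toy query router: send each query to a small specialized model instead of
# one giant general model. Keywords and model names are illustrative only.

def route(query: str) -> str:
    q = query.lower()
    if any(k in q for k in ("traceback", "compile", "def ", "bug")):
        return "code-model-small"
    if any(k in q for k in ("integral", "solve", "equation", "+", "=")):
        return "math-model-small"
    return "general-model-small"

print(route("Why does this traceback mention KeyError?"))  # code-model-small
print(route("Solve the equation x + 2 = 5"))               # math-model-small
print(route("What's the capital of France?"))              # general-model-small
```

A learned gating network in an MoE model does essentially this inside a single model, routing tokens to expert subnetworks rather than whole queries to separate models.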

  • Boomer meme alert

  • absolutely no one:

    me: this meme format sucks

  • This fucking sucks

  • AI coding assistants have made my life a lot easier. I've created multiple personal projects in a day that would've taken me multiple days of figuring out frontend stuff.

    It's also helped me in my work, especially in refactoring. I don't know how y'all are using them, but I get a lot of efficient use out of them.

  • Sexting

  • I had a short-term ex say that and it really turned me off every time. If the relationship went on longer, I would've eventually said something. It really weirded me out, especially knowing that her dad died of cancer a year prior. It's like, what the hell is going on in your head? Get that checked out.

  • Oh I completely agree that we are turning everything to shit in about a million different ways. And as oligarchs take over more, while AI is a huge money-maker, I can totally see regulation around it being scarce or entirely non-existent. So as it's introduced into areas like the DoD, health, transportation, crime, etc., it's going to be sold to the government first and its ramifications considered second. This has also been my experience as someone working at the intersection of AI research and government application. I saw Elon's companies, employees, and tech immediately get contracts without consultation with FFRDCs or competition from other for-profit entities. I've also seen people on the ground say "I'm not going to use this unless I can trust the output."

    I'm much more on the side of "technology isn't inherently bad, but our application of it can be." Of course that can also be argued against with technology like atom bombs or whatever but I lean much more on that side.

    Anyway, I really didn't miss the point. I just wanted to share an interesting research result that this comic reminded me of.

  • Yeah, this is what I'm going to do if I think about getting another cat again. These two are probably already gone. I was just entranced yesterday and my imagination was running a little too wild haha.

  • I don't think they're bonded. They were just delivered to the PetCo from a rescue at the same time.

  • Honestly, moving would be devastating at this point. I'd probably have to pay $200 more for a place 1/2 the size (and I currently have a personal garage and a balcony). I'm not gonna risk it because I saw something cute lol

  • 🤣 awesome

  • I think this is the right answer. I've been a little more flippant with rules in my life lately and I think I needed someone else to tell me this. I don't really want to give my landlord any reason to raise the rent more or kick me out.

  • cats @lemmy.world

    Should I adopt one of these as a friend for my 2 year old cat?

  • Oh no, I mean could you explain the joke? I believe I get it (shitty AI will replace experts). I was just leaving a comment about how systems that use LLMs to check the work of other LLMs do better than those that don't, and about how, when I've introduced AI systems to stakeholders making consequential decisions, they tend to want a human in the loop. That will probably change over time as AI systems get better and we get more used to using them. Is that a good thing? It will have to be judged case by case.

  • Could you explain?

  • That's why too high a level of accuracy in ML always makes me squint... I don't trust it. As an AI researcher and engineer, you have to do the due diligence of understanding your data well before you start training.
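A minimal example of the kind of due diligence meant here: checking for train/test leakage (identical rows appearing in both splits), which is one common cause of suspiciously high accuracy. The data is made up for illustration:

```python
# Check for exact-duplicate rows shared between the train and test splits,
# a common cause of suspiciously high accuracy. Toy data for illustration.

train = [("cat", 1), ("dog", 0), ("fish", 1)]
test = [("dog", 0), ("bird", 1)]

leaked = set(train) & set(test)
print(len(leaked))  # 1 -> ("dog", 0) appears in both splits
if leaked:
    print("Warning: possible train/test leakage:", sorted(leaked))
```

Real pipelines also need fuzzier checks (near-duplicates, label imbalance, features that proxy for the label), but an exact-overlap check is a cheap first step.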