Posts: 5 · Comments: 266 · Joined: 2 yr. ago

  • Well, that paper only says it's theoretically impossible to completely eliminate hallucination. That doesn't mean it can't be mitigated and reduced to the point of insignificance. I think fabricating things is part of creativity: LLMs are supposed to come up with new text, but maybe they're not really incentivised to differentiate between fact and fiction, and they have been trained on fictional content, too. I think the main problem is controlling when to stick close to the facts and when to be creative. Sure, I'd agree that we can't make them infallible. But there's probably quite some room for improvement. (And I don't really agree with the premise that hallucination is caused solely by shortcomings in the training data. It's an inherent problem of being creative, and of the world consisting of fiction, opinions and so much more than factual statements... But training data quality and bias also have a severe effect.)

    That paper is interesting. Thanks!

    But I really fail to grasp the diagonal argument. Can we really choose the ground truth function f arbitrarily? Doesn't that just mean that, across arbitrary realities, there's no hallucination-free LLM in all of them? But I don't really care if there's a world where 1+1=2 and simultaneously 1+1=3, and no LLM can tell the "truth" in that world... I think they need to narrow down f. To me, a reality has to fulfil certain requirements, like being free of contradictions. And they'd need to prove that Cantor's diagonalization still applies to that restricted subset of f. (Rough sketch of my reading below.)

    And secondly: why does the LLM need to decide between true and false at all? Can't it just say "I don't know"? I think that would immediately ruin their premise, too, because they only consider LLMs that never refuse and always have to commit to a truth value.

    I think this is more closely related to Gödel's incompleteness theorems, which somehow aren't mentioned in the paper. I'm not a proper scientist and didn't fully understand the proof, so I might be wrong about all of this. But it doesn't feel correct to me. And the paper hasn't been cited or peer-reviewed (as of now), so for the moment it's more like just their opinion, anyway. I'd say (if their maths is correct) they've only proved that there can't be an LLM that knows everything in every possible and impossible world. That hardly matters, because LLMs that don't know everything are useful, too. And we're concerned with one specific reality here, one with limitations like physics, objectivity and consistency.
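    For what it's worth, here's how I understand such a diagonal argument in general. This is a rough sketch of my own reading, not the paper's exact formalism:

        % Sketch only: assumes we can enumerate all candidate LLMs h_1, h_2, ...
        % and all input strings s_1, s_2, ..., then define an adversarial ground
        % truth f that flips each model's answer on "its" input:
        \[
            f(s_i) \neq h_i(s_i) \quad \text{for all } i \in \mathbb{N}
        \]
        % Then no h_i agrees with f on every input, i.e. every LLM hallucinates
        % somewhere. My objection above: this f is built adversarially and need
        % not describe a consistent reality, so restricting the class of
        % admissible f could break the construction.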

  • Lemmy.ml tankie censorship problem

  • I get you. But they're the flagship instance, or at least they used to be. They shape the brand identity of Lemmy as a whole. And that identity is being tankie, with a culture that could be nice but regularly isn't. So everyone on the internet "knows" Lemmy isn't something they'd want to subject themselves to. And if we're being honest, almost nobody knows the fine nuances of power abuse on specific instances. It's just "Lemmy" that all of it gets attributed to.

    Every interaction here represents Lemmy. Some disproportionately so.

    And as we've established, me leaving (which I've done) isn't going to change anything about it. The .ml communities are still among the largest, they're where most of the users are, and they're what attracts new users.

    Your argumentation would be perfectly valid if lemmy.ml were some small instance that most users have never heard of, or one blocked by the rest of the network. Then we could ignore them and let them do their own thing, like the Fediverse does with a few nazi and conspiracy instances. But that isn't the case here.

    Regarding money and doing it "for the fun of it": that's not correct. They receive funding for two or three full-time jobs from the NLnet fund and the EU. They may be having fun, too, but they definitely also get a substantial amount of money for it.

    Concerning the 4chan example: that's on point. 4chan is the epitome of echo-chamber and incel culture, mainly because there's no one else left; everyone else moved away. And why would anyone new visit a place like that in the first place? I'd rather Lemmy didn't become like that. Would you?

  • Lemmy.ml tankie censorship problem

  • And am I supposed to let other people be subject to that, too? Let people like that drag down Lemmy as a whole? Shouldn't I have a nice and welcoming place on the internet for me and my friends?

    Do you like echo chambers? If you want my perspective: I have so far recommended Lemmy to exactly zero of my friends, because of things like this. Lemmy has quite some potential, but it has so many issues to tackle, and the culture here just isn't what appeals to "normal" people. If other people share my experience, that's exactly why Lemmy is still below 50k active users and super small.

    Sure. I moved away from the .ml communities a few weeks ago because I think it's the right thing to do (for me). They're just dragging everyone down and making Lemmy a worse place, as we constantly see with posts like this one. Should we (the people who want more than an echo chamber and want fair, honest discussions) all abandon Lemmy?

  • Lemmy.ml tankie censorship problem

  • Probably !support@lemmy.world

    Go to lemmy.world and have a look at the sidebar; that's where instances publish info like that. They list several ways to contact them there.

  • Lemmy.ml tankie censorship problem

  • I don't think so. It's a bit like being bullied while your friends are being bullied, too. What do you do? Leave the room and be happy they're bullying your friends and not you? Keep silent, which ultimately enables them? No. You speak up about it. You warn your friends not to go in there. And you try to do something about it. In the end it's the bullies who should leave, not the nice people. Otherwise the whole place is doomed and just gets worse.

  • Lemmy.ml tankie censorship problem

  • My first idea would be to have users report posts and then ping a random sample of, say, 20 active and currently online users of the community, who decide democratically. Random selection prevents brigading and stops groups from collectively mobbing or harassing other users. It'd be somewhat similar to a jury in court. We obviously can't ask everyone, because that takes too much time, and some content needs to be moderated asap. (Toy sketch below.)
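    A toy sketch of how that jury selection could work, purely illustrative (the function names, jury size and the source of online users are all made up):

        import random

        def pick_jury(online_users: list[str], reporter: str, author: str,
                      jury_size: int = 20) -> list[str]:
            """Pick a random moderation jury from currently online community members.

            Random selection is the anti-brigading property: nobody can volunteer
            themselves or their friends onto the jury.
            """
            # Exclude the involved parties so they can't vote on their own report.
            pool = [u for u in online_users if u not in (reporter, author)]
            if len(pool) < jury_size:
                raise ValueError("not enough eligible online users for a jury")
            return random.sample(pool, jury_size)

        def verdict(votes: dict[str, bool]) -> bool:
            """A simple majority of the jury decides whether the post is removed."""
            return sum(votes.values()) > len(votes) / 2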

  • Lemmy.ml tankie censorship problem

  • You're 100% right, OP. Don't let people tell you it's a you-problem and that you should leave. It's exactly like you said (in my opinion). If anyone should leave, it's the bad people, not the nice ones and the ones calling out the bullshit.

    Nothing changes if the just people keep silent and let bigotry (or whatever) happen; it only makes the whole place worse. I'd say it's warranted to speak up or do something about it. And as far as I've heard, you're not the only one complaining.

  • Lemmy.ml tankie censorship problem

  • Are you referring to me or BigFig? I'm neither a mile (I'm European, so we use the metric system), nor a mole. If you make me choose an animal, I'd like to be an alpaca. And I'd be willing to do a captcha to prove to you that I'm not a bot.

  • Lemmy.ml tankie censorship problem

  • Thanks for spreading the word. We get these complaints every few weeks. More people need to be educated and move away from these instances to make the Threadiverse a better place.

  • services.tabby.enable = true;            # enable Tabby, a self-hosted AI coding assistant
    services.tabby.acceleration = "cuda";    # run inference on an NVIDIA GPU via CUDA

    Could be another way?

  • I'm pretty sure he did this out of his own motivation, because he thinks (or thought) it's a fascinating topic. So sure, this doesn't align with popularity, but it's remarkable anyway, you're right. And I always like to watch the progression: as far as I remember, the early videos lacked the professional audio and video standards that are nowadays the norm on YouTube. At some point he must have bought better equipment, but his content has been compelling since the start of his YouTube 'career'. 😊

    And I quite like the science content on YouTube. Lots of people are making really good videos, from professional video producers to scientists (and hobbyists) who just share their insights and interesting perspectives.

  • And maybe have a look at his YouTube channel and the older videos, too. Lots of them are a bit more philosophical and not too technical for the average person. I think he's quite inspiring and conveys very well what AI safety is about and what kinds of problems that field of science is concerned with.

  • I'd agree with the recommendation of Lutris and Bottles. Just install both and see which one you like and which works best for you. I've heard Lutris is pretty good. Both tools handle most of the underlying stuff for you, like managing Wine and Proton.

    There are quite a few guides, tutorials and YouTube videos on how to use them.

  • I think they're using Widevine DRM, and with DRM they can enforce whatever arbitrary policies they like, including special restrictions for Linux. I think Amazon caps Linux at 480p, Netflix at 720p, and YouTube allows 4K, or something like that. AFAIK it has little to do with technology; it's just a number that each company sets in its configuration.

  • Quite a few AI questions coming up in selfhosted these last few days...

    Here are some more communities I'm subscribed to:

    And a few inactive ones on lemmy.intai.tech

    I'm using KoboldCpp and Ollama. KoboldCpp is really awesome. In terms of hardware, it's an old PC with lots of RAM but no graphics card, so it's quite slow for me; I occasionally rent a cloud GPU instance on runpod.io. I'm not doing anything fancy: mainly role play and recreational stuff, plus the occasional request for creative ideas, a translation, or re-wording or drafting an unimportant text or email.

    I've tried coding, summarizing and other tasks, but the performance of current AI isn't enough for my everyday work.
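    If anyone wants to script something similar against a local Ollama instance, here's a minimal sketch, assuming Ollama's default HTTP API on localhost:11434 and a model you've already pulled (the model name is just an example):

        import json
        import urllib.request

        def generate(prompt: str, model: str = "llama3") -> str:
            """Send one non-streaming completion request to a local Ollama server."""
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
                data=json.dumps({"model": model, "prompt": prompt,
                                 "stream": False}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        print(generate("Re-word this politely: send me the report."))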

  • Yeah, but usually with open-source software you get something like 150 GitHub comments complaining and outlining the shady business practices... if there's something to complain about.

    The XZ disaster is an example of something else: there are probably more backdoors in proprietary software that we just don't know about, and those can simply be kept hidden away, with the manufacturers forced to keep quiet. No elaborate social engineering like in the XZ case needed... And no software is safe; they all have bugs, and most depend on third-party libraries. That has nothing to do with being open or closed source. If anything, being open source gives you a better chance of catching mischievous behaviour, at least generally speaking. There will be exceptions to this rule.

  • What's that got to do with AI?

    Edit: Ah. Probably the search bar from the screenshot.

  • Isn't that very similar to what TikTok does? Just with a different algorithm and maybe content other than just videos?