  • Plug it into a computer and see what the computer says.

    I usually use Linux for that because it offers good error messages and I know the tools. But other operating systems might help, too.

    And before you write anything to the card or run recovery tools, make a backup/image first; see the sketch at the end of this comment.

    If the files are very important, maybe don’t tamper with it at all and ask for help instead: a repair shop, your local Linux community, or a trustworthy computer-expert friend.

    The biggest enemy is probably encryption, if the card is encrypted. If you just ripped the card out, the files are definitely still there. In the old days you could simply run a recovery program and get everything back.
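
    A minimal sketch of the imaging step, assuming a Linux machine where the card shows up as /dev/sdb (that device node is an assumption — check with lsblk first; a dedicated tool like GNU ddrescue copes with read errors far better than this):

    ```python
    # Copy the raw SD card into an image file before running any
    # recovery tool, so all experiments happen on the copy.
    # /dev/sdb is an assumption -- verify the device node with
    # `lsblk` and run with root privileges.
    import shutil

    SRC = "/dev/sdb"       # assumed device node of the SD card
    DST = "sdcard.img"     # recovery tools then work on this image

    with open(SRC, "rb") as card, open(DST, "wb") as image:
        shutil.copyfileobj(card, image, length=4 * 1024 * 1024)  # 4 MiB chunks
    ```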





  • I’ve used laptops for more than a decade. Sure, in the early days thermal management wasn’t that elaborate. But I haven’t seen a laptop in many, many years that doesn’t throttle accurately, and it’s usually done in hardware, so there isn’t really any way for it to fail. I’ve played games and compiled software for hours with all CPU cores at 100% and the fans blasting, at least on my current laptop and the two Thinkpads before it. The first one had really good fans and never reached the limit; the others held it to within 2 or 3 degrees. No software necessary. I’m pretty sure that with the technology of the last 10 years, throttling doesn’t ever fail unless you deliberately mess with it.

    But now that I’m thinking of the fans… maybe if the fan is clogged or has mechanically failed, there is a way. A decent Intel or AMD CPU will still throttle, but without a fan and airflow inside the laptop, other components might get too hot. I’m thinking of parts like capacitors or the hard disk, which can’t defend themselves. The iGPU should be part of the processor’s thermal budget; maybe it’s handled differently because it doesn’t draw that much power and doesn’t really contribute to overheating. I’m not sure.

    Maybe it’s more of a hardware failure: a defective sensor, dust, a loose heatsink, dried-out thermal paste or the fan? I still can’t believe a laptop would enter that mode unless something was wrong with the hardware. But I might be wrong. (If anyone wants to check their own machine, a sketch for watching the sensors under load is below.)
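
    A minimal sketch, assuming a Linux laptop that exposes the standard /sys/class/thermal interface (the number and names of the zones vary between machines):

    ```python
    # Poll the kernel's temperature sensors once per second, to watch
    # whether the CPU really holds its limit under full load.
    # Assumes the standard Linux /sys/class/thermal interface.
    import glob
    import time

    while True:
        readings = []
        for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
            with open(zone + "/type") as f:
                name = f.read().strip()
            with open(zone + "/temp") as f:
                temp_c = int(f.read().strip()) / 1000  # value is in millidegrees
            readings.append(f"{name}: {temp_c:.1f} °C")
        print("  ".join(readings))
        time.sleep(1)
    ```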



  • Why does it force the processor over the limit in the first place?

    I think in every other laptop the CPU simply throttles when it gets too hot, meaning it can never exceed the maximum temperature. I wonder whether this is a misunderstanding or whether HP actually did away with all of that and designed a laptop that will cook itself.

    And it’s not even a good design decision to shut down the PC when someone runs a game… Aren’t computers meant to run them? Why not lower the framerate automatically by throttling? Why shut down instead?




  • Thanks for taking the time to explain it to me. The GitHub issue is also very helpful. It seems that’s exactly my answer to “Why do I need a fourth store in addition to F-Droid, AuroraStore and Obtainium?” 😉

    Have a nice day, and thanks for the STT keyboard! I didn’t really engage in the discussion because I’m in exactly the same situation as other people here. I already have the FUTO one and Sayboard… but eventually I’d like to replace the FUTO software with free software alternatives, since I don’t like their licensing. So this is very welcome.





  • It’s lemmy.ml. During the Reddit API wars, lots of people came here and lots of new instances were founded. lemmy.world was part of that and quickly grew into what is, I think, now by far the largest instance. But lemmy.ml is at least 2 years older and hosted by the actual developers, and because of that history it still hosts some of the large communities.

    Yeah. And “Lemmy self-corrects” is kind of what this post is about (in my opinion). I’d like to see lemmy.world and a few other instances now do it and defederate. That’s how it should be: call out bullshit, be vocal, and then do something about it. My point is, we’re at phase 1 or 2. Now we’re going to see whether Lemmy self-corrects. As of now, it hasn’t.

    I think just hoping for a bright future isn’t cutting it. And if you ask me, all the infighting and instances defederating from each other isn’t healthy either.


  • Well, that paper only says it’s theoretically impossible to completely eliminate hallucination. That doesn’t mean it can’t be mitigated and reduced to the point of insignificance. I think fabricating things is part of creativity: LLMs are supposed to come up with new text, but maybe they’re not really incentivised to differentiate between fact and fiction. They have been trained on fictional content, too. I think the main problem is controlling when to stick close to the facts and when to be creative. Sure, I’d agree that we can’t make them infallible, but there’s probably quite some room for improvement. (And I don’t really agree with the paper’s premise that hallucination is caused solely by shortcomings in the training data. It’s a problem inherent in being creative, and in the fact that the world also consists of fiction, opinions and so much more than factual statements… But training data quality and bias also have a severe effect.)

    That paper is interesting. Thanks!

    But I really fail to grasp the diagonal argument (I’ve sketched my rough reading of it at the end of this comment). Can we really choose the ground-truth function f arbitrarily? Doesn’t that just mean that, given arbitrary realities, there is no LLM that is hallucination-free in all of them? I don’t really care if there’s a world where 1+1=2 and simultaneously 1+1=3, and no LLM can tell the “truth” in that world… I think they need to narrow down “f”: to me a reality has to fulfill certain requirements, like being free of contradictions. And they’d need to prove that Cantor’s argument still applies to that subset of “f”.

    And secondly: why does the LLM need to decide between true and false? Can’t it just say “I don’t know”? I think that would immediately ruin their premise, too, because they only look at LLMs that never refuse and always have to commit to a truth value.

    I think this is more closely related to Gödel’s incompleteness theorems, which somehow aren’t mentioned in the paper. I’m not a proper scientist and didn’t fully understand it, so I might be wrong about all of this, but it doesn’t feel correct to me. And the paper hasn’t been cited or peer-reviewed (as of now), so it’s more like just their opinion anyway. I’d say that (if their maths is correct) they’ve proved that there can’t be an LLM that knows everything in every possible and impossible world. That doesn’t quite apply, because LLMs that don’t know everything are useful too. And we’re concerned with one specific reality here, one with constraints like physics, objectivity and consistency.
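
    For reference, here’s the diagonalization as I read it; the notation is my own reconstruction, not necessarily the paper’s:

    ```latex
    % My rough reconstruction of the diagonal argument. Enumerate all
    % computable models as h_1, h_2, ... and pick one probe input x_i per
    % model; the ground-truth function f is then defined adversarially,
    % after the models are fixed:
    \[
      f(x_i) \neq h_i(x_i) \quad \text{for every } i \in \mathbb{N},
    \]
    % so every model h_i disagrees with f somewhere, i.e. "hallucinates"
    % on at least the input x_i. As far as I can tell this only works
    % because f may be chosen freely; restrict f to consistent realities
    % and the construction has to be re-justified.
    ```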