
Posts 3 · Comments 105 · Joined 6 mo. ago

  • Graphene (more specifically its founder) is always in a vicious cycle of claiming that everyone asking for proof of Graphene being "under attack" is itself making an "attack". You can consider yourself Graphene's enemy for life for your transgression.

    Watching these YouTube videos makes you an attacker too, so be careful: https://youtu.be/Dx7CZ-2Bajg https://youtu.be/4To-F6W1NT0

  • I don't want to write up a whole paper at the moment, but I'll note that you really shouldn't be trusting any cloud provider with your data, because you should always be fully encrypting your data before they get their hands on it. Plasma Vaults (if you use KDE) are one way to do this, or you can use something like Cryptomator, gocryptfs, etc. Basically, you store files encrypted in one directory (/home/me/Encrypted), and that data is transparently decrypted at another mountpoint for your regular usage (/home/me/Unencrypted). Modifications in the Unencrypted directory automatically propagate to the Encrypted directory through the use of magic. The cloud provider only syncs the Encrypted directory, and without the key they learn almost nothing about your data.

    Given this sort of workflow, you can store your data anywhere, as long as you have a nice (open-source) way of syncing to that provider that can't introduce any further vulnerability.
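    A minimal sketch of that workflow using gocryptfs (one of the tools mentioned above; the directory paths are just examples):

```shell
# One-time setup: create the encrypted directory and the plaintext mountpoint
mkdir -p ~/Encrypted ~/Unencrypted
gocryptfs -init ~/Encrypted        # prompts you to set a password

# Each session: mount the decrypted view
gocryptfs ~/Encrypted ~/Unencrypted

# Work in ~/Unencrypted as usual; everything written there lands encrypted
# in ~/Encrypted, which is the only directory the cloud client should sync.

# Unmount when done
fusermount -u ~/Unencrypted
```

    Cryptomator and Plasma Vaults follow the same cipher-directory/plaintext-mountpoint pattern, just with a GUI on top.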

  • I'm not like a sponge connoisseur, but I've been using "O-Cedar Scrunge" sponges for about a year and they're pretty rugged. I have two sponges in rotation, and every time I do a dishwasher load I alternate them through it. They've never really fallen apart on me, but I think the green scratchy side gets a little less scratchy over time, and I just replace both of them every 2-3 months for good measure. I'm assuming that there's a scientific paper somewhere that says using sponges for that long will kill me or something, but I'm still alive so far so fingers crossed.

  • I'm curious to see how the price will be affected as consumer PCs get stronger every year. Will they update the Steam Machine every couple of years, or will they decrease the price? I have to assume they are targeting a neutral price because their primary goal is to assemble a linux box with as little margin as possible and put it in front of you for an actual fair price, but "fair price" is a moving target.

    Personally, I'm all for getting what I pay for. People who sell to you at a loss are up to something.

  • I don't entirely mean to throw rocks, but there's something funny about them dragging their feet so long on supporting a linux version (8+ years) that by the time their personal breaking point with windows came, they discovered themselves on the other side of the issue with no one to blame but themselves. Maybe a parable.

  • Absolutely not trusting this. Uninstalling until we know more, and ideally just getting a different solution entirely. A new account tried to impersonate Catfriend1 directly at first, and then they switched to researchxxl when someone called it out (both are new accounts). Meanwhile the original Catfriend1 has provided no information about this, and we only have the new person's word as to what's going on. There are way too many red flags here.

  • It's not only that 10% is wrong, it's knowing which 10% is wrong, which is more important than it seems at first glance. I feel strongly that AI is contributing to people's inability to really perceive reality. If you direct all your learning through a machine that lies 10% of the time, soon enough your entire world-view will be on shaky ground. How are you going to have a debate with someone when you don't know which parts of your knowledge are true? Will you automatically concede your knowledge to others, who may be more convincing and less careful about repeating what they've learned through AI?

    I think all that AI really needs to do is translate natural language requests ("What factors led to WW2?") into normal destinations for further learning. Letting AI try to summarize those destinations seems like a bad idea, at least given where the technology is right now.
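    To put a rough number on that worry (treating the 10% figure above as an independent per-fact error rate, which is a simplification):

```python
# If each AI-sourced "fact" is right with probability p, the chance
# that ALL k of them are right decays exponentially in k.
def chance_all_correct(k: int, p: float = 0.9) -> float:
    return p ** k

for k in (1, 5, 10, 20):
    print(f"{k} facts: {chance_all_correct(k):.0%} chance they're all right")
```

    Even at 90% per-fact accuracy, a worldview assembled from twenty such facts is more likely to be wrong somewhere than to be fully right.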

  • On Debian 13.1, I just updated yt-dlp to stable@2025.11.12, but I still cannot download videos. What am I doing wrong?

  • Semi-related for people whose distros don't package deno, I installed deno in a distrobox and exported it with distrobox-export and yt-dlp picked it up just fine from my $PATH. Before I did so, running yt-dlp gave the following error:

       
        
    WARNING: [youtube] No supported JavaScript runtime could be found. YouTube extraction without a JS runtime has been deprecated, and some formats may be missing. See https://github.com/yt-dlp/yt-dlp/wiki/EJS for details on installing one. To silence this warning, you can use --extractor-args "youtube:player_client=default"
    
      
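    For reference, the distrobox workaround described above looks roughly like this (the container name and image are my examples; assumes distrobox with a podman or docker backend):

```shell
# Create a container and install deno inside it
distrobox create --name deno-box --image archlinux:latest
distrobox enter deno-box -- sudo pacman -S --noconfirm deno

# Run distrobox-export from inside the container; it drops a wrapper
# script into ~/.local/bin on the host (make sure that's in your $PATH)
distrobox enter deno-box -- distrobox-export --bin /usr/bin/deno

# yt-dlp on the host can now find deno as its JS runtime
deno --version
```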
  • I just want to note that Jellyfin MPV Shim exists and can do most of this MPV stuff while still getting the benefits of Jellyfin. You're putting a lot of emphasis on Plex-specific limitations (which Jellyfin doesn't have obviously) and transcoding (which is a FEATURE to stopgap an improper media player setup, not a limitation of Jellyfin).

    Pretty much every single "Pro" is not exclusive to pure MPV vs. Jellyfin MPV Shim, which mainly leaves you with the cons. Also as another commenter said, I set my Jellyfin up so that my friends and family can use it, and that's its primary value to me. I feel like a lot of this post should be re-oriented towards MPV as a great media player, not against Jellyfin as a media platform.

  • I can launch it fine with GE-Proton10-21 and these launch options: WINEDLLOVERRIDES="wsock32=n,b" %command% -skip_intro -steamMM -NewCPU

  • Yep, I forgot it's not a company. The point stands though; someone has to pay for the servers and administration, and if they run out of money or the foundation falls apart, then the problem happens in the same way. I don't know much about Wikipedia's structure, but I would guess it's a similar situation in terms of needing money to stay running and also being able to be salvaged by the community if it does go down.

  • Weirdly, Chris does have a GitHub repo for it, which would be way more secure for serving downloads, but it's not mentioned on the download page. We're also assuming that Chris himself is trustworthy and hasn't included a malicious payload in this file; if he has, it doesn't matter whether the download is authentic - it's still malicious.

    My advice is don't run things you don't understand or that people you trust have not vouched for. If you use Linux, you inherently trust your repository maintainers to not serve you malicious code and to audit the packages they are maintaining, so you can be delivered safe software from secure repositories without needing to understand every line of code. This isn't 100% bulletproof, as we saw with the XZ Utils backdoor, but it's a hell of a lot safer than piping things raw.

  • Worth noting that when What died, ~4 new sites popped up immediately and invited all the old members, and everyone raced to re-upload everything from What onto them, which was actually pretty effective. At this point, RED and OPS have greatly surpassed What in many ways, aside from some releases that never made it back (you can actually find out which releases used to exist because What's database was made available after its death). Users and staff are a lot more prepared if it happens again, e.g. keeping track of all metadata via "gazelle-origin".

    If by "in" you mean how to get into them, generally you're supposed to have a friend invite you. If you don't have anyone you know on private trackers, you've gotta get in from scratch. Luckily, RED and OPS both do interviews to test your knowledge on the technicals of music formats, though I've heard RED's interview queues are long and OPS's interviews are often just not happening: https://interviewfor.red/en/index.html https://interview.orpheus.network/

    Alternatively, you can interview for MAM, which is IMO the best ebook/audiobook tracker. They're super chill and have a very simple interview e.g. "what is a tracker": https://www.myanonamouse.net/inviteapp.php. After that, you can just hang around there for a while until you can get into their recruitment forums to get invites to other entry-level trackers, and then on those entry-level trackers you can get recruited into slightly higher-level trackers, and so on, and eventually RED/OPS should be recruiting from somewhere.

    This can feel a little silly and convoluted, but I guess I'd just appreciate that these sites put the effort into conducting interviews for new people at all, since the alternative is that you will just never get into anything without a friend. Reddit's /r/trackers wiki is unfortunately one of the better places for information about private trackers if you want to do further reading.

  • As an aside, please don't pipe arbitrary code from the internet directly into execution. Download the file and read it first. Someone could easily pwn that site and host malware at that URL, for example.
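    In concrete terms, the safer pattern looks like this (the URL is a placeholder):

```shell
# Risky: curl -fsSL https://example.com/install.sh | sh
# Safer: download, read, optionally verify, then run.
curl -fsSLo install.sh https://example.com/install.sh
less install.sh          # actually read what it's going to do
sha256sum install.sh     # compare against a published checksum, if one exists
sh install.sh            # only once you're satisfied
```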

  • If you have any drive to get back into it, TMK the interview for RED is roughly the same as the interview for WCD, and although OPS isn't interviewing right now it's fairly easy to get to power user on RED and get an invite to OPS that way. I think RED is a little bit more hard-ratio than WCD was because RED doesn't do freeleech staff picks or site-wides, but they do give out handfuls of freeleech tokens from time to time, so even if you can't keep up with ratio requirements you can still nab free stuff with those just by having an account. As before, having an OPS account will help tremendously for keeping up with RED ratio, and eventually it'll become a non-issue.

  • Yes, it's allowed and encouraged between RED<->OPS. There are a few tools on the RED and OPS forums to automate most of the process (e.g. Transplant, REDCurry, Takeout, Orpheus-Populator, etc.). Cross-posting torrents on many sites is allowed and fine, you just have to be aware of the rules of the source site, e.g. some places don't want their internals to be shared, or some have a literal timer countdown before cross-posting is allowed. On the other hand, most sites are not going to enforce other sites' exclusivity demands (PTP explicitly has a note about this). If an exclusive file is cross-posted onto PTP, PTP isn't going to take it down on anyone's behalf.

    I'll note that private tracker culture has warmed up quite a bit in the past decade and a half that I've been on them. Trackers (and their users) don't usually see other trackers as rivals/competitors anymore, release groups are respectful of each other, there are a ton of tutorials and help forums around to help low-skill members learn how to do the advanced stuff, and so on. There are recognizable usernames everywhere, and the general vibe is to cross-upload as much as possible and help build everyone's trackers together. Cross-seed (the program) has helped a lot with this, and seedbases have become very strong even on smaller trackers as a result.

  • Mainly, HDDs are bigger and FLAC is future-proof (you can transcode it into whatever format comes next), and I think the death of What.CD really impressed upon the next generation that preservation is of utmost importance. A lot of albums were fully lost during the transition to RED/OPS, and a good chunk of albums that used to have a lossless copy now only have lossy versions from those who kept MP3 libraries. IMO, piracy is ownership, and owning the master lossless copy so you can generate any other format is that concept taken to its logical conclusion.

  • Seconding the notion to get into OPS somehow if at all possible. RED's economy is one of the few that is actually non-trivial, whereas OPS's is totally trivial. A large amount of RED content is automatically mirrored to OPS, so you can just grab it at OPS and cross-seed back to RED (there are a few tools to do this automatically, e.g. nemorosa). RED is still definitely the more active and higher-quality place to be, but cross-seeding shenanigans with OPS will keep RED's economy in check.

  • A lot of people just rip Qobuz, Deezer, and Tidal FLAC for free using shared keys that you can find on the megathread ("Knowledge & Tokens"). Autosnatchers will give you at least one snatch per upload. No one is actually buying most of that WEB FLAC. There also might be a big batch of freeleech tokens during December for kickstarting a library. Also, I'd recommend just going full FLAC from the start; MP3 is easier/smaller to snatch, but it's 2025 and no one wants MP3, so long-term you'll get the best results by perma-seeding a large FLAC library.