

I think the important bit to understand here is that LLMs are never not hallucinating. But they sometimes happen to hallucinate something correct.


How much do you think a weighted blanket weighs?
Why would it be too late?
If the state of things really is that bad, doesn’t it make sense to try to correct things? The best time to start correcting was certainly in the past… But the next best time to start correcting is now.


I realised a while ago that it’s way cheaper to hunt for second-hand Intel NUCs, and the resulting machine is way more powerful… And the RAM and storage are upgradeable, if the NUC didn’t come with plenty of either already…


Immich is a self-hosted photo and video management system. It’s heavily inspired by Google Photos.


“Codebase drag”, formerly known as “technical debt”.
In that case Debian, Ubuntu, and Windows Server should absolutely dwarf OSX…
That said, I personally wouldn’t consider “server use” to be “mainstream”… To me install-base does not equate to “mainstream”.
Shouldn’t Ubuntu be more “mainstream” than Debian?


The “routing” can still refer to routing to devices attached via a switch. So no need for a third port to qualify as a router.


That looks like a crazy efficient way of downing drones… And a lot cheaper in the long run than kamikaze-ing into them…
Awesome development, and nice aiming.
Slava Ukraini


All my Docker images are declared in code on GitHub.
Renovate makes a PR when there are image or helm chart updates.
ArgoCD sees the PR merge and applies to Kubernetes.
For a few special cases I use ArgoCD-image-updater.
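For context, an ArgoCD Application watching such a repo might look roughly like this (app name, repo URL, paths, and namespaces are all made-up placeholders, not my actual setup):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: media-stack              # placeholder app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab   # placeholder repo
    targetRevision: main
    path: apps/media-stack
  destination:
    server: https://kubernetes.default.svc
    namespace: media
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from git
      selfHeal: true   # revert manual changes in the cluster
```

With `automated` sync enabled, merging the Renovate PR is all it takes: ArgoCD notices the new commit and rolls out the updated image or chart.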


Well… Canada has a land border with Denmark…


I have my Firefox configured to force HTTPS, so it’s rather inconvenient to work with any non-HTTPS sites.
Because of that I decided to make my own CA. Since I’m running in Kubernetes and using cert-manager for certs, this was really easy: add a resource for a self-signed issuer, issue a CA cert, then create an issuer based on that CA cert. Three Kubernetes resources total: https://cert-manager.io/docs/configuration/ca/ Finally, import the CA cert on your various devices.
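The three resources look roughly like this, following the cert-manager docs linked above (names and the key algorithm are just example choices):

```yaml
# 1. A self-signed issuer to bootstrap the CA
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
# 2. The CA certificate itself, signed by the issuer above
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-ca
spec:
  isCA: true
  commonName: my-ca
  secretName: my-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
# 3. A CA issuer that signs leaf certs with the CA from step 2
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: my-ca-secret
```

After that, any Certificate resource referencing `my-ca-issuer` gets a cert chained to your own CA.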
However, this can also be done using Let’s Encrypt with the DNS-01 challenge. That way you don’t need to expose anything to the Internet, and you don’t need to import a CA on all of your devices. Any cert you issue will, however, appear in certificate transparency logs. So if you don’t want anyone to know that you are running a Sonarr instance, you shouldn’t issue a certificate with that in its name. A way around that is a wildcard cert, which you can then apply to all your subservices without exposing the individual service names in the logs. The wildcard itself will still be visible in the logs though…
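A sketch of the DNS-01 wildcard setup with cert-manager, assuming Cloudflare-managed DNS (the email, domain, and secret names are placeholders; other DNS providers have their own solver config):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com              # placeholder ACME account email
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:                   # assumption: Cloudflare DNS
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-home
spec:
  secretName: wildcard-home-tls
  dnsNames:
    - "*.home.example.com"              # placeholder internal domain
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
```

Only `*.home.example.com` ends up in the transparency logs; the individual subdomains behind it stay private.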


PSU can indeed make a pretty big difference.
If you only have an 80 Plus certified PSU and see 65 watts drawn at the wall, your system might actually only be using 52 watts; the remaining 13 watts are wasted as heat in your PSU. 80 Plus Gold, Platinum, and Titanium all carry higher efficiencies, but also cost more to buy.
Actual efficiency is also heavily influenced by the load. Most PSUs are most efficient at around 50% load; both lower and higher loads will result in worse efficiency.
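The arithmetic above is just wall draw times efficiency; a quick sketch (the 80% figure is the baseline 80 Plus guarantee at typical loads, real units vary):

```python
def dc_power(wall_watts: float, efficiency: float) -> float:
    """Power actually delivered to the components; the rest becomes heat in the PSU."""
    return wall_watts * efficiency

# 65 W at the wall through a plain 80 Plus unit at its minimum 80% efficiency:
delivered = dc_power(65, 0.80)
wasted = 65 - delivered
print(delivered)  # 52.0 W reaches the components
print(wasted)     # 13.0 W lost as heat
```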
Here’s an article with some more details: https://www.technewstoday.com/power-supply-efficiency/


https://minerva-archive.org/ is working on archiving all the data from Myrient. Unfortunately their main page is down right now. But they have a client that volunteers are running to coordinate the archival efforts. Last I heard, they already had >80% of all the content archived.


Resilio wouldn’t work well for distribution…
But archive.org seems to handle torrents pretty well. When they have a bundle they add a torrent with the same content, and set themselves up as the webseed… Then everyone can download either directly or through the torrent, and choose to seed what they want. If the content changes, post a new torrent… Of course that means any old seeders get invalidated… But if they care about seeding they could switch to the new torrent and point it at the old download to avoid redownloading everything. But also, how often does this content actually change? If a game ISO/ROM is ripped/dumped correctly, isn’t that data kind of final? Why would the bit-perfect data need to change?
Yep