
Posts: 0 · Comments: 42 · Joined: 3 yr. ago

  • Why are AI agents on the org chart? That's odd and sketchy. Seems like it could be some sort of fraud to pad numbers.

  • Yea, after reading the article, this is an overhaul of the electronic application process that needs to happen before entry. And it'll include not just social media handles, but also email addresses. Seems reasonably easy for a "bad guy" to skirt.

  • Honestly, I suspect this is a sneaky way to get CBP access to whatever data sharing shit the social media companies have with the rest of the spooks. Simply by attempting to enter the US, someone "agrees" to an automatic search of their social data.

  • It's probably because entry for Canadians is specified by a different program. Even the State Department website seems to exclude Canada from the VWP.

  • If you were to actually read the substack the original author wrote, the reasoning is well justified. The original poverty calculation was based on the cost of food as a percentage of the income of a family fully participating in society. The author explains, though, that food is now a much smaller portion of our daily expenses, and that the cost of fully participating in society includes significantly more expenses. So, if we still use food as the baseline but re-evaluate its share of expenses, the new poverty line comes out to about $130k. The author also validates this against national average expenses: for a family fully participating in society with no government support, it is indeed around that range. But you know, continue being snarky.
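    The multiplier logic can be made concrete with a back-of-the-envelope sketch; the dollar figures below are illustrative assumptions, not the substack author's exact numbers:

    ```python
    # Illustrative arithmetic only; the food-budget figure is an assumption.

    # Original 1960s methodology: food was roughly 1/3 of a family budget,
    # so the poverty line was the minimal food budget times 3.
    food_share_1963 = 1 / 3
    multiplier_1963 = 1 / food_share_1963   # -> 3.0

    # If food is now only ~10% of the cost of fully participating in
    # society, the same logic implies a much larger multiplier.
    food_share_now = 0.10
    multiplier_now = 1 / food_share_now     # -> 10.0

    annual_food_budget = 13_000             # assumed family food spend, USD/yr
    updated_line = annual_food_budget * multiplier_now
    print(round(updated_line))              # 130000
    ```

    Same method, updated share of spending: that's where a figure around $130k comes from.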

  • You're going to a lot of effort to not actually mention what this thing is, which makes me wonder what it is and I suspect knowing that would provide additional and useful context.

  • I work at an Infrastructure Cloud company. I design and implement API and Database schemas, I plan out backend workflows and then implement the code to perform the incremental steps of each workflow. That's lots of code, and a little openapi and other documentation. I dig into bugs or other incidents. That's spent deep in Linux and Kubernetes environments. I hopefully build monitors or dashboards for better visibility into issues. That's spent clicking around observability tooling, and then exporting things I want to keep into our gitops repo. Occasionally, I'll update our internal WebUI for a new feature that needs to be exposed to internal users. That's react and CSS coding. Our external facing UI and API is handled by a dedicated team.

    When it comes to learning, I'd say find a problem you have and try to build something to improve it. Building a home lab is a great way to give yourself lots of problems. Ultimately, it's about being goal oriented in a way where your goal isn't just "finish this class".

  • This is because there isn't a job shortage. It's offshoring. The company I (thankfully willingly) left 2 years ago has shifted all of their software hiring to Europe. And since I left has had multiple US focused layoffs. All while the Euro listings keep popping up. And I get it, the cost of living is much lower and the skill set is equivalent. So yea, get your bank. But, this is companies exploiting Europe/Asia, rather than it being something Europe/Asia is immune to.

  • Yea, it's the combo of the chiller and cooling tower that is analogous to a swamp cooler. The cooling tower provides the evaporative cooling. The difference is that rather than directly cooling the environment around the cooling tower, the chiller allows indirect cooling of the DC via heat exchange. An isolated chiller providing the heat exchange is why humidity inside the DC isn't affected by the evaporative cooling. And sure, humidity is different between hot and cold aisles, but that's just a function of temperature and relative humidity. No moisture is exchanged into the DC to cool it.

    Edit: Turns out I'm a bit misinformed. Apparently in dry environments that can deal with the added moisture, DCs are built that indeed use simple direct evaporative cooling.

  • Practically all even semi-modern DCs are built for servers themselves to be air cooled. The air itself is cooled via a heat exchanger with a separate and isolated chiller and cooling tower. The isolated chiller is essentially the swamp cooler, but it's isolated from the servers.

    There are cases where servers are directly liquid cooled, but it's mostly just the recent Nvidia GPUs and niche things like high-frequency-trading and crypto ASICs.

    All this said... For the longest time I water cooled my home lab's compute server because I thought it was necessary to reduce noise. But, with proper airflow and a good tower cooler, you can get basically just as quiet. All without the maintenance and risk of water, pumps, tubing, etc.

  • I'd take a look at packer and ansible. Packer can be used to prepare a new base image for your VMs. And ansible can be used to automate the provisioning of a VM once it's booted.
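    As a rough illustration (the host group, packages, and user below are placeholders, not from any real setup), a minimal Ansible playbook to provision a freshly booted VM might look like:

    ```yaml
    # Hypothetical minimal playbook for a new VM; names are placeholders.
    - hosts: new_vms
      become: true
      tasks:
        - name: Install baseline packages
          ansible.builtin.apt:
            name: [qemu-guest-agent, curl, vim]
            state: present
            update_cache: true

        - name: Create an admin user in the sudo group
          ansible.builtin.user:
            name: ops
            groups: sudo
            append: true
    ```

    Packer would handle everything baked into the image itself; Ansible picks up from first boot.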

  • Slightly educated guess: true organic cork is produced by cutting the bark off specific trees, and those trees grow in only a limited range of climates. I would guess the scale at which we produce bottled drinks would require significantly more trees and labor than we currently have, and thus cork prices would skyrocket.

  • If you're considering video transcoding, I'd give Intel a look. Quicksync is pretty well supported across all of the media platforms. I do think Jellyfin is on a much more modern ffmpeg than Plex, and it actually supports AMD. But, I don't have any experience with that... Only Nvidia and Intel. You really don't need a powerful CPU either. I've got my Plex server on a little i5 NUC, and it can do 4k transcodes no problem.

  • You really don't need an AIO with a 5600X. Just grab a reasonably sized tower cooler and call it a day. There's less to fail, and less risk of water damage if it fails catastrophically. I've found thermalright to be exceptionally good for how well priced they are. Not as quiet as Noctua, but damn near the same cooling performance.

    Another thing to consider is that a 5600X doesn't have built in graphics. I think you'd need to jump up to AM5/7600X for that.

  • A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses to directory listings and file reads.
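    A toy sketch of that idea (my own reconstruction, not the coworker's code): the LLM call is a deterministic stub so it runs anywhere, and a real version would subclass `fuse.Operations` from the third-party fusepy package and actually mount it.

    ```python
    def fake_llm(prompt: str) -> str:
        """Stand-in for a real LLM call; returns canned 'hallucinated' output."""
        if prompt.startswith("List files"):
            return "readme.txt\nnotes.md"
        return f"Contents imagined for: {prompt}"

    class HallucinatedFS:
        def readdir(self, path: str):
            # Ask the model to invent a plausible listing for this directory.
            names = fake_llm(f"List files in {path}").splitlines()
            return [".", ".."] + names

        def read(self, path: str, size: int, offset: int) -> bytes:
            # Ask the model to invent the file's contents, then slice like read(2).
            data = fake_llm(f"Write the file at {path}").encode()
            return data[offset:offset + size]

    fs = HallucinatedFS()
    print(fs.readdir("/"))   # ['.', '..', 'readme.txt', 'notes.md']
    print(fs.read("/readme.txt", 5, 0))
    ```

    Every read "succeeds" and every file "exists", which is exactly the joke.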

  • Honestly, I don't mind them adding ads. They've got a business to support. But, calling them "quests" and treating them as "rewards" for their users is just so tone-deaf and disingenuous. Likewise, if I've boosted even a single server, I shouldn't see this crap anywhere, let alone on the server I've boosted.

  • In the US, salaried engineers are exempt from overtime pay regulations. He is telling them to work 20 extra hours, with no extra pay.

  • Commentary from someone quite trusted in the historical gun community and who's actually shot multiple Welrods/VP9s: https://www.youtube.com/shorts/POubd0SoCQ8

    It's not a VP9. Even at the very start of the video, on the first shot before the shooter even manually cycles the gun, gas is ejected backwards out of the action rather than forward out of the suppressor.

  • In general, on bare-metal, I mount below /mnt. For a long time, I just mounted data in from mounts pre-configured on the host. But I use Kubernetes, and there you can directly specify an NFS mount, so I eventually migrated everything to that as I made other updates. I don't think it's horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that's one less thing to set up if you need to re-provision your docker host.

    (quick edit) I don't think docker compose continuously re-reads compose files. They're read when you invoke docker compose, but that's it. So...

    If you're simply invoking docker compose to interact with things, then I'd say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that to your docker host(s). I would also consider version controlling your compose files. If you're concerned about secrets, store them in encrypted env files; something like SOPS can help with this.

    As long as the user invoking docker compose can read the compose files, you're good. When it comes to mounting data into containers from NFS... yes, permissions will matter, and it can be a pain depending on how flexible the container you're using is in terms of user and filesystem permissions.
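    For reference, compose does let you define an NFS-backed named volume via the local driver; a sketch with a placeholder server address and export path:

    ```yaml
    # Sketch only: the NFS server address, export path, and service are
    # placeholders for whatever your setup actually uses.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - media:/media

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.50,nfsvers=4,rw"
          device: ":/export/media"
    ```

    With this, the mount travels with the compose file, so re-provisioning the docker host doesn't require re-creating host-side mounts first.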