🌨️ 💻
Comments like this are why I come to Red…Lemmy.
Imagine never hearing the word “No.” as a complete sentence ever again in your life.
Mine does, but I need an elaborate system of organic strings to make it work and an overpowered processor.
This feels like opening the Overton window.
llama?
But how will they know what movies to watch or what’s the latest in fashion?
Can’t even get HL3 on an alternate timeline.
I need to compile my kernel… by hand with tools from beige-age computing.
Hey Ralph, can you get that post-it from the bottom of your keyboard?
there are other kinds of hackers?
I’m in this photo and I don’t like it.
I dunno, I RMA’d my Nomad so many times.
If budget is no object, it’s only kind of a pain in the ass with Nvidia’s vGPU solutions for data centers. Even with $10k spent, there are hypervisor compatibility issues, license servers, compatibility challenges with drivers for games/consumer OSes on hypervisors, and other inane garbage.
Consumer-wise, it’s technically the easiest it’s ever been, with SR-IOV support for hardware-accelerating VMs on Intel 13th & 14th gen procs with iGPUs; however, iGPU performance is kinda dogshit, drivers are wonky, and passing multiple display heads through to VMs is weird for hypervisors.
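For anyone curious what the SR-IOV side looks like in practice, here’s a minimal sketch of carving an iGPU into virtual functions via sysfs. Assumptions: you’re on an i915 build that actually exposes SR-IOV (e.g. the out-of-tree i915-sriov-dkms driver), the iGPU is card0, and 4 VFs is just a placeholder count; not a definitive setup guide.

```python
# Sketch: create SR-IOV virtual functions on an Intel iGPU via the standard
# PCI sysfs attributes, so each VF can be passed through to a VM as its own
# PCI device. Adjust the card path and VF count for your system.
from pathlib import Path

CARD = Path("/sys/class/drm/card0/device")  # assumption: iGPU is card0

def enable_vfs(num_vfs: int) -> None:
    # sriov_totalvfs reports how many VFs the device/driver will allow
    total = int((CARD / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"device only supports {total} VFs")
    # writing the count to sriov_numvfs instantiates the virtual functions
    (CARD / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs(4)  # placeholder count
```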
On the Docker side of things, YMMV based on what you’re trying to accomplish. Technically the NVIDIA Container Toolkit does support CUDA & display heads for containers: https://hub.docker.com/r/nvidia/vulkan/tags. I haven’t gotten it working yet, but this is the basis for my next set of experiments.
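As a rough illustration of the container-toolkit route, here’s a minimal sketch of requesting GPU access from Docker (the equivalent of `--gpus all`) using the docker Python SDK. Assumptions: the NVIDIA Container Toolkit is installed and configured, the image tag is just an example, and the capability list is what you’d extend with "graphics"/"display" for the nvidia/vulkan images linked above.

```python
# Sketch: run a container with GPU access via the NVIDIA Container Toolkit,
# using the docker Python SDK (pip install docker).
import docker
from docker.types import DeviceRequest

client = docker.from_env()

# count=-1 means "all GPUs"; the capabilities list maps to
# NVIDIA_DRIVER_CAPABILITIES (add "graphics"/"display" for Vulkan/display heads).
output = client.containers.run(
    "nvidia/cuda:12.2.0-base-ubuntu22.04",  # example image tag
    "nvidia-smi",
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu", "compute", "utility"]])],
    remove=True,
)
print(output.decode())
```

If nvidia-smi prints the GPU table from inside the container, the runtime plumbing works; getting a display head out is the separate (and flakier) part.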
You don’t compile all your packages from source, do you?
fix it again op