SR-IOV works by presenting one physical device as many virtual functions (VFs), one of which you can pass through to your VM. That means SR-IOV only works via PCIe passthrough, so you'd have to figure that out first. The GPU passthrough guides should get you most of the way there.
Some distros include an ACS override patch in their kernel (e.g. Proxmox, and I think CachyOS), which lets you pass through devices the hardware can't properly isolate (at the cost of some of the IOMMU's security guarantees).
I believe it might also be possible to hand the VF to the guest from the host side without full PCIe passthrough (I've only done this with containers, though), but performance is often worse than just using a bridge.
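If you do want to go down that path, here's a minimal sketch of the standard sysfs interface for creating VFs; the PCI address 0000:01:00.0 is an assumption, substitute your own device:

    # Hedged sketch: enable 4 virtual functions on an SR-IOV capable device
    # via sysfs, then list the VF addresses you could pass through to a VM.
    # Needs root; the PF address 0000:01:00.0 is a placeholder.
    from pathlib import Path

    pf = Path("/sys/bus/pci/devices/0000:01:00.0")
    (pf / "sriov_numvfs").write_text("4")  # writing 0 removes the VFs again

    for link in sorted(pf.glob("virtfn*")):
        print(link.name, "->", link.resolve().name)  # e.g. virtfn0 -> 0000:01:10.0

Each VF address can then be handed to the VM the same way you'd pass through any other PCIe device.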
Probably an unpopular opinion, but I don't actually mind it. There's now a bunch of layers selectable on the radar page itself that were either nonexistent or hard to find on the old site. There's an easy-to-understand hourly forecast instead of the text-only one (which is still there), and I had no problem finding the 7-day forecast. Also, there's finally HTTPS by default!
Of course, if you don't like it, the old website still seems to work: https://reg.bom.gov.au/
I've always wondered: why do we put GPU drivers and their firmware into the initramfs? Can't we just rely on the framebuffer drivers until the root partition is mounted? Since most of the firmware size comes from GPUs, that would shrink the initramfs and speed up booting, as there's less to load into memory.
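If you want to check the "most of the firmware size is from GPUs" part on your own machine, here's a quick sketch that tallies /lib/firmware by top-level directory (the path is an assumption; some distros use /usr/lib/firmware):

    # Tally firmware sizes by top-level directory to see which
    # families (amdgpu, nvidia, i915, ...) dominate the total.
    from collections import Counter
    from pathlib import Path

    root = Path("/lib/firmware")  # or /usr/lib/firmware on some distros
    sizes = Counter()
    for f in root.rglob("*"):
        if f.is_file() and not f.is_symlink():  # skip symlinks to avoid double counting
            sizes[f.relative_to(root).parts[0]] += f.stat().st_size

    for name, size in sizes.most_common(10):
        print(f"{size / 2**20:8.1f} MiB  {name}")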
And I feel like it's not a good idea to have a modem attached directly to the PC unless you're using that PC as a router?
Yeah, I feel like this is the issue. The modem/router would be firewalling between the networks, hiding the PC behind it.
Also from the description, does OP have a router at all? Is their ISP somehow just allocating public IPs to everything? Do your IPs start with 192.168 or something else?
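If you're not sure, Python's ipaddress module knows the private (RFC 1918) ranges, so a quick check looks like:

    # A private address means you're behind NAT rather than
    # directly exposed with a public IP.
    import ipaddress

    for addr in ("192.168.1.5", "10.0.0.2", "8.8.8.8"):
        kind = "private" if ipaddress.ip_address(addr).is_private else "public"
        print(addr, kind)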
The EMC2101 is a slightly modified clone of the LM63, so if you connect it to your board's I2C bus and instantiate the lm63 driver at the right address, it should show up in lm-sensors like a normal PC fan. Or there are userspace Python drivers, if you don't need a kernel hwmon interface or can't get it to work.
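For the kernel route, here's a minimal sketch using the generic i2c new_device interface; the bus number and the 0x4c address are assumptions (0x4c is the usual EMC2101 address, but verify with i2cdetect):

    # Hedged sketch: instantiate the lm63 driver for an EMC2101 on bus
    # i2c-1 at address 0x4c. Needs root; check "i2cdetect -l" for the
    # right bus and "i2cdetect -y <bus>" for the actual address.
    from pathlib import Path

    bus = Path("/sys/bus/i2c/devices/i2c-1")
    (bus / "new_device").write_text("lm63 0x4c\n")
    # After this, "sensors" should list the chip. To undo:
    # (bus / "delete_device").write_text("0x4c\n")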