I might have a solution for you based on what I'm doing. I'm running OPNsense as my firewall as well. I have one NAT: Port Forward rule for torrents (I really am seeding Linux ISO torrents) and that's it. Any services I host outside the network are exposed through Cloudflare Tunnels, either from a cloudflared instance or from the LXC itself. This method fixed my issues with Plex outside my network, since I was able to turn off "Remote Access" and make it available to friends/family through a "Custom server access URL" (in the network settings; it looks like: https://plex.domain.url,http://192.168.1.xx:32400/). No messy NAT rules to complicate things.
I am also using Tailscale, but I don't terminate it on my firewall. I terminate Tailscale on another host inside my network; you could probably use an LXC container. It's a Debian system with Tailscale installed, IP forwarding enabled (https://tailscale.com/docs/features/subnet-routers), and set up as an exit node and subnet router. In OPNsense, I set up a gateway on the LAN interface pointing to my Debian Tailscale router node, then added routes in OPNsense pointing my family's remote networks at that gateway. Fortunately for me (and because I set them up), they all use different subnets.
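For reference, the router-node side boils down to a couple of commands. This is a sketch following Tailscale's subnet router docs; `192.168.1.0/24` is a placeholder for your actual LAN, and advertised routes still have to be approved in the Tailscale admin console:

```sh
# Enable IP forwarding so the Debian host can route for the tailnet
# (per https://tailscale.com/docs/features/subnet-routers)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the local LAN as a subnet route and offer exit-node service
# (substitute your own subnet)
sudo tailscale up --advertise-routes=192.168.1.0/24 --advertise-exit-node
```

After that, the OPNsense gateway/route entries just point the remote family subnets at this host's LAN IP.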
The other benefit of this method is that when I reach my services remotely, the traffic appears to the services on my network to be coming from the Tailscale router, so replies return there instead of trying to go out my firewall. Tailscale maintains the tunnel through the firewall, so the firewall really isn't a participant in the tailnet. The only real issue I've had has been DNS: MagicDNS wanted to answer queries instead of my internal DNS servers, and it kept messing things up, so I've disabled it. The way I fixed it was to put Tailscale on my AdGuard container and make its Tailscale IP the first DNS server for the tailnet, followed by the internal IP addresses of my DNS servers (192.168.x.x addresses). This has worked pretty well for me.
Please let me know if you want any follow-up info. I've been doing this for a long time; it's my main hobby (and directly related to my job).
Edit: My security sense just tingled at what I told you. I am the only user on my tailnet. If you use this method, you will want to configure the tailnet ACLs/grants to restrict access to only what you'd want another user to reach, rather than giving them full access inside your network. You can add each internal host they should reach as an ipset and then set restrictions per user inside the rules. I'll admit I had to use some AI to figure out some of the specific access-control syntax, but I understand it pretty well now.
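To give you a feel for it, here's a minimal sketch of a Tailscale policy file using the classic "acls" syntax (the newer "grants" syntax is similar). The hostnames, IPs, and the user email are all hypothetical placeholders:

```json
{
  "hosts": {
    "plex-server": "192.168.1.50",
    "nas": "192.168.1.60"
  },
  "acls": [
    {
      "action": "accept",
      "src": ["friend@example.com"],
      "dst": ["plex-server:32400"]
    },
    {
      "action": "accept",
      "src": ["autogroup:admin"],
      "dst": ["*:*"]
    }
  ]
}
```

The idea is that a guest user can only reach the one service you name, while your admin account keeps full access.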
This is what I'm doing as well. The nice thing about it is that it supports different-sized drives in the same mergerfs pool, and with SnapRAID you just need to make sure your parity drives are at least as large as your biggest data drive. I've got 10 drives right now with 78TB usable in the mergerfs mount and two 14TB drives acting as parity. I've been able to build it up slowly over the years.
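If it helps, a SnapRAID config for this kind of layout is pretty short. This is a sketch with example mount points, not my actual config:

```
# /etc/snapraid.conf -- paths are examples, adjust to your mounts

# Two parity files, one per parity drive (each drive >= largest data drive)
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# Content files track array state; keep copies on multiple disks
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# Data drives (these are also the branches of the mergerfs pool)
data d1 /mnt/disk1/
data d2 /mnt/disk2/

exclude *.tmp
exclude /lost+found/
```

Then a periodic `snapraid sync` (and occasional `snapraid scrub`) keeps parity up to date.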
It sounds to me like, for your specific use case, the Tailscale free tier would be a better match. You can self-host it if you'd like using Headscale (that involves a little more work, though). It's basically an orchestrator for WireGuard tunnels.
I'm running Tailscale on quite a few of my systems. I've configured grants (like advanced ACLs) so that only specific services are reachable from certain hosts, while other hosts can act as exit nodes, like a VPN egress. I've found it very useful for connecting family members' networks so I can help with remote troubleshooting, and I've used it to reach back into my own network while traveling.
Very few people actually change their SSID. The bigger point is that, considering sites like Wigle.net exist and the Google Street View cars were designed to capture SSID data (they hired the guy who made NetStumbler, a popular SSID-scanning tool from the early 2000s), it's trivial to pinpoint a location to within a few hundred feet with just a few SSIDs from an area. When your neighbor has an SSID like Comcast-12345 (i.e. a random string), there is probably only one place where your SSID and that Comcast one appear together. You can change your SSID every day, but your neighbors probably don't change theirs.
Tailscale would probably be easier for this. Install Tailscale on the server and, in the Tailscale admin console, configure the ACLs so that only that one service is reachable. I've used this method for SSH access to family members' devices.
I'm sure you could run the same setup using Headscale (a self-hosted Tailscale control server); it would require a bit more setup, though, and you'd probably need working dynamic DNS.
I know the process. There's also the option of attaching an ESXi datastore to a Proxmox system and importing the VMs that way. The PCIe passthrough makes it a little more complex, but not insurmountable. I've built the new Proxmox server on another host with 10GbE ports and am going to rebuild that way: export the config, modify what I need to, import the sections I need, and swap cables.
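For anyone following along, the datastore-attach route looks roughly like this on the Proxmox side. VM ID, paths, and storage name are made up for illustration; the datastore would be mounted (e.g. over NFS) beforehand:

```sh
# Create an empty VM shell, then import the ESXi disk into it.
# 110, the vmdk path, and 'local-lvm' are all placeholders.
qm create 110 --name migrated-vm --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0

# Convert and import the VMDK into Proxmox storage
qm importdisk 110 /mnt/esxi-datastore/migrated-vm/migrated-vm.vmdk local-lvm

# Attach the imported disk and make it bootable
qm set 110 --scsi0 local-lvm:vm-110-disk-0 --boot order=scsi0
```

PCIe passthrough devices have to be re-added to the new VM definition by hand afterward.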
Once I've swapped hardware, I'm putting Proxmox on the current VMware host, so I'll have a backup system in case of hardware failure.
I'm also doing some other things: replacing HAProxy with Caddy, maybe deploying a Grafana dashboard so I can start monitoring all 60+ services on my network, and configuring my network for IPv6.
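One reason Caddy is appealing as the HAProxy replacement: the config is tiny and it handles TLS certificates automatically. A sketch of a Caddyfile (hostnames and backend IPs are examples, not my real setup):

```
# Caddyfile -- each site block reverse-proxies one internal service;
# Caddy provisions and renews the TLS certs on its own.
plex.example.com {
    reverse_proxy 192.168.1.50:32400
}

grafana.example.com {
    reverse_proxy 192.168.1.70:3000
}
```

Compared to an HAProxy frontend/backend pair per service, that's the whole config.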
I know there are a lot of recommendations here, but I can provide some insight as someone who has been looking into this heavily for the past several months.
I will start by saying that the GL.iNet Flint 2 running OpenWrt is probably going to be your best option. It meets your price point and addresses your concerns. The Flint 3, an upgrade that just came out, could also be considered, but it's currently at $190. I have the Flint 2 running at my mom's house providing her network coverage. It's a nice all-in-one device, and I believe she's on a 500Mbps service.
Some of the other responses here mention OPNsense. That's what I'm running right now as a virtual machine, with TP-Link Omada access points for Wi-Fi coverage. OPNsense or pfSense might be a bit much to start with: they are good options, but they get advanced quickly and still require a separate way to provide Wi-Fi. I'd been looking hard at replacements lately but decided to stick with OPNsense (I just have to migrate it from VMware ESXi to Proxmox now).