
  • What is the underlying filesystem of the proxmox hypervisor and how did you pass storage into the omv vm? Also, is anything else accessing this storage?

    I ask because...

    The "file lock ESTALE" error in the context of NFS indicates that the file lock has become "stale." This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.

  • My NPM has web sockets enabled and is blocking common exploits.

    Just checked syncthing and it's set to 0.0.0.0:8384 internally but that shouldn't matter if you changed the port.

    When Syncthing is set to listen on 0.0.0.0, it means it's listening on all available network interfaces on the device. This allows it to accept connections from any IP address on the network, rather than just the local interface. Essentially, it makes Syncthing accessible from any device within the network.

    Just make sure you open those firewall ports on the server syncthing is running on.

    Btw the Syncthing protocol utilizes port 22000 TCP and UDP. The UDP side uses a form of QUIC if you let it.

    So it's a good idea to allow udp and tcp on 22000 if you have a firewall configured on the syncthing server.
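
    If that server happens to use ufw, opening those ports looks roughly like this (the 21027 line is Syncthing's local discovery, optional on a server):

        sudo ufw allow 22000/tcp    # sync protocol over TCP
        sudo ufw allow 22000/udp    # sync protocol over QUIC
        sudo ufw allow 21027/udp    # local discovery broadcasts (optional)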

    Edit

    Wording for firewall ports and the purpose of 0.0.0.0

  • If you are somewhat comfortable with the cli you could install proxmox as zfs then create datasets off the pool to do whatever you want. If you wanted a nicer gui to manage zfs you could also install cockpit on the proxmox hypervisor directly along with the zfs plugin to manage the datasets and share them a bit easier. Obviously you could do all of that from the command line too.
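
    As a rough sketch of what that looks like from the cli (dataset names and properties are just examples):

        # create a couple of datasets off the default rpool
        zfs create rpool/media
        zfs create -o compression=zstd -o atime=off rpool/backups

        # check the layout
        zfs list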

    Personally I use proxmox now where before I made use of Debian. The only reason I switched was it made vm/lxc management easy. As for truenas it's also basically Debian with a different gui. These days I'm more focused on optimization in my home lab journey. I hope you enjoy the experience however you begin and whatever applications you start with.

  • Firewall setup and deciding on an entry point for system administration are big considerations.

    Generating a strong unique password helps immensely. A password manager can help with this.

    If this is hosting services, reducing open ports with something like Nginx Proxy Manager or equivalent helps. Tailscale and equivalents (WireGuard, wireguard-easy, Headscale, Netbird, and Netmaker) are also options.

    Getting https right. It's not such a big deal if all the services are internal. However, it's not hard to create an internal certificate authority and create certs for services.
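
    Something like this with openssl gives you a bare-bones internal CA and a cert for one service (names are placeholders, and a real setup would want SANs and a proper config):

        # create the CA
        openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
          -keyout ca.key -out ca.crt -subj "/CN=Homelab CA"

        # key + csr for a service, then sign it with the CA
        openssl req -newkey rsa:2048 -nodes -keyout svc.key \
          -out svc.csr -subj "/CN=service.lan"
        openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key \
          -CAcreateserial -days 825 -out svc.crt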

    If you have a server on a VPS, the firewall is again your primary defense. However, if you expose something like ssh, fail2ban can help ban IPs that make repeated attempts to log in to your system. This isn't some drop-in replacement for proper ssh configuration. You should be using key-based login and moving your ssh configuration away from password logins.
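
    For the ssh side, the relevant sshd_config lines and a minimal fail2ban jail look roughly like this (make sure your key login works before turning off passwords):

        # /etc/ssh/sshd_config
        PasswordAuthentication no
        PubkeyAuthentication yes
        PermitRootLogin prohibit-password

        # /etc/fail2ban/jail.local
        [sshd]
        enabled = true
        maxretry = 5
        bantime = 3600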

    It also helps, if you are using something like a proxy for services, to set up a filter list. NPM for example allows you to outright deny connection attempts from specific IP ranges. Or just deny everything and allow specific public IPs.
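
    Under the hood that is basically nginx allow/deny rules, so whether you use the NPM access list UI or a custom config snippet, it comes down to something like:

        # allow the LAN, drop everyone else
        allow 192.168.1.0/24;
        deny all;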

    Also, if you are using something like proxmox, remember to configure your services for least privilege. Basically the idea is just giving a service what it needs to operate and no more. This can encompass service user/group names for file access, etc.
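
    A small example of that idea outside of any proxmox specifics (the user and path are made up):

        # dedicated system user for one service, no shell, no home
        useradd --system --shell /usr/sbin/nologin --no-create-home svc-media

        # give it only its own data directory
        chown -R svc-media:svc-media /srv/media
        chmod -R 750 /srv/media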

    All these steps add up to pretty good security if you constantly assess.

    Even basic steps in here like turning on the firewall and only opening ports your services need help immensely.

  • I think I would get rid of that optical drive and install a converter for another drive like a 2.5 SATA. That way you could get an SSD for the OS and leave the bays for raid.

    Other than that, what you want to put on this beast and whether you want to utilize the hardware raid will determine the recommendations.

    For example if you are thinking of a file server with zfs, you need to disable the hardware raid completely by getting it to expose the disks directly to the operating system. Most would investigate if the raid controller could be flashed into IT mode for this. If not, some controllers do support just a simple JBOD mode, which would be better than utilizing the raid in a zfs configuration. ZFS likes to directly maintain the disks. You can generally tell it's correct if you can see all your disk serial numbers during setup.
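
    An easy sanity check once the controller is out of the way is whether the OS sees the raw disks with their serials (device name below is just an example):

        # serials show up in the by-id names when the disks are exposed directly
        ls -l /dev/disk/by-id/ | grep -v part

        # smartctl talking to a disk directly is another good sign
        smartctl -i /dev/sda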

    Now if you do want to utilize the raid controller and are interested in something like proxmox or just a simple Debian system, I have had great performance with XFS and hardware raid. You lose out on some advanced Copy on Write features, but if disk I/O is your focus consider it worth playing with.

    My personal recommendation is get rid of the optical drive and replace it with a 2.5" converter for more installation options. I would also recommend getting that ram maxed and possibly upgrading the network card to a 10Gb NIC if possible. It wouldn't hurt to investigate the power supply. The original may be a bit dated and you may find a more modern supply that is more energy efficient.

    General OS recommendation would be proxmox installed in zfs mode with an ashift of 12.

    (It's important to get this number right for performance because it can't be changed after creation. 12 for disks and most ssds. 13 for more modern ssds.)

    Only do zfs if you can bypass all the raid functions.

    I would install the rpool in a basic zfs mirror on a couple SSDs. When the system boots I would log into the web gui and create another zfs pool out of the spinners. Ashift 12. Now if this is mostly a pool for media storage I would make it a z2. If it is going to have vms on it I would make it a raid 10 style. Disk I/O is significantly improved for vms in a raid 10 style zfs pool.
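
    Rough sketch of those two layouts from the cli (device names are placeholders, in practice you'd use the /dev/disk/by-id paths, and the web gui does the equivalent):

        # media-style pool: single raidz2 vdev
        zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf

        # vm-style pool: striped mirrors ("raid 10")
        zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd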

    From here for a bit of easy zfs management I would install cockpit on top of the hypervisor with the zfs plugin. That should make it really easy to create, manage, and share zfs datasets.
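
    On the proxmox host that part is roughly just the following; the zfs plugin here is assumed to be the third party cockpit-zfs-manager project, which has its own install steps:

        apt install cockpit
        # the zfs plugin is a separate community project, copied into
        # /usr/share/cockpit/ per its own instructions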

    If you read this far and have considered a setup like this, one last warning: use the proxmox web UI for all the tasks you can. Do not utilize the cockpit web UI for much more than zfs management.

    Have fun creating lxcs and vms for all the services you could want.

  • I like to utilize nginx proxy manager alongside docker-ce and portainer-ce.

    This allows you to forward web traffic to a single internal NPM IP. As for setting up the service IPs, I like to utilize the gateway IPs that docker generates for each service.

    If you have docker running on the same internal IP as NPM you can directly configure the docker gateway ips for each service within the NPM web configuration.

    This dumps the associated traffic into the container network for another layer of isolation.
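
    Finding that gateway IP for a given docker network looks something like this (the network name is just an example):

        docker network inspect myapp_default \
          --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

        # then point the NPM proxy host at that gateway IP and the
        # container's published port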

    This is a bit of an advanced configuration but it works well for my environment.

    I would just love some support for quic within NPM.

  • Well specifically I'm referring to the internal hub on your system and how it shares port bandwidth. It doesn't really matter for things like a mouse or keyboard. However, when you are talking about something like permanent flash disks, it's worth investigating how the bandwidth is shared between ports. Specifically the switching back and forth between the storage devices. Some filesystems handle this better than others.

    I was also referring to a way I found that stabilizes the connection: a USB to SATA controller on a single port. That way that port tends to take advantage of all the bandwidth without switching around.

    Also keep in mind USB flash media is notorious for wear compared to something like nvme/msata disks.

    It's possible to combat writes on flash media by utilizing things like ram disks in Linux. Basically migrating write heavy locations like temp and logs to the ram disk. Though you need to consider that restarting wipes those locations because they are living in ram now. Some operating systems do this automatically like opnsense with a check mark.
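
    On Linux that usually comes down to tmpfs entries in /etc/fstab, something like this (sizes are arbitrary examples):

        # write-heavy locations held in ram, wiped on every reboot
        tmpfs  /tmp      tmpfs  defaults,noatime,size=512m  0  0
        tmpfs  /var/log  tmpfs  defaults,noatime,size=128m  0  0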

  • Just a little heads up about multiple USB drives. They kinda suck sharing on the hub and raids tend to destroy them because of the way they "share" bandwidth on the hub.

    To avoid this problem one solution is a USB-C to SATA enclosure. The idea being the enclosure has a SATA controller and a few SATA ports so you can plug in a few drives. You would be avoiding the multi USB port "sharing" issue. The enclosure would get all the usb hub bandwidth and the hub wouldn't be switching around between ports.

    I learned this little bit of info messing with zfs and a few different types of flash media. In the end the most stable connection less prone to error was a single USB connection. However, it didn't matter if it was a single drive or a multi drive enclosure.

    Today I wouldn't recommend doing this at all. However if you are going to. Have a look at how USB port sharing on a usb hub works and how that can wreck a raid system over time.

    Edit Spelling

  • Usually this comes down to resource and energy efficiency. While a vm works perfectly fine you will find you can share video and storage resources in efficient ways with lxc.

    For example you can directly pass a zfs dataset into a lxc with a simple lxc.mount.entry:
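
    Something along these lines in /etc/pve/lxc/<id>.conf (the dataset path and target directory are just examples):

        lxc.mount.entry: /rpool/data/media mnt/media none bind,create=dir 0 0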

    This would allow you to configure options like record size, atime, compression algorithm, xattr, etc. without much overhead.
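
    Tuning those properties happens on the dataset itself with zfs set (values here are only examples, not recommendations):

        zfs set recordsize=1M    rpool/data/media
        zfs set atime=off        rpool/data/media
        zfs set compression=zstd rpool/data/media
        zfs set xattr=sa         rpool/data/media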

    It's also nice to know you can share your GPU with multiple lxc without it being locked into a single vm.

  • Yup and negligible. If I'm forced to contend with a windows environment bitlocker is utilized.

    I also utilize a ram disk in a windows os. Imdisk in windows. I migrate temp files and logs into the ram disk. It saves on disk writes and increases privacy.

    It's pretty straightforward to encrypt if utilizing Linux right from install time.

    As for my server I too utilize nextcloud. However, the nextcloud data is on a zfs dataset. This dataset is encrypted.

    I did this by installing nextcloud from docker running within a proxmox container. That proxmox lxc container has the nextcloud dataset passed into it.
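
    Creating that kind of encrypted dataset looks roughly like this (pool and dataset names are placeholders):

        zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
          -o keylocation=prompt rpool/data/nextcloud

        # after a reboot the key has to be loaded before mounting
        zfs load-key rpool/data/nextcloud
        zfs mount rpool/data/nextcloud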