Posts 0 · Comments 69 · Joined 2 yr. ago

  • It's definitely encrypted; they can just tell by the signature that it is WireGuard or whatever and block it.

    They could do the same with SSH if they felt like it.

  • Usually a reverse proxy runs behind the firewall/router. The idea is that you point 80/443 at the proxy with port forwarding once traffic hits your router.

    So if someone goes to service.domain.com

    You would have dynamic DNS keeping domain.com pointed at your router's IP.

    You would tell domain.com that service.domain.com exists as a CNAME or an A record. You could also say *.domain.com is a CNAME. That would point any hostname to your router.
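
    In zone-file terms that might look something like this (the IP and names are placeholders):

        ; 203.0.113.10 stands in for your router's/VPS's public IP
        domain.com.           A      203.0.113.10   ; kept current by dynamic DNS
        service.domain.com.   CNAME  domain.com.
        *.domain.com.         CNAME  domain.com.    ; wildcard: any hostname resolves to the router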

    From here, in the proxy, you would say service.domain.com points to your service's IP and port. Usually that would be on the LAN, but in your case it would be through a tunnel.

    It is possible and probably more resource efficient to just put the proxy on the VPS and point your public domain traffic directly at the VPS IP.

    So on the domain you could say service.domain.com points to the VPS IP as an A record, and service2.domain.com points to the VPS IP as another A record.

    You would allow 80/443 on the VPS and create entries for the services.

    Those would look like service.domain.com pointing to localhost:port.
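
    As a rough sketch in plain nginx (the hostname and backend port are placeholders; Nginx Proxy Manager does the same thing through its web form):

        # /etc/nginx/conf.d/service.conf -- hypothetical entry for one service
        server {
            listen 80;
            server_name service.domain.com;

            location / {
                proxy_pass http://127.0.0.1:8080;   # the service listening on the VPS
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }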

    In your particular case I would just run the proxy on the public VPS the services are already on.

    Don't forget you can enable HTTPS certificates once you have them running. You can secure the management interface on its own service3.domain.com with the proxy if you need to.

    And OP, consider some blocklists for your VPS firewall, like Spamhaus. It wouldn't hurt to set up fail2ban either.
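
    A minimal fail2ban sketch, assuming the bundled sshd jail is the one you want to start with:

        # /etc/fail2ban/jail.local -- enable the stock sshd jail
        [sshd]
        enabled = true
        maxretry = 5
        # ban time in seconds
        bantime = 3600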

  • You can do that, or you can use a reverse proxy to expose your services without opening ports for every service. With a reverse proxy you would point ports 80 and 443 to the reverse proxy once traffic hits your router/firewall. In the reverse proxy you would configure hostnames that point to the local service IPs/ports. Reverse proxy servers like Nginx Proxy Manager then allow you to set up HTTPS certificates for every service you expose. They also allow you to disable access to any of them through a single interface.

    I do this and have set up some blocklists on the OPNsense firewall. Specifically, you could set up the Spamhaus blocklists to drop any traffic that originates from those IPs. You can also use the Emerging Threats blocklist; it has Spamhaus and a few more integrated from DShield etc. These can be made into simple firewall rules.
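
    Outside of OPNsense, the same idea on a plain Linux box would look roughly like this (the DROP list URL and set name are assumptions; OPNsense handles this through aliases in the GUI instead):

        # drop traffic from the Spamhaus DROP list using ipset + iptables
        ipset create spamhaus-drop hash:net
        curl -s https://www.spamhaus.org/drop/drop.txt \
          | awk '/^[0-9]/{print $1}' \
          | while read -r net; do ipset add spamhaus-drop "$net"; done
        iptables -I INPUT -m set --match-set spamhaus-drop src -j DROP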

    If you want to block entire countries' IP ranges you can set up the GeoIP blocklist in OPNsense. This requires a MaxMind account but allows you to pick and choose countries.

    You can also set up the Suricata IPS in OPNsense to block detected traffic using daily-updated lists. It's a bit more resource intensive than regular firewall rules but also far more advanced at detecting threats.

    I use both the firewall lists and the IPS, scanning both the WAN and LAN in promiscuous mode. This heavily defends your network in ways most modern networks can't even take advantage of.

    If you want even more security you can set up Unbound with DNS over TLS. You could even set up OpenVPN and route all your internal traffic through that to a VPN provider. Personally I prefer having individual systems connect to a VPN service.
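
    For the Unbound piece, a minimal sketch of DNS over TLS forwarding (the upstream resolvers are just examples):

        # unbound.conf -- forward all queries upstream over TLS
        server:
            tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

        forward-zone:
            name: "."
            forward-tls-upstream: yes
            forward-addr: 9.9.9.9@853#dns.quad9.net
            forward-addr: 1.1.1.1@853#cloudflare-dns.com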

    Anyway, all this to say: no, you don't need a static IP from a VPN. You may prefer instead a domain name you can point at your systems. If you're worried about privacy here, look for providers that accept crypto and don't care about identity. This is true for VPN providers as well.

  • This is a journey that will likely fill you with knowledge. During that process what you consider "easy" will change.

    So the answer right now for you is use what is interesting to you.

    Yes, there are plenty of ways to do the same thing. IMO though, right now, just jump in and install something. Then play with it.

    Just remember modern CPUs can host many services from a single box. How they do that can vary.

  • Probably these directories...

    /tmp /var/tmp /var/log

    Two of those are easy to migrate to tmpfs if you are trying to reduce disk writes. Logs can be a little tricky because of the permissions. It is worth getting it right if you are concerned about all those little writes on an SSD, especially if you have plenty of memory.
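
    A minimal sketch of the fstab side (sizes are just examples; /var/log needs more care so services can still write their logs after a reboot):

        # /etc/fstab -- keep the scratch directories in RAM
        tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=1G    0  0
        tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=512M  0  0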

    This is filesystem agnostic btw so the procedure can apply to other filesystems on Linux operating systems.

  • I have to admit I was doing the same but with the Greek versions. Though I liked to throw in hydras and the like.

  • That's somewhat true. However, the hardware support in BSD, especially around video, has been blah. If you are interested in playing with ZFS on Linux I would recommend Proxmox. That particular OS is one of the few that allows you to install onto a ZFS rpool from the installer. Proxmox is basically Debian with a kernel that's been modified a bit more for virtualization. One of the mods made was including ZFS support in the installer.

    Depending on what you get, if you go the Prox route you could still install BSD in a VM and play with the filesystem. You may even find some other methods to get Jellyfin the way you like it with LXC, a VM, or Docker.

    I started out on various operating systems and settled on Debian for a long time. The only reason I use Prox is that the web interface is nice for management, plus the native ZFS support. I change things from time to time and snapshots have saved me from myself.

  • Hardware support can be a bit of an issue with BSD in my experience. But if you're asking about hardware, it doesn't take as much as you may think for Jellyfin.

    It can transcode just fine with Intel Quick Sync.

    So basically any modern Intel CPU, or slightly older.

    What you need to consider more is storage space for your system and if your system will do more than just Jellyfin.

    I would recommend a barebones server from Supermicro. Something you could throw a few SSDs in.

    If you are not too stuck on BSD, maybe have a look at Debian or Proxmox. Either way I would recommend docker-ce, mostly because this particular Jellyfin image is very well maintained:

    https://fleet.linuxserver.io/image?name=linuxserver/jellyfin
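
    A minimal compose sketch for that image (the paths, PUID/PGID, and the /dev/dri passthrough for Quick Sync are assumptions you would adjust):

        # docker-compose.yml
        services:
          jellyfin:
            image: lscr.io/linuxserver/jellyfin:latest
            container_name: jellyfin
            environment:
              - PUID=1000
              - PGID=1000
              - TZ=Etc/UTC
            volumes:
              - ./config:/config
              - /path/to/media:/data/media
            devices:
              - /dev/dri:/dev/dri    # Intel Quick Sync hardware transcoding
            ports:
              - 8096:8096
            restart: unless-stopped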

  • So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring to whether you just kept the defaults during installation, which would be LVM/ext4 as the Proxmox filesystem, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough. It is slightly virtualized, but it hands the entire storage of the device to the VM. The only true passthrough without that slight virtualization would be PCI passthrough utilizing IOMMU.

    I have some experience with this specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn't import their pool into another system because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn't directly managing the disks; it was managing virtual disks.

    Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, XFS, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.

    If you are worried about disk IO, you may consider letting the hypervisor manage these disks and storage a bit more directly, removing some of the filesystem layers.

    I would recommend just making a single ZFS pool from these disks within Proxmox to do this. Obviously that is a pretty big transition on a production system. Another option would be creating a btrfs RAID from these disks within Proxmox and adding that mountpoint as storage to the hypervisor.
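
    A rough sketch of the ZFS route (the pool name, layout, and device paths are placeholders, and this wipes whatever is on the disks):

        # create a raidz1 pool on the hypervisor from three whole disks
        zpool create -o ashift=12 tank raidz1 \
          /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
        zpool status tank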

    Personally I use ZFS, but btrfs works well enough. Regardless, this would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk IO much more efficiently.

    As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the cause can vary but is usually a loss of network connectivity or an inability to lock something that is in use.

    My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


    Repost to OP, as OP claims his comments are being purged.

  • Well OP, look at it this way...

    A single 50 MB nginx Docker image can be used multiple times for multiple Docker containers.
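
    For example (the names and ports are arbitrary), every container below reuses the same image layers on disk:

        docker pull nginx:alpine
        docker run -d --name web1 -p 8081:80 nginx:alpine
        docker run -d --name web2 -p 8082:80 nginx:alpine
        docker images nginx:alpine    # still just the one image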

  • Most filesystems, as mentioned in the guide, that exist within qcow2 images, zvols, or even raw images living on a ZFS dataset would benefit from a ZFS recordsize of 64K. By default the recordsize will be 128K.

    I would never utilize 1M for any dataset that had VM disks inside it.

    I would create a new dataset for media off the pool and set a recordsize of 1M. You can only really get away with this if you have media files directly inside this dataset. So pics, music, videos.

    The cool thing is you can set these options on an individual dataset basis, so one dataset can have one recordsize and another dataset can have another.
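
    A quick sketch with hypothetical dataset names:

        # recordsize is a per-dataset property (zvols use volblocksize instead)
        zfs create -o recordsize=64K tank/vms      # for qcow2/raw VM disk images
        zfs create -o recordsize=1M  tank/media    # for large sequential media files
        zfs get recordsize tank/vms tank/media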

  • It looks like you are using legacy BIOS. Mine is using UEFI with a ZFS rpool:

        proxmox-boot-tool status
        Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
        System currently booted with uefi
        31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)

    However, like with everything, a method always exists to get it done. Or not, if you are concerned.

    If you are interested it would look like...

    Pool Upgrade

        sudo zpool upgrade <pool_name>

    Confirm Upgrade

        sudo zpool status

    Refresh boot config

        sudo proxmox-boot-tool refresh

    Confirm Boot configuration

        cat /boot/grub/grub.cfg

    You are looking for directives like this to see if they are indeed pointing at your existing rpool:

        root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet

    Here is my file if it helps you compare...

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #
    
    ### BEGIN /etc/grub.d/000_proxmox_boot_header ###
    #
    # This system is booted via proxmox-boot-tool! The grub-config used when
    # booting from the disks configured with proxmox-boot-tool resides on the vfat
    # partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
    # /boot/grub/grub.cfg is NOT read when booting from those disk!
    ### END /etc/grub.d/000_proxmox_boot_header ###
    
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    if [ "${next_entry}" ] ; then
       set default="${next_entry}"
       set next_entry=
       save_env next_entry
       set boot_once=true
    else
       set default="0"
    fi
    
    if [ x"${feature_menuentry_id}" = xy ]; then
      menuentry_id_option="--id"
    else
      menuentry_id_option=""
    fi
    
    export menuentry_id_option
    
    if [ "${prev_saved_entry}" ]; then
      set saved_entry="${prev_saved_entry}"
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    
    function savedefault {
      if [ -z "${boot_once}" ]; then
        saved_entry="${chosen}"
        save_env saved_entry
      fi
    }
    function load_video {
      if [ x$feature_all_video_module = xy ]; then
        insmod all_video
      else
        insmod efi_gop
        insmod efi_uga
        insmod ieee1275_fb
        insmod vbe
        insmod vga
        insmod video_bochs
        insmod video_cirrus
      fi
    }
    
    if loadfont unicode ; then
      set gfxmode=auto
      load_video
      insmod gfxterm
      set locale_dir=$prefix/locale
      set lang=en_US
      insmod gettext
    fi
    terminal_output gfxterm
    if [ "${recordfail}" = 1 ] ; then
      set timeout=30
    else
      if [ x$feature_timeout_style = xy ] ; then
        set timeout_style=menu
        set timeout=5
      # Fallback normal timeout code in case the timeout_style feature is
      # unavailable.
      else
        set timeout=5
      fi
    fi
    ### END /etc/grub.d/00_header ###
    
    ### BEGIN /etc/grub.d/05_debian_theme ###
    set menu_color_normal=cyan/blue
    set menu_color_highlight=white/blue
    ### END /etc/grub.d/05_debian_theme ###
    
    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
            set gfxpayload="${1}"
    }
    set linux_gfx_mode=
    export linux_gfx_mode
    menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
            load_video
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_gpt
            echo    'Loading Linux 6.5.13-5-pve ...'
            linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
            echo    'Loading initial ramdisk ...'
            initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
    }
    submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
    }
    
    ### END /etc/grub.d/10_linux ###
    
    ### BEGIN /etc/grub.d/20_linux_xen ###
    
    ### END /etc/grub.d/20_linux_xen ###
    
    ### BEGIN /etc/grub.d/20_memtest86+ ###
    ### END /etc/grub.d/20_memtest86+ ###
    
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    
    ### BEGIN /etc/grub.d/30_uefi-firmware ###
    menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
            fwsetup
    }
    ### END /etc/grub.d/30_uefi-firmware ###
    
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries.  Simply type the
    # menu entries you want to add after this comment.  Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f  ${config_directory}/custom.cfg ]; then
      source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
      source $prefix/custom.cfg
    fi
    ### END /etc/grub.d/41_custom ###

    You can see those lines in the linux menu entries.

  • You are very welcome :)

  • Keep in mind it's more an issue with writes, as others mentioned, when it comes to SSDs. I use two SSDs in a ZFS mirror that I installed Proxmox directly on. It's an option in the installer and it's quite nice.

    As for combating writes, that's actually easier than you think and applies to any filesystem. It just takes knowing what is write intensive. Most of the time for a Linux OS like Proxmox that's going to be temp files and logs, both of which can easily be migrated to tmpfs. Doing this will increase the lifespan of any SSD dramatically. You just have to understand that restarting clears those locations, because they now exist in RAM.

    As I mentioned elsewhere, OPNsense has an option within the GUI to migrate temp files to memory.

  • I'm specifically referencing this little bit of info for optimizing ZFS for various situations.

    VMs, for example, should exist in their own dataset with a tuned recordsize of 64K.

    Media should exist in its own dataset with a tuned recordsize of 1M.

    lz4 is quick and should always be enabled. It will also work efficiently with larger record sizes.

    Anyway, all the little things add up with ZFS. When you have an underlying ZFS you can get away with simpler, more performant filesystems on zvols or qcow2. XFS, UFS, and ext4 all work well with 64K recordsizes on the underlying ZFS dataset/zvol.

    Btw, it doesn't change existing data immediately if you just change the option on a dataset. You have to move the data out and then back in for it to pick up the new recordsize.
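
    For example (hypothetical dataset names), something like this only affects blocks written after the change:

        zfs set recordsize=64K tank/vms
        zfs set recordsize=1M tank/media
        # lz4 pairs fine with either recordsize
        zfs set compression=lz4 tank/vms tank/media
        # existing files keep their old block size until rewritten,
        # e.g. copied out and back in, or zfs send/recv'd into a fresh dataset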

  • Upgrading a ZFS pool itself shouldn't make a system unbootable even if an rpool (root pool) exists on it.

    That could only happen if the upgrade took a shit during a power outage or something like that. The upgrade itself usually only takes a few seconds from the command line.

    If it makes you feel better, I upgraded mine with an rpool on it and it was painless. I do have everything backed up though, so I rarely worry. However, I understand being hesitant.

  • It looks like you could also do a zpool upgrade. This will just upgrade your legacy pools to the newer ZFS version. That command is fairly simple to run from the terminal if you are already examining the pool.

    Edit

    Btw, if you have run PVE updates it may be expecting some newer ZFS feature flags for your pool. A pool upgrade may resolve the issue by enabling the new features.

  • Out of curiosity, what filesystem did you choose for your OPNsense VM? Also, can you tell if it's a zvol, qcow2, or raw disk? FYI, if it's a qcow2 or a raw, both would benefit from a recordsize of 64K if they exist in a VM dataset. If it's a zvol, 64K can still help.

    I also utilize a heavily optimized setup running OPNsense within Proxmox. My VM filesystem is UFS because it's on top of Proxmox's ZFS. You can always find some settings in your OPNsense VM to migrate log files to tmpfs, which places them in memory. That will heavily reduce disk writes from OPNsense.

  • Hmm. If you are going to have proxmox managing zfs anyway then why not just create datasets and share them directly from the hypervisor?

    You can do that in the terminal, but if you prefer a GUI you can install Cockpit on the hypervisor with the ZFS plugin. It would create a separate web GUI on another port, making it easy to create, manage, and share datasets as you desire.

    It will save resources and simplify ZFS management operations if you are interested in such a method.
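
    The terminal route is pretty short. A minimal sketch with hypothetical pool/dataset names (an NFS export also needs nfs-kernel-server installed on the host):

        zfs create -o compression=lz4 rpool/shares/media
        zfs set sharenfs=on rpool/shares/media     # finer-grained export options exist, see zfsprops(7)
        zfs get sharenfs rpool/shares/media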