Posts 1 · Comments 57 · Joined 2 yr. ago

  • More like he buys a powerball ticket in his country and the numbers win the equivalent prize in the lucky guy's country

  • I am running proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad because it is a decent product

  • Just came here to say this, it works on a 10 dollar a year racknerd vps for me no problem. Matrix chugs on my much bigger vps, although it is sharing that with a bunch of other things; overall it should have much more resources.

  • Those are puny mortal numbers.... my backup nas is more than twice that.......

  • I use rss-bridge for the popular stuff but I've found rss-funnel (https://github.com/shouya/rss-funnel) to be nicer for creating my own scrapes (mostly taking rss feeds that link to the website instead of the article and adding a link to the article mentioned on the website).

  • Pretty sure that title is firmly held by McAfee, even now.

  • Pretty much this. I don't even bother with watchtower anymore. I just run this script from cron, pointed at the directory where I keep my directories of active docker containers and their compose files:

    #!/bin/sh
    # pull and recreate every compose stack under ~/stacks
    for d in /home/USERNAME/stacks/*/; do
      (cd "$d" && docker compose pull && docker compose up -d --force-recreate)
    done

    # same for the dockge directory itself
    for e in /home/USERNAME/dockge/; do
      (cd "$e" && docker compose pull && docker compose up -d --force-recreate)
    done

    # clean up superseded images afterwards
    docker image prune -a -f
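
    For reference, a hypothetical crontab entry to run it nightly; the script path and log file are placeholders, not something from the original setup:

    # run the stack update script every night at 04:00 (paths are placeholders)
    0 4 * * * /home/USERNAME/bin/update-stacks.sh >> /home/USERNAME/update-stacks.log 2>&1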

  • Does yours have 8 sata ports or dual external SFF-8088 ports by chance, and more ram?

  • Never saw that on wireguard once I found the better connections for my location, weird

  • Because if you use relative bind mounts you can move a whole docker compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d (sketch below).

    Portability and backup are dead simple.
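
    A minimal sketch of that move, assuming the stack lives in ~/stacks/myapp on the old host and the new host is reachable over ssh as newhost (both names are placeholders):

    # on the old host: stop the stack so the bind-mounted data is quiescent
    cd ~/stacks/myapp && docker compose stop
    # copy the whole directory (compose file plus relative bind mounts) to the new host
    rsync -a ~/stacks/myapp/ newhost:~/stacks/myapp/
    # on the new host: bring the stack back up from the copied directory
    ssh newhost 'cd ~/stacks/myapp && docker compose up -d'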

  • you need to create a docker-compose.yml file. I tend to put everything in one dir per container, so I just have to move the dir somewhere else if I want to move that container to a different machine. Here's an example I use for picard, with nfs mounts and local bind mounts that use paths relative to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind mount dirs in that same directory, adjust YOURPASS and the mounts/nfs shares, and it will keep working wherever you move the directory, as long as the host has docker and an image is available for its architecture.

    version: '3'
    services:
      picard:
        image: mikenye/picard:latest
        container_name: picard
        environment:
          KEEP_APP_RUNNING: 1
          VNC_PASSWORD: YOURPASS
          GROUP_ID: 100
          USER_ID: 1000
          TZ: "UTC"
        ports:
          - "5810:5800"
        volumes:
          - ./picard:/config:rw
          - dlbooks:/downloads:rw
          - cleanedaudiobooks:/cleaned:rw
        restart: always

    volumes:
      dlbooks:
        driver_opts:
          type: "nfs"
          o: "addr=NFSSERVERIP,nolock,soft"
          device: ":NFSPATH"
      cleanedaudiobooks:
        driver_opts:
          type: "nfs"
          o: "addr=NFSSERVERIP,nolock,soft"
          device: ":OTHER NFSPATH"

  • dockge is amazing for people that see the value in a gui but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like portainer does. If you decide you don't like dockge, you just go back to the cli and do your docker compose up -d --force-recreate.
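
    Since dockge just manages plain compose project directories on disk, falling back looks something like this; /opt/stacks is assumed as the stacks directory and myapp is a made-up stack name, so adjust both to your setup:

    # the stack dockge created is an ordinary compose project directory
    cd /opt/stacks/myapp
    # manage it directly with the compose cli
    docker compose pull
    docker compose up -d --force-recreate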

  • jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores the shared network folder and has jellyfin stream it over https. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files are on a nas separate from the docker instance, so that you avoid streaming the data from the nas to the jellyfin docker image on a different computer and then back out to the third computer/phone/whatever that is the client. This matters when the nas has a beefy network connection but the virtualization server has much less or is sharing among many vms/docker containers (i.e. I have 10 gig networking on my nas and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). They have the correct settings to do this right built into jellyfin and yet they snatched defeat from the jaws of victory (a common theme for jellyfin unfortunately).

  • just fyi, direct streaming isn't really direct streaming as you may think of it if you have specified samba shares on your nas instead of something on the vm running jellyfin. It will still pull from the nas into jellyfin and then http stream from jellyfin, which is super annoying.

  • That's pretty much exactly my story, except I went with fastmail.com, and mullvad for vpn (you really need to test with some script to find your best exit nodes; I forget which one I used ages ago, but it found me a couple of nodes about 1000 km away from my location, in a different country, that I can do nearly a gig through routinely. Maybe it was this script? https://github.com/bastiandoetsch/mullvad-best-server). I went with pcloud for a bit but tailscale, and now currently netbird, make it kind of irrelevant since it's so easy to get all my devices able to communicate back to my house file server. I want to like hetzner so bad, but every time I try it the latency to north america just kills me, and the north american offering was really far away and undeveloped last time I tried it.
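
    For the exit-node testing part, a rough sketch of the idea; the hostnames are made-up placeholders (the real candidates come from your vpn provider's server list, or just use the mullvad-best-server script linked above):

    #!/bin/sh
    # ping each candidate endpoint and print average latency, lowest first
    # (hostnames below are placeholders, not real server names)
    for host in node1.example.net node2.example.net node3.example.net; do
      avg=$(ping -c 5 -q "$host" | awk -F'/' 'END { print $5 }')
      echo "$avg $host"
    done | sort -n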