Posts 13 · Comments 115 · Joined 2 yr. ago

  • I'm a +1 on this. A secondhand Synology setup with some RAID will delay this decision for a few years and give you time to build your expertise on the other aspects without worrying much about data security. It's a pity you're nearly at the limit of 8TB - otherwise I would have suggested a two-bay NAS with 2x8TB. If you're going to use secondhand drives (I do, because I'm confident in my backup systems), maybe 4x6TB is better, since bigger drives are harder to come by secondhand. Plenty of people won't be comfortable with secondhand spinning rust anyway - if that's you, then a two-bay with 2x12TB might be a good choice.

    The main downside (according to me) of a Synology is no ZFS, but that didn't bother me until I was two years in and the owner of three of them.

  • Thanks for this thoughtful write-up of your process. I'm increasingly thinking about what context the model has and keeping it as focused as possible - both to reduce token usage, and to ensure there's no cruft in it that could send the model down an un-useful path. The prompts for this read like what I imagine a conversation with a junior developer would be when handing off a task.

    In practice, this usually means clearing the context after quite small changes and then prompting for the next one with just what I think it's going to need. I guess this is 'context engineering', although that sounds like too fancy a term for it.

  • Proxmox on the metal, then every service as a Docker container inside an LXC or VM. Proxmox does nice snapshots (to my NAS), making it a breeze to move them from machine to machine or blow away the Proxmox install and reimport them. All the Docker Compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with Ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, Tailscale setup, etc.

  • I'm as disappointed in our pollies as anyone, but if we don't want corruption we need to pay them something similar to business executives. I'd go further than that - we should have generous superannuation so they're not tempted to curry favour with powerful interests in the hope of working for them when they leave. Both these measures are insurance against corruption.

  • Thanks for posting, I'd never heard of the Joe Walker Podcast, but will be checking it out.

    For anyone interested in an Australian focused take on international relations who hasn't discovered the "Australia in the World" podcast yet, I highly recommend that as well.

  • Batocera surely?

  • Time to book another press conference at that landscaping company.

  • Peter Knapp - On the road to Thoiry (1970)

  • 100% this. And Lenovos and HPs designed for the business market are generally a pleasure to work on (in the hardware sense) if you need to, with good manuals and easy-to-find secondhand spare parts.

  • "The easier a statement is to disprove, the more of a power move it is to say it, as it symbolizes how far you’re willing to go." - i.e. "faith" in religion.

  • I run nearly all my Docker workloads with their data just in the home directory of the VM (or LXC, actually, since that's how I roll) I'm running them in, but a few have data on my separate NAS via an NFS share - so through a switch etc. - with no problems, just slowish.
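
For NFS-backed container data, Docker's local volume driver can also mount the share itself, so containers see a normal named volume. A config sketch - the server address (192.168.1.50), export path (/export/appdata), and volume name are made up:

```shell
# Define a named volume backed by an NFS share (Docker mounts it on first use)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw \
  --opt device=:/export/appdata \
  nas_appdata
```

The same through-the-switch performance caveat applies, but the mount travels with the volume definition instead of living in the host's fstab.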

  • Great. There are two volumes there - firefly_iii_upload & firefly_iii_db.

    You'll definitely want to `docker compose down` first (to ensure the database is not being updated), then:

    ```
    docker run --rm \
      -v firefly_iii_db:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."
    ```

    and

    ```
    docker run --rm \
      -v firefly_iii_upload:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."
    ```

    Then copy those two .tar files to the new VM and create the new empty volumes there:

    ```
    docker volume create firefly_iii_db
    docker volume create firefly_iii_upload
    ```

    And untar your data into the volumes:

    ```
    docker run --rm \
      -v firefly_iii_db:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"

    docker run --rm \
      -v firefly_iii_upload:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"
    ```

    Then make sure you've manually brought over the compose file and those two .env files, and you should be able to `docker compose up` and be in business again. Good choice with Proxmox, in my opinion.
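
Before copying the tars across, it's worth listing their contents to catch an empty or truncated archive. The same tar invocation can be rehearsed without Docker - demo_from and demo_vol.tar here are throwaway names, not the real volumes:

```shell
# Stand-in for the volume contents
mkdir -p demo_from
echo "sample row" > demo_from/data.txt

# Same archive step the alpine container runs ("cd ... && tar cf ... .")
(cd demo_from && tar cf ../demo_vol.tar .)

# List the archive - run this against the real .tar files too before copying
tar tf demo_vol.tar
```

If a listing comes back empty, the volume name in the -v flag probably didn't match an existing volume.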

  • I'm not clear from your question, but I'm guessing you're talking about data stored in Docker volumes? (If they're bind mounts you're all good - you can just copy the data.) The compose files I found online for Firefly III use volumes, but Hammond looked like bind mounts. If you're not sure, post your compose files here with the secrets redacted.

    To move data out of a Docker volume, a common way is to mount the volume into a temporary container and copy it out. Something like:

    ```
    docker run --rm \
      -v myvolume:/from \
      -v $(pwd):/to \
      alpine sh -c "cd /from && tar cf /to/myvolume.tar ."
    ```

    Then on the machine you're moving to, create the new empty Docker volume and do the temporary copy back in:

    ```
    docker volume create myvolume
    docker run --rm \
      -v myvolume:/to \
      -v $(pwd):/from \
      alpine sh -c "cd /to && tar xf /from/myvolume.tar"
    ```

    Or, even better, just untar it into a data directory under your compose file and bind mount it so you don't have this problem in future. Perhaps there's some reason why Docker volumes are good, but I'm not sure what it is.
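
The bind-mount version of the restore is just plain tar into a directory next to the compose file - a sketch with made-up names (seed stands in for the old volume's contents):

```shell
# Fake up an archive like the one exported from the old volume
mkdir -p seed
echo "hello" > seed/file.txt
(cd seed && tar cf ../myvolume.tar .)

# Untar into ./data next to the compose file
mkdir -p data
(cd data && tar xf ../myvolume.tar)

# In the compose file, the service then mounts it directly, e.g.:
#   volumes:
#     - ./data:/path/inside/container   # container path: whatever the image expects
```

From then on the data is just files on the host - visible, greppable, and trivially copied along with the compose file.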

  • Season 2

  • Thanks - I have now! It looks like updates of repos I've starred? But I'll never go there again, and I suggest OP not do that either if it's upsetting to them. I just go to my profile, or the project I'm interested in.

  • I'm local-first - stuff I'm testing, playing with, or "production" stuff like Jellyfin, Forgejo, AudioBookshelf, Kavita etc. etc. Local is faster, more secure, and storage is cheap. But some of my other stuff needs 24/7 access from the internet - websites and web apps - and that goes on the VPS.

  • Selfhosted @lemmy.world

    Tailscale MagicDNS issues since 1.84.1 mac?

  • Selfhosted @lemmy.world

    Good experience with neko remote browser

  • Amateur Radio @sh.itjust.works

    Frequency physics

  • RetroGaming @lemmy.world

    Gamers go offline in retro console revival | The Guardian

    www.theguardian.com/games/2025/feb/15/theres-no-stress-gamers-go-offline-in-retro-console-revival
  • Selfhosted @lemmy.world

    Bathroom scale options?

  • RetroGaming @lemmy.world

    Powkiddy RGB10 Max 3 - first impressions

  • Selfhosted @lemmy.world

    Beware Hollywood’s digital demolition: it’s as if your favourite films and TV shows never existed

    www.theguardian.com/commentisfree/2024/oct/01/hollywood-digital-demolition-films-tv-shows-wiped
  • Selfhosted @lemmy.world

    Selfhosted S3 compatible recommendations?

  • Selfhosted @lemmy.world

    ‘My whole library is wiped out’: what it means to own movies and TV in the age of streaming services

    www.theguardian.com/media/article/2024/may/14/my-whole-library-is-wiped-out-what-it-means-to-own-movies-and-tv-in-the-age-of-streaming-services
  • Docker @programming.dev

    Confused about image digests

  • Selfhosted @lemmy.world

    Certbot is great. Let's Encrypt is great.

  • Selfhosted @lemmy.world

    wildcard email hosting/forwarding?

  • Selfhosted @lemmy.world

    Cancelled Dropbox