Posts
0
Comments
17
Joined
1 yr. ago

  • I post there every 6-12 months in the hope of receiving some help or intelligent feedback, but usually just have my question locked or removed. The platform is an utter joke and has been for years. AI was not entirely the reason for its downfall imo.

  • Deleted

    Permanently Deleted

  • Success story here. 6+ years running pihole on proxmox as my primary DNS for everything on my network. It’s never missed a beat, never crashed. I update infrequently. It’s just good software.

  • Coffeescript..

  • Looks fantastic! I’m going to install it on my cluster this coming week for sure. Didn’t see it mentioned, but is there any plan to support locations or routes? Having location data along with some basic plotting could make this perfect for travel journaling too, which I’d be interested in..

  • Pregananat?

  • The author leans heavily on feelings to describe their distaste for MCP. It doesn’t read like a well-informed article, in my opinion.

    MCP servers are like proxies that can adapt their advertised tools and resources based on external conditions. They’re far from static in nature and can provide an entry point for otherwise hidden or secured functionality. E.g. some actions may only be available via an MCP server. File resources may sit behind an MCP server and nowhere else. Tools may be relevant for a certain agent and not others, or they may become unavailable.

    That, and regular APIs often don’t expose data in a streaming capacity that LLMs benefit from. That’s why you see MCP servers serving HTTP streams or SSE.

    Static files make this inflexible for what, simplicity?

    At least we have a standard now, for this kind of thing. Static files would be a lazy, half-arsed solution at best.
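
    To make the “far from static” point concrete, here’s a minimal TypeScript sketch of a tool list that changes with external conditions. All names here (`listTools`, the tool entries) are invented for illustration and aren’t from any real MCP SDK:

    ```typescript
    // A tool as an MCP-style server might advertise it.
    type Tool = { name: string; description: string };

    // Hypothetical: the advertised tool list depends on who is asking
    // and on the health of a backing service. A static file can't do this.
    function listTools(userIsAdmin: boolean, serviceHealthy: boolean): Tool[] {
      const tools: Tool[] = [
        { name: "search", description: "Search public documents" },
      ];
      if (userIsAdmin) {
        // Privileged tools only appear for privileged callers.
        tools.push({ name: "delete_document", description: "Remove a document (admin only)" });
      }
      if (!serviceHealthy) {
        // Tools can disappear when their backing service is down.
        return tools.filter((t) => t.name !== "search");
      }
      return tools;
    }
    ```

    Same endpoint, three different tool lists depending on context — that’s the flexibility a static manifest gives up.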

  • Bachelor party for sure

  • They definitely look light brown / yellow to me.. maybe it’s just a casualty of the camera sensor? There’s enough green contrast though.

  • Deleted

    Permanently Deleted

  • I’ve already reached out to several in my country, on the maybe list. Annoying to see this crop up again.

  • And Tuxedo in Germany - just got my InfinityBook pro 14 and it’s been great.

  • Now it takes four engineers, three frameworks, and a CI/CD pipeline just to change a heading. It’s inordinately complex to simply publish a webpage.

    Huh? I mean I get that compiling a webpage that includes JS may appear more complex than uploading some unchanged HTML/CSS files, but I’d still argue you should use a build system because what you want to write and what is best delivered to browsers is usually 2 different things.

    Said build systems easily make room for JS compilation in the same way you can compile SASS to CSS and say PUG or nunjucks to HTML. You’re serving 2 separate concerns if you at all care about BOTH optimisation and devx.
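
    As a toy illustration of that split between what you write and what gets shipped — `renderPage` is a made-up helper standing in for a real SASS/PUG/nunjucks pipeline, not any actual library:

    ```typescript
    // What you want to author: structured data.
    // What browsers want delivered: one flat HTML string.
    function renderPage(title: string, paragraphs: string[]): string {
      // The "compile" step: wrap each paragraph and assemble the document.
      const body = paragraphs.map((p) => `<p>${p}</p>`).join("");
      return (
        `<!doctype html><html><head><title>${title}</title></head>` +
        `<body><h1>${title}</h1>${body}</body></html>`
      );
    }
    ```

    Even at this scale, the authoring format and the delivery format are two different things, which is the whole argument for a build step.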

    Serious old grump or out of the loop vibes in this article.

  • Both are rubbish in my experience - both on the development side and installation side. To be honest I don’t love building any of the package formats for Linux, and prefer installing deb/rpm. Old school I guess.

  • This is why I unplugged my TV from the internet some time ago. It’s been bad for a while but this is insane.

  • Get everything migrated across to my new k3s cluster. I’ve been using larger boxes (unraid) and a couple of 1L mini PCs with proxmox to run my homelab until now.. but I work with kubernetes and terraform daily and wanted something declarative.

    I’ve now got k3s setup with a handful of services migrated (Immich, Tailscale, Nextcloud etc) but there’s still a ton to go (arr suite, various databases, Plex, Tautulli etc). It’s another job entirely.

    I love it but sometimes I wonder why I do this to myself 😅

  • I appreciate the sentiment here, though I would agree that it is certainly paranoid 😅. I think if you’re careful with what you self-host, where you install it from, how you install it and then what you expose, you can keep things sensible and reasonably secure without the need for strong isolation.

    I keep all of my services in my k3s cluster. It spans 4 PCs and sits in its own VLAN. There aren’t any particular security precautions I take here. I’m a developer and can do a reasonable job verifying each application I install, but of course accept the risk of running someone else’s software in my homelab.

    I don’t expose anything except Plex publicly. Everything else goes over Tailscale. I practise 3-2-1 backups with local disks and media as well as offsite to Backblaze. I occasionally offsite physical media backups as well.

    I’d be interested to see what others think about this.. most hosting solutions leave it all open by default. I think there are a lot of small, easy ways one can practise good lab hygiene without air-gapping.