• 0 Posts
• 5 Comments
Joined 8 months ago
Cake day: November 28th, 2024

  • Now it takes four engineers, three frameworks, and a CI/CD pipeline just to change a heading. It’s inordinately complex to simply publish a webpage.

    Huh? I get that compiling a webpage that includes JS may appear more complex than uploading some unchanged HTML/CSS files, but I’d still argue you should use a build system, because what you want to write and what is best delivered to browsers are usually two different things.

    Said build systems easily make room for JS compilation in the same way you can compile Sass to CSS and, say, Pug or Nunjucks to HTML. You’re serving two separate concerns if you care at all about BOTH optimisation and devx (quick sketch below).

    Serious old grump or out of the loop vibes in this article.
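    For illustration, a minimal sketch of the kind of pipeline I mean, assuming the `sass` and `pug-cli` npm packages are installed; the src/dist layout is just a placeholder:

    ```sh
    # Compile Sass to minified CSS (dart-sass supports src:dest directory pairs)
    npx sass src/styles:dist/styles --style=compressed

    # Render Pug templates out to plain HTML (the pug binary comes from pug-cli)
    npx pug src/pages --out dist
    ```

    What you upload is still just static HTML/CSS; the build step only changes what you get to write.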



  • This doesn’t exactly help your situation, but as a developer who builds and publishes Docker images most days of my work week, I’d not suggest anyone do the same on a drive smaller than 512GB. Docker builds create layers on the fly as changes are seen, and these can range from a few bytes to hundreds of megabytes each. Casual Docker development will easily chew through a few hundred gigs after a while, in my experience.

    Just trying to put things in perspective: sadly, 70GB is peanuts here if you’re working with popular software stacks. And yes, Docker Desktop needs a virtual disk image on top of all that; because of the layer churn above, I usually have mine set to over 200GB.
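    For anyone fighting this, Docker’s own CLI will show you where the space went and claw most of it back:

    ```sh
    # Show disk usage broken down by images, containers, volumes and build cache
    docker system df

    # Drop dangling build cache left behind by incremental builds
    docker builder prune

    # More aggressive: also remove all unused images, containers and networks
    docker system prune -a
    ```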


  • Get everything migrated across to my new k3s cluster. I’ve been using larger boxes (Unraid) and a couple of 1L mini PCs with Proxmox to run my homelab until now… but I work with Kubernetes and Terraform daily and wanted something declarative.

    I’ve now got k3s set up with a handful of services migrated (Immich, Tailscale, Nextcloud etc), but there’s still a ton to go (arr suite, various databases, Plex, Tautulli etc). It’s another job entirely.

    I love it but sometimes I wonder why I do this to myself 😅
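    To give a flavour of the declarative payoff: each service becomes a manifest you can version-control and re-apply. A minimal sketch for one of the pending migrations (image tag and port are from memory, so treat them as placeholders):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tautulli
      namespace: media
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tautulli
      template:
        metadata:
          labels:
            app: tautulli
        spec:
          containers:
            - name: tautulli
              image: ghcr.io/tautulli/tautulli:latest  # placeholder tag
              ports:
                - containerPort: 8181  # Tautulli's default web port
    ```

    One `kubectl apply -f tautulli.yaml` and the cluster converges on it; that’s the whole appeal over hand-managed VMs.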


  • I appreciate the sentiment here, though I would agree that it is certainly paranoid 😅. I think if you’re careful with what you self-host, where you install it from, how you install it and then what you expose, you can keep things sensible and reasonably secure without the need for strong isolation.

    I keep all of my services in my k3s cluster. It spans 4 PCs and sits in its own VLAN. There aren’t any particular security precautions I take beyond that. I’m a developer and can do a reasonable job of verifying each application I install, but of course I accept the risk of running someone else’s software in my homelab.

    I don’t expose anything except Plex publicly. Everything else goes over Tailscale. I practise 3-2-1 backups with local disks and media, plus offsite copies to Backblaze, and I occasionally rotate physical media backups offsite as well.

    I’d be interested to see what others think about this… most hosting solutions leave it all open by default. I think there are a lot of small, easy ways one can practise good lab hygiene without air-gapping.
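    As a concrete example of the Tailscale-only part: if you run the Tailscale Kubernetes operator, a single annotation puts a Service on your tailnet with no public ingress at all (service name and ports here are hypothetical):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: immich
      annotations:
        tailscale.com/expose: "true"  # requires the Tailscale operator
    spec:
      selector:
        app: immich
      ports:
        - port: 80
          targetPort: 2283  # Immich server's default port
    ```

    Everything stays reachable for you, and nothing new is listening on the WAN.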