Physics is like sex: sure, it may give some practical results, but that’s not why we do it.
— Richard P. Feynman
I think the same is true of self-hosting for a lot of folks. Sure, having our data in our own hands is great, and yes, avoiding vendor lock-in is nice. But at the end of the day, it’s nice to have computers seem “fun” again.
At least, that’s my perspective.
99% of people want computers to serve them, not to be fun. My SO couldn’t care less how much fun I have setting up home assistant. They just want to turn on the lights.
Well, yes, most people want computers to be unnoticeable and boring. I agree, we need more boring tech that just does a job and doesn’t bother us. That said, plenty of people find self-hosting to be fun - your SO and mine excepted, of course.
most people want computers to be unnoticeable and boring. I agree, we need more boring tech
professional UI designers don’t seem to agree. they always feel the urge to come up with the next worst design
For me it’s not even about better or worse, but about different. For them it’s a nice iteration after many years, but for me it is one of the dozens of apps I use irregularly that suddenly behaves and works differently and forces me to relearn things I don’t gain anything from. Since each of those apps gets that treatment every once in a while, I end up having to adjust all the damn time for something else.
I would really like it if we could go back to functional applications being sold as-is, without forced updates. I do not need constant changes all the time. WinAmp hasn’t changed in 20 years and still does exactly what it is supposed to. I could probably spin up an old MS Word 2000 and it would work just like it did 20 years ago.
Many modern apps however change constantly. No wonder they all lean towards subscriptions if they “have to” work on it all the time. But I, as a user, don’t even want that. I want to buy the thing that does what it’s supposed to and then I want it to stay that way.
Sure, but did your SO set up home assistant?
No. They just want to buy an Apple home thingy 🥹
Yeah, that kinda reinforces their point.
Personally I don’t enjoy setting things up. I do enjoy not being tied down to evil corporations.
I do like setting things up.
Then I realise I need to fuck around with DNS to get it working nicely.
This same argument goes for Linux as well. Linux allows you to turn the computer into anything you want it to be!
Recently getting back into Linux, it’s like choose your own adventure in computing. It’s been fun.
Self-hosting, Linux, vim; hell, even gardening – they all fit this saying (axiom?) pretty well.
People are looking to reclaim their agency and autonomy. We over-relied on corpos, and they used that as an opportunity to price-gouge us.
Escaping vendor lock-in. It’s why people hate the cloud when it used to be the answer for everything. You make a good product that can only be used with your hardware/software, whatever, and people run from that shit because it’s abused more often than not.
Apple is the biggest example of this. Synology is getting worse and worse. Plex not far behind either.
I recently discovered that Plex no longer works over the local network if you lose internet service. A) You can’t log in without internet access. B) Even if you’re already logged in, apps do not find and recognize your local server without internet access. So, yeah, Plex is already there.
I try to explain this to the Plex cultists and they usually have one of two responses:
- “Why would I be without internet?”
- “How is that helpful?”
Takes every ounce of willpower I have to not eye roll.
A lot of people that run Plex have a Jellyfin container on standby, or they’ll use Plex for friends and family and use JF at home.
What is the point of Plex? I just went straight for Jellyfin and it does everything I need and then some. Is it just that people went with Plex initially and then stuck with it as it got enshittified?
Plex has better security, federates and shares with other plex servers and generally is less hands-on for transcoding.
But, I don’t use it. I like Jellyfin. It’s free and while it may lack a few features, it isn’t worse by any measure.
Plex has better security, federates and shares with other plex servers and generally is less hands-on for transcoding.
Regarding security, it’d be interesting to see how secure it actually is. Yeah, the individual endpoints might be protected better, but is Plex the company maybe a single point of failure?
generally is less hands-on for transcoding.
Yeah, I’m not gonna give you that one. It’s a single option that you toggle. Wanna use your Nvidia GPU? Enable NVENC. AMD GPU/CPU? AMF. Intel CPU? QSV.
Really not that hard…
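For context, a rough sketch of what that looks like on the Docker side for Jellyfin (paths and names here are just examples, check the Jellyfin docs for your setup). Intel QSV and AMD AMF/VAAPI only need the render device passed through; NVENC additionally needs the NVIDIA container toolkit on the host.

```yaml
# hypothetical compose snippet: pass the GPU render node into Jellyfin,
# then flip the hardware acceleration option (QSV/AMF/NVENC) in the admin dashboard
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # Intel QSV / AMD VAAPI render nodes (example host path)
    volumes:
      - ./config:/config    # example paths, adjust to your layout
      - ./media:/media
    restart: unless-stopped
```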
Because why run one server for all your needs when you can double up, right? /s
I didn’t say it was a good idea…
Kodi is calling.
What!?! Damn. I didn’t know it got that enshitty already.
No way, Plex is completely enshittified.
I wanted to ask where the border of self-hosting is. Do I need to have the storage and computing at home?
Is a cheap VPS on Hetzner, where I manually installed Python, PieFed and its Postgres database, but also nginx and letsencrypt, and pointed my domain to it, self-hosting?
I would say yes, it’s still self-hosting. It’s probably not “home labbing”, but it’s still you responsible for all the services you host yourself, it’s just the hardware which is managed by someone else.
Also don’t let people discourage you from doing bare-metal.
That’s actually a good point: self-hosting and home-labbing are similar things, but they don’t necessarily mean the same thing.
Interesting distinction. I use a small managed vps, but didn’t consider that self-hosting, personally. I do aspire to switch to a homelab and figure out dynamic DNS and all that one day.
It depends who you ask (which we can already tell hehe), but I’d say YES, because you’re the one running the show – you’re free to grab all of your bits and pieces at any time, and move to a different provider. That flexibility of not being locked into one specific cloud service (which can suddenly take a bad turn) is what’s precious to me.
And on a related note, I also make sure that this applies to my software-stack too – I’m not running anything that would be annoying to swap out if it turns bad.
It’s self hosting as long as you are in control of the data you’re hosting.
I would say there’s no value in assigning such a tight definition to self-hosting, insisting that you must use your own hardware and have it on premises.
I would define self-hosting as setting up software/hardware to work for you when turn-key solutions exist, for one reason or another.
Netflix exists, but we self-host Jellyfin. It doesn’t matter whether it’s on our hardware or not. What matters is that we’re not using Netflix.
Is a cheap VPS on Hetzner, where I manually installed Python, PieFed and its Postgres database, but also nginx and letsencrypt, and pointed my domain to it, self-hosting?
I don’t get hung up on the definitions and labels. I run a hybrid of 3 VPSes and one rack in the closet. I’m totally fine with you thinking that’s not self-hosting or homelabbing. LOL. I have a ton of fun doing it, and that’s the main reason why I do it: to learn and have fun. It’s like producing music, or creating bonsai, or any of the other many hobbies I have.
Your stuff is still in the cloud, so I would say no. It’s better than using the big tech products, but I wouldn’t say it’s fully “self hosted”. Not that that really makes much of a difference. You’re still pretty much in control of everything, so you should be fine.
Where is the tipping point though? If I have a server at my parents’ house, and they live in Germany while I live in Korea, does my dad host it then, because he is paying for the electricity and the internet access and makes sure those things work?
Your parents’ house isn’t the cloud, so yeah, it’s self hosted. The “tipping point” is whether you’re using a hosting provider.
They are using a hosting provider - their dad.
“The cloud” is also just a bunch of machines in a basement. Lots of machines in lots of “basements”, but still.
“hosting provider” in this instance I think means “do you pay them (whoever has the hardware in their possession) a monthly/quarterly/yearly fee”
otherwise you can also say “well ACTUALLY your isp is providing the ability to host on the wan so they are the real hosting provider” and such…
Their dad is not a hosting provider. I mean, maybe he is, but that would be really weird.
Isn’t my dad the hosting provider? I ordered the hardware, he connected it to his switch and his electricity and pressed the button to start it the first time. From there on I logged in to his VPN and set up the server like I would at Hetzner.
But you’re right, it doesn’t really make a difference. I feel the only difference it makes for me is where I post my questions on Lemmy: in a !selfhosting community or a !linux community.
From a feeling perspective, even if I use Hetzner’s cloud, I feel I self-host my single-user PieFed instance (and Matrix, my other websites, Mastodon, etc.), because I have to perform basically the same steps as for the things I’m really hosting at home, like open-webui, Immich, PeerTube.
A hosting provider is a business. If your dad is a business and you are buying hosting services from him, then yes, he is a hosting provider and you are not self hosting. But that’s not what you’re doing. You’re hosting on your own hardware on your family’s internet. That’s self hosting.
When you host on Hetzner, you’re hosting on their hardware using their internet. That’s not self hosting. It’s similar, cause like you said, you have to do a lot of the same administration work, but it’s not self hosting.
Where it gets a little murky is rack space providers. Then you’re hosting on your own hardware, but it’s not your own internet, and there’s staff there to help you… kinda iffy whether you’re self hosting, but I’d say yeah, since you own the hardware.
I’d say you need storage. Once you get storage, use cases start popping up into view over time.
Personally, I’d say no. At that point you are administering it, not hosting it yourself.
Why wouldn’t you just use Docker or Podman?
Manually installing stuff is actually harder in a lot of cases
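To illustrate: a minimal, hypothetical compose file where the whole “install” is one `docker compose up -d`. The image name and paths here are from memory, so double-check them against the project’s docs.

```yaml
# hypothetical example: a complete FreshRSS install in a few declarative lines
services:
  freshrss:
    image: freshrss/freshrss            # assumed image name, verify on Docker Hub
    ports:
      - "8080:80"                       # web UI on host port 8080
    volumes:
      - ./data:/var/www/FreshRSS/data   # assumed data path, check the image docs
    restart: unless-stopped
```

Compare that to manually setting up PHP, a web server, and a cron job for feed refreshes.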
Yeah, why wouldn’t you want to know how things work!
I obviously don’t know you, but to me it seems that a majority of Docker users know how to spin up a container, but have zero knowledge of how to fix issues within their containers, or how to create their own for their custom needs.
That’s half the point of the container… You let an expert set it up so you don’t have to know it on that level. You can manage far more containers this way.
OK, but I’d rather be the expert.
And I have no trouble spinning up new services, fast. Currently sitting at around ~30 Internet-facing services, 0 docker containers, and reproducing those installs from scratch + restoring backups would be a single command plus waiting 5 minutes.
I’d rather be the expert
Fair, but others, unless they are getting paid for it, just want their shit to work. Same as people who take their cars to a mechanic instead of wrenching on it themselves, or calling a handyman when stuff breaks at home. There’s nothing wrong with that.
I literally get paid to do this type of work and there is no way for me to be an expert in all the services that our platform runs. Again, that’s kind of the point. Let the person who writes the container be the expert. I’ll provide the platform, the maintenance, upgrades, etc… the developer can provide the expertise in their app.
30, that’s cute. I currently have 70 containers running on my home server. That doesn’t include any lab I run or the stuff I use at work. Containers make life much easier.

I also guarantee you don’t know those apps as well as you think you do. Just being able to install and configure something doesn’t mean you know their inner workings.

I used to do the same thing you do. Eventually, I decided I would rather spend my time doing other things, or learning certain things more in-depth, and be okay with a working knowledge of the rest. It can be fun and rewarding to do things the hard way, but don’t kid yourself and think you’re somehow superior for doing it that way.
Containers != services.
I don’t think I am better than anyone. I jumped into these comments because docker was pushed as superior, unprompted.
Installing and configuring does not an expert make, agreed; but that’s not what I said.
I would say I’m pretty knowledgeable about the things I host though, seeing as I am a contributor and/or package maintainer for a number of them…
Correct, not all containers are for services. I would never say that docker is superior. I would however say that containers are (I can be pedantic too). They’re version-controlled, they come with the correct dependencies, etc… There are many reasons why developing with containers is superior and I’m sure you’re aware of them already. Everyone is moving to do exactly that. There are always edge cases, but those are few and far between these days.
I use apps on my phone, but have no clue how to troubleshoot them. I have programs on my computer that I hardly know how to use, let alone know the inner workings of. How is running things in Docker any different? Why put down people who have an interest in running things themselves?
I know you’re just trying to answer the above question of “why do it the hard way”, but it struck me as a little condescending. Sorry if I’m reading too much into it!
No, I actually think that is a good analogy. If you just want to have something up and running and use it, that’s obviously totally fine and valid, and a good use-case of Docker.
What I take issue with is the attitude which the person I replied to exhibits, the “why would anyone not use docker”.
I find that to be a very weird reaction to people doing bare metal. But also I am biased. ~30 Internet facing services, 0 docker in use 😄
This is interesting to me. I run all of my services, custom and otherwise, in docker. For my day job, I am the sole maintainer of our entire docker environment: I build and deploy internal applications to custom docker containers and maintain all of the network routing and server architecture. After years of hosting on bare metal, I don’t know if I could go back to the occasional dependency hell of hosting a ton of apps at the same time. It is just too nice not having to think about what version of X software I am on, or making sure there are no incompatibilities. Just managing a CI/CD workflow on bare metal makes me shudder.
Not to say that either way is wrong, if it works it works imo. But, it is just a viewpoint that counters my own biases.
Sorry, I should have mentioned: liking bare-metal does not mean disliking abstraction.
I would absolutely go insane if I had to go back to installing and managing each and every service in its preferred way/config file/config language, and to DIY backup solutions, and so on.
I’m currently managing all of that through a single Nix config, which not only takes care of 90% of the overhead, but also keeps all the configuration in a single, self-documenting language.
You can customize or build custom containers with a Dockerfile
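Something like this in a compose file (names hypothetical); the Dockerfile next to it would start FROM the upstream image and layer your changes on top:

```yaml
# sketch: build your own customized image instead of pulling the stock one
services:
  myapp:                   # hypothetical service name
    build:
      context: .           # directory containing your custom Dockerfile
    ports:
      - "8080:8080"
```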
Also, I want to know how containers work. That’s way more useful.
I did that first, but it always required many more resources than doing it yourself, because every docker container starts its own database and its own nginx/apache server in addition to the software itself.
Now I have just one PostgreSQL instance running, with many users and databases on it. Also just one nginx, which does all the virtual-host stuff in one central place. And the things I install with apt or manually are set up similarly.
I use one docker setup for firefox-sync, but only because doing it manually is not documented, and even the docker way took quite some research.
What? No it doesn’t… You could still have just one PostgreSQL database if you wanted just one. It is a bit antithetical to microservices, but there is no reason you can’t do it.
But then you can’t just use the containers provided by the service developers, and you have to figure out how to redo their container, which in the end is more work than just running it manually.
Some examples:
- Lemmy: https://github.com/LemmyNet/lemmy-ansible/blob/main/templates/docker-compose.yml#L81
- Firefox Sync: https://github.com/mozilla-services/syncstorage-rs/blob/master/docker-compose.mysql.yaml#L13
- TinyTinyRSS: https://gitlab.tt-rss.org/tt-rss/tt-rss/-/blob/master/docker-compose.yml?ref_type=heads#L10
- Mastodon: https://github.com/mastodon/mastodon/blob/main/docker-compose.yml#L5
- PeerTube: https://github.com/Chocobozzz/PeerTube/blob/develop/support/docker/production/docker-compose.yml#L71
and many more.
Well, yes that’s best practice. That doesn’t mean you have to do it that way.
all of these run the database in a separate container, not inside the app container. the latter would be hard to fix, but the former is just done that way to make documentation easier, to be able to give you a single compose file that is also functional in itself. none of them use their own builds of the database server (though lemmy with its postgres variant may be a bit of an outlier), so they are relatively easy to configure for an existing db server.
all I do in cases like this is look up the database initialization command (in the docker compose file), run that in my primary postgres container, then create a new docker network and attach it to both the postgres stack and the new app’s stack (stack: the container composition defined by the docker compose file). then I tell the app container, usually through env vars or command line parameters embedded in the compose file, that the database server is at hostname xy, and docker’s internal DNS server will know that for hostname xy it should return the IP address of the container named xy, through the appropriate docker network. same for the user and password for the connection. from then on, from the app’s point of view, my database server in that other container is just like a dedicated physical postgres machine put on the network with its own cable going to a switch.
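roughly what that looks like in compose terms - all names here are made up, and the exact env var names depend on the app:

```yaml
# sketch of the above: the app's stack joins an external network that the
# shared postgres stack is also attached to
# (created once with: docker network create shared-db)
services:
  someapp:
    image: example/someapp      # hypothetical app image
    environment:
      DB_HOST: postgres         # docker's DNS resolves the postgres container's name
      DB_NAME: someapp
      DB_USER: someapp
      DB_PASSWORD: secret
    networks:
      - shared-db
networks:
  shared-db:
    external: true              # the shared postgres container sits on this network too
```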
barring very special circumstances where the app needs a custom build of postgres, they can share a single instance just fine. but in that case you would have to run 2 postgres instances even without docker, or migrate to the modified postgres, which is an option with docker too.
I have very rarely run into such issues. can you give an example of something that works like that? it sounds very half-assed by the developer. only pihole comes to mind right now (except for the db part, because I think it uses sqlite)
edit: I now see your examples
You absolutely can. It’s not like the developers of postgresql maintain a version of postgresql that only allows one db. You can connect to that db and add however many things you want to it.
Learn Podman since Docker has some licensing restrictions in some cases.
deleted by creator
It is less user-friendly, but theoretically more powerful and secure.
The learning curve can be steep but if you have ever worked with config files it isn’t bad.
really? like what? i’ve been using docker completely free and unrestricted - at least i think so haha
I think the restrictions are just for publishing containers on Docker Hub. If you aren’t doing that, you aren’t impacted.
And Docker Desktop on Windows.
It doesn’t impact the “Linux native” people but for those starting out on Windows it is a problem.
That’s like adding insult to injury… Docker Desktop is already way worse than running on Linux!
I’m curious if this community would do a community survey.
Ethan Sholly has done surveys before on his website, selfh.st.
edit: I’m an idiot
Don’t be hard on yourself.
I refuse to answer that or any other question.
What’s your favorite color?
Good for her
It’s all about privacy.
I am amazed at the services on offer that want to run rampant in the home.
My ISP offers fiber, but only if you also sign up for managed wifi where they manage your internal network… no way.
I got a quote for solar power… but they must use a third-party cloud to manage your power, and it uses Ethernet-over-powerline… If you already use Ethernet-over-powerline, then it does whatever it wants on your home network… no way.
Cell phones? They all go on a guest wifi… not on my home network.
Yeah, I just have AI build my UIs and am slowly spinning up my own version of the web.