I understand that people enter the world of self-hosting for various reasons. I am dipping my toes into this ocean to try to get away from privacy-offending centralised services such as Google, Cloudflare, and AWS.
As I spend more time here, I realise that it is practically impossible, especially for a newcomer, to set up any usable self-hosted web service without relying on these corporate behemoths.
I wanted to have my own little static website and, alongside that, run Immich, but I find that without Cloudflare, Google, and AWS, I run the risk of getting DDoSed or hacked. Also, since the physical server will be hosted at my home (to avoid AWS), there is a serious risk of infecting all the devices at home as well (I am currently reading about VLANs to avoid this).
Am I correct in thinking that avoiding these corporations is impossible (and that I should make peace with this situation), or are there ways to circumvent these giants and still have a good experience self-hosting and using web services, even as a newcomer (all without draining my pockets too much)?
Edit: I was working from a lot of misconceptions and still have a lot to learn. Thank you all for your answers.
This is nonsense. A small static website is not going to be hacked or DDoSed. You can run it off a cheap ARM single-board computer on your desk, no problem at all.
What?
I’ve popped up a web server and within a day had so many hits on the router (thousands per minute) that performance tanked.
Yea, no, any exposed service will get hammered. Frankly I’m surprised that machine I set up didn’t get hacked.
Don’t leave SSH on port 22 open as there are a lot of crawlers for that, otherwise I really can’t say I share your experience, and I have been self-hosting for years.
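If you must expose SSH, a couple of sshd_config lines go a long way (a minimal sketch; the port number is just an example):

    # /etc/ssh/sshd_config
    Port 2222                    # non-default port cuts the crawler noise (example value)
    PermitRootLogin no
    PasswordAuthentication no    # key-only auth makes password brute-forcing pointless

Moving the port is only noise reduction, not real security; the key-only part is what actually matters.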
Am I missing something? Why would anyone leave SSH open outside the internal network?
All of my services have SSH disabled unless I need to do something, and then I only do it locally, and disable as soon as I’m done.
Note that I don’t have a VPS anywhere.
How do you reach into your server with SSH disabled without lugging a monitor and keyboard around?
My firewall, server, NAS and all my services have web GUIs. If I need SSH access all I have to do is enable it via web GUI, do what I need to, disable again.
If push comes to shove, I do have a portable monitor and a keyboard in storage if needed, but have not had the need to use them yet.
Some people want to be able to reach their server via SSH when they are not at home, but yes I agree in general that is not necessary when running a real home server.
Then use WireGuard to get into your local network. Simple as. Everything that doesn’t need to be accessed by the public (document servers, SSH, internal tools, etc.) can be accessed via VPN, while the port-forwarded services sit behind a reverse proxy, TLS, and an authentication layer like Authelia/Authentik for things that only a small group needs to access.
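For anyone who hasn’t tried it, a WireGuard tunnel is only a handful of lines per side; something like this sketch, where the keys, addresses, and port are placeholders:

    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # your laptop/phone
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32

The only thing you forward is UDP 51820, and WireGuard stays silent to unauthenticated packets, so port scanners see nothing there.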
Sorry, but there is 1 case in 10,000 where a home user would have to have publicly exposed SSH, and 9,999 cases out of 10,000 where it is not needed at all and would only be done out of laziness or lack of knowledge of the options.
Yeah, I guess I’ve never needed to do that. That may change as I’m thinking of moving all my services from Unraid to Proxmox and leaving Unraid for storage only.
I guess that’ll bring me back here soon enough.
I’ve been self-hosting a bunch of stuff for over a decade now, and have not had that issue.
Except for a Matrix server with open registration for a community, which people outside the community started to use.
Yes, my biggest mistake was leaving a VPS DNS server wide open. It took months for it to get abused, though.
Lol
What class of IP was it?
“You left stuff exposed” is the only explanation. I’ve had services running for years without a problem.
I can’t say I’ve seen anything like that on the webservers I’ve exposed to the internet. But it could vary based on the IP you have, if it’s already a target for something, I suppose.
Frankly I’m surprised that machine I set up didn’t get hacked.
How could it if all you had was a basic webserver running?
One aspect is how interesting you are as a target. What would a possible attacker gain by getting access to your services or hosts?
The danger of getting hacked is there, but you are not Microsoft, Amazon, or PayPal. Expect login attempts and port scans from actors who map out the internet. But I doubt someone would spend much effort breaking into your hosts if you do not make it easy (as in scripted-automatic-exploits and known-password-login-attempts easy).
DDoS protection isn’t something a tiny self-hosted instance would need (at least in my experience).
Firewall your hosts, maybe use a reverse proxy, and only expose the necessary services. Use secure passwords (a different one for each service), and add fail2ban or the like if you’re paranoid. Maybe look into MFA. Use a DMZ (yes, VLANs could be involved here). Keep your software updated so that exploits don’t work. Have backups for when something breaks or gets broken.
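As a concrete example of the firewall point, a default-deny setup with ufw is only a few commands (a sketch that assumes SSH stays LAN-only and a reverse proxy on 80/443; adjust the subnet to yours):

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp   # SSH from the LAN only
    sudo ufw allow 80/tcp     # reverse proxy
    sudo ufw allow 443/tcp
    sudo ufw enable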
In my experience the biggest danger to my services is my laziness. It takes steady, low-level effort to keep the instances updated and running. (Yes, there are automated update mechanisms, e.g. unattended-upgrades, but there are also backwards-incompatible changes in the software that require manual intervention from me.)
+1 for the main risk to my service reliability being me getting distracted by some other shiny thing and getting behind on maintenance.
I’m in this comment.
It’s crowded.
…maybe use a reverse proxy…
+1 post.
I would definitely suggest a reverse proxy. Caddy should be trivial in this use case.
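Trivial as in this is the entire Caddyfile for a static site, automatic TLS included (domain and path are placeholders):

    example.com {
        root * /var/www/site
        file_server
        encode gzip
    }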
cheers,
Reverse proxies don’t add security.
I don’t get why they say that. Sure, maybe the attackers don’t know that I’m on Ubuntu 21.2, but if they come across https://paperless.myproxy.com and the Paperless-NGX website opens, I’m pretty sure they know they just visited a Paperless install and can try the exploits they know. Yes, that last part was a bit snarky, but I am truly curious how a proxy helps. I’ve looked at proxies multiple times for my self-hosted stuff, but I never saw really practical examples of what to do and how to set one up to add a safety/security layer, so I always fall back to my VPN and leave it at that.
Not every path is mapped with the reverse proxy.
I’m positive that F5’s marketing department knows more than me about security and has no ulterior motive in making you think you’re more secure.
Snark aside, they may do some sort of WAF in addition to being a proxy. Just “adding a proxy” does very little.
So, you’ve gone from:
reverse proxies don’t add security
to:
“adding a proxy” does very little
What’s next?
Give up. You don’t know what the fuck you’re talking about.
… You’re joking right?
No.
I have a dozen services running on a myriad of ports. My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web, plus the fact that an entity now needs to know a hostname instead of just an exposed port. IPS signatures can help identify abstract hostname scans, and the proxy can be configured to permit only designated sources. Reverse proxies are also commonly used for SSL offloading, permitting clear-text observation of traffic between the proxy and the backing host. There are plenty of other use cases for them out there too; don’t think of it as some one-trick on/off access gateway tool.
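For anyone wondering what that mapping looks like in practice, here is a rough sketch in Caddyfile form (hostnames and ports are made up; 2283 happens to be Immich’s default):

    # one public 80/443, many internal services
    photos.example.com {
        reverse_proxy 127.0.0.1:2283
    }
    docs.example.com {
        reverse_proxy 127.0.0.1:8000
    }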
My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web,
The mapping is helpful but not a security benefit. The latter can be done with a firewall.
Paraphrasing - there is a bunch of stuff you can also do with a reverse proxy
Yes. But that’s no longer just a reverse proxy. The reverse proxy isn’t itself a security tool.
I see a lot of vacuous security advice in this forum. “Install a firewall”, “install a reverse proxy”, etc. This is mostly useless advice. Yes, do those things but they do not add any protection to the service you are exposing.
A firewall only protects you from exposing services you didn’t want to expose (e.g. NFS or some other service running on the same system), and the rproxy just allows for host based routing. In both cases your service is still exposed to the internet. Directly or indirectly makes no significant difference.
What we should be advising people to do is “use a valid ssl certificate, ensure you don’t use any application default passwords, use very good passwords where you do use them, and keep your services and servers up-to-date”.
A firewall allowing port 443 in and an rproxy happily forwarding traffic to a vulnerable server is of no help.
They’re a part of the mix. Firewalls, Proxies, WAF (often built into a proxy), IPS, AV, and whatever intelligence systems one may like work together to do their tasks. Visibility of traffic is important as well as the management burden being low enough. I used to have to manually log into several boxes on a regular basis to update software, certs, and configs, now a majority of that is automated and I just get an email to schedule a restart if needed.
A reverse proxy can be a lot more than just host based routing though. Take something like a Bluecoat or F5 and look at the options on it. Now you might say it’s not a proxy then because it does X/Y/Z but at the heart of things creating that bridged intercept for the traffic is still the core functionality.
You can’t map the same port to different services on a firewall; a reverse proxy lets you open one port and have multiple services behind it. A firewall can protect exposed services too: one, I geoip-block every country but my own; two, I use CrowdSec to block what it considers malicious IPs.
May not add security in and of itself, but it certainly adds the ability to have a little extra security. Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles. Use the firewall to expose only certain ports and destinations to your origins. Install a single wildcard cert and easily cover any subdomains you set up. There are even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (plus probably a lot more I’m missing) adds some level of layered security.
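On that last point, a sketch of what such an nginx rule can look like (the patterns are just examples of common scanner probes):

    # inside the server {} block, before the proxy locations
    location ~* (\.php$|\.env|wp-login|/\.git) {
        return 444;   # nginx-specific: close the connection without replying
    }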
Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles
So what? I can still access your application through the rproxy. You’re not protecting the application by doing that.
Install a single wildcard cert and easily cover any subdomains you set up
This is a way to do it but not a necessary way to do it. The rproxy has not improved security here. It’s just convenient to have a single SSL endpoint.
There are even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (plus probably a lot more I’m missing) adds some level of layered security.
If you do that, sure. But that’s not the advice given in this forum, is it? It’s “install an rproxy!” as though that alone has done anything useful.
For the most part, people in this forum seem to think that “direct access to my server” is unsafe, but that if you simply put a second hop in the chain, then you can sleep easily at night. And bonus points if that rproxy is a VPS or in a separate subnet!
The web browser doesn’t care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it’s game over.
The web browser doesn’t care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it’s game over.
Right!?
Your castle can have many walls of protection but if you leave the doors/ports open, people/traffic just passes through.
So I’ve always wondered this: how does a Cloudflare tunnel offer protection from the same thing?
They may offer some sort of WAF (web application firewall) that inspects traffic for potentially malicious intent. Things like SQL injection. That’s more than just a proxy though.
Otherwise, they really don’t.
A reverse proxy is used to expose services that don’t run on exposed hosts. It does not add security but it keeps you from adding attack vectors.
They usually provide load balancing too, also not a security feature.
Edit: in other words, what he’s saying is true and equivalent to “RAID isn’t backup”.
All reverse proxies I have used do rudimentary DDoS protection: rate limiting. Enough to keep your local script kiddie at bay, but not advanced stuff.
You can protect your SSH instance with rate limiting too, but you’ll likely do this in the firewall and not the proxy.
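For example, the classic iptables recipe rate-limits new SSH connections per source IP (a sketch, assuming SSH on port 22):

    # track new connections to port 22; drop a source that opens 4+ in 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name SSH
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT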
Drink less paranoia smoothie…
I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.
Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.
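For reference, a minimal fail2ban jail for SSH looks something like this (values are illustrative; tune to taste):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5      # ban after 5 failed logins...
    findtime = 10m    # ...within 10 minutes
    bantime  = 1h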
DDoS and hacking are like taxes: you should be so lucky as to have to worry about them, because that means you’re wildly successful. Worry about getting there first because that’s the hard part.
You don’t have to be successful to get hit by bots scanning for known vulnerabilities in common software (e.g. WordPress), but OP won’t have to worry about that if they keep everything up to date. However, this is also necessary when renting a VPS from said centralised services.
Well he specified static website, which rules out WP, but yes. If your host accepts posts (in the generic sense, not necessarily the HTTP verb POST), that raises tons of other questions, which frankly were already well addressed when I made my post.
he specified static website, which rules out WP
Oops missed that
EDIT: And I missed Immich too
A static website and Immich
Use any old computer you have lying around as a server. Use Tailscale to connect to it, and don’t open any ports in your home firewall. Congrats, you’re self-hosting and your risk is minimal.
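The whole setup is roughly this (a sketch; the serve line is optional and assumes a reasonably recent Tailscale client):

    # on the server
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up
    # optionally publish a local service to your tailnet only, e.g. something on port 8080
    sudo tailscale serve --bg 8080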
Exactly what I do, and it works like a dream. I had a VPS and nginx to proxy a domain to it, but got rid of it because I really had no use for it; the Tailscale method worked so well.
I’ve been thinking of trying this (or using Caddy instead of nginx) so I could get Nextcloud running on an internal server but still have an external entry point (spousal approval). But after setting up the subdomain, starting Caddy, and watching how many times that subdomain got scanned from IPs all over the world, I figured that’s not a good plan. And I’m a nobody and don’t promote my domain anywhere.
I feel like you have the wrong idea of what hacking actually is… But yes, as long as you don’t do anything too stupid, like forwarding all of your ports or going without any sort of firewall, the chances of you getting hacked are very low…
As for DDoSing, you can get DDoSed with or without self-hosting all the same, but I wouldn’t worry about it.
Exactly. Piss off a script kiddie and get DDoSed whether you’re self-hosting or not.
A VPS with fail2ban is all you need, really. Oh, and don’t make SSH accounts where the username is the password. That’s what I did once, but the hackers were nice: they closed the hole and then just used the box to run an IRC client, because the network and host were so stable.
Found out by accident; too bad they left their IRC username and password in cleartext. It was a fun week or so messing around with their channels.
Talk about a reverse UNO card.
The DDoS hype on this site is so overplayed. “Oh my god, my little self-hosted services are going to get attacked.” Is it technically possible? Yes, but it hasn’t been my experience.
DDoSing costs the attacker time and resources, so there has to be something in it for them.
Random servers on the internet are subject to lots of drive-by vuln scans and brute-force login attempts, but not DDoS, which is far more costly to execute.
99% of people think they are more important than they are.
If you THINK you might be the victim of an attack like this, you’re not going to be a victim of an attack like this. If you KNOW you’ll be the victim of an attack like this on the other hand…
DDoS against a little self-hosted instance isn’t really a concern I’d have. I’d be more concerned with the scraping of private information, ransomware, password compromises, things of that nature. If you keep your edge devices on the latest security patches and you are cognizant of what you are exposing and how, you’ll be fine.
Self-hosting can save a lot of money compared to Google or AWS. Also, self-hosting doesn’t make you vulnerable to DDoS; you can be DDoSed even without a home server.
You don’t need VLANs to keep your network secure, but you should make sure that any self-hosted service isn’t unnecessarily opened up to the internet, and make sure that all your services are up to date.
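If the services run in Docker, one easy habit is binding container ports to localhost, so only a reverse proxy or VPN on the same host can reach them (a sketch; the service name and port are examples):

    # docker-compose.yml snippet
    services:
      immich-server:
        ports:
          - "127.0.0.1:2283:2283"   # reachable from this host only, not the LAN or internet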
What services are you planning to run? I could help suggest a threat model and security policy.
Getting DDoSed or hacked is very, very rare for anyone self-hosting. DDoS doesn’t really happen to random people hosting a few small services, and hacking is also rare because it requires that you expose something with a significant enough vulnerability that someone has a way into the application and potentially the server behind it.
But it’s good to take some basic steps, like the isolated VLAN you’ve mentioned already, but also don’t expose services unless you need to. Immich, for example, will work just fine without being exposed to the internet if it’s just you using it.
Why would anyone DDoS you? DDoS costs money and/or effort. No one is going to waste that on you. Maybe DoS, but not DDoS. And the troll will go away after some time as well; there’s no gain in DoSing you. Why would anyone hack your static website? For the lulz? If everything is HTTPS-encrypted on your local net, how does a hacker infect everything on your network?
DDOS can happen just from a script hammering on an exposed port trying to brute force credentials.
Then block them; there are tools that restrict that kind of abuse.
You can. I am lucky enough to not have been hacked after about a year of this, and I use a server in the living room. There are plenty of guides online for securing a server. Use common sense, and also look up threat modeling. You can also start hosting things locally and only host to the interwebs once you learn a little more. Basically, the idea that you need Cloudflare and AWS to not get hacked comes from misleading marketing.
Man, if you’re lucky after a year, I must be super duper lucky with well over a decade.
It’s very possible. If you carefully manage your attack surface and update your software regularly, you can mitigate your security risks quite a bit.
The main problem is going to be email. I have found no reliable way to send email that does not start with “have someone else do it for you” or “obtain an IP block delegation”.
Email isn’t that hard when you have a static IP, either from your network provider or via a VPS. Then set up SPF, DKIM, and DMARC and you’re good to go (at least for simple use cases like notifications; when you want to send out thousands of emails, you might need more).
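For reference, all three are plain DNS TXT records; a sketch with a placeholder domain, IP, and DKIM key:

    example.com.                  TXT  "v=spf1 ip4:203.0.113.10 -all"
    mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-from-your-mail-server>"
    _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"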
I host a handful of Internet facing sites/applications from my NAS and have had no issues. Just make sure you know how to configure your firewall correctly and you’ll be fine.