To expand my knowledge on this topic, generally, after I finish reading this type of content, I copy the article link, put it in an LLM and prompt it:
"here's an article on 'topic name/article title': https://article.link. Grasp it, analyze it then expand each section mentioned from your own knowledge. Add additional sections relative to the subject"
OVH VPS - 24 vCPU (threads), 96 GB RAM for $53.40/month.
Hetzner VPS [1] - 16 vCPU, 32 GB RAM, $54.90/month.
DO Droplet - General Compute, Regular CPU, 16 vCPU, 64 GB RAM, $504/month.
Linode - 20 vCPU, 96 GB RAM, $576/month.
Upcloud - 24 vCPU, 96 GB RAM, $576/month.
I don't know what CPU OVH is using, because all the others are AMD EPYC or newer Intel Xeon. But the pricing difference is so great that even if they were Intel E-core CPUs, it would still be a pretty damn good deal.
[1] There is a cheaper option with Intel vCPUs, but that hardware is older and only becomes available when other customers cancel their plans and free up slots. So only the newer AMD option is used for comparison.
Well, Hetzner's "VPS" [1] is more like the "Cloud" [2] from OVH rather than OVH's VPS [3] (the VPS has no per-hour pricing, cannot be instantly deployed, etc.).
Not that their pricing isn't really, really good, but it depends on your use case. DO / Linode / Upcloud / EC2 / etc. do have insane pricing in comparison, yes.
[1] https://www.hetzner.com/cloud/
[2] https://www.ovhcloud.com/en-ca/public-cloud/prices/
[3] https://www.ovhcloud.com/en-ca/vps/
How are OVH and Hetzner like an order of magnitude cheaper than everyone else? Maybe with a lot of sharing for VPSes it's understandable, but they also sell dedicated for super cheap...
Is it a honeypot? Also, did OVH change prices recently? I remember checking a couple of years ago and it was more expensive vs. Hetzner.
Don't know about OVH (it might be a very similar story?), but Hetzner is from my region and I've known the brand since back in the 1990s. The difference from most (all?) large American hosting services is that they never went through some big "spend now to earn later" investment scale-up, where costs just don't matter as long as there is some growth to handwave them away; they have come to where they are now through continuous bootstrapping. The same applies to hundreds of much smaller hosters, but few (none?) reach anywhere close to Hetzner's economy of scale.
I can't talk about Hetzner, but re OVH, they are absolutely not a honeypot.
Most of the SMEs in France are customers.
They are cheap because they do most things in-house, with a lot of recycling, because their DCs are mostly located in low-cost places (real estate, rents, salaries...), and because they go for low margins.
Hetzner has a very bespoke setup. Their DCs mostly run on their own renewable power sources and have been refined to the limit. Combined with recycling hardware for longer periods, not using server chassis or off-the-shelf components, and a highly bespoke racking setup, it makes for mass scale at a very low cost.
OVH has a similar setup but is far more diversified into other product lines. I'd personally never touch them after the fire that they never bothered to explain to those of us affected by it. With the amount of downtime they had there, it was made very clear that their ability to recover from a situation (any situation) is crap.
Yes, OVH changed their VPS offer and pricing around this summer. They just became very competitive, on top of leading the way in making their data centers (really) carbon-neutral.
Not using server-grade hardware. One could argue server-grade hardware isn't worth the premium (Ryzen vs. EPYC, ECC memory, server-grade SSDs, power supplies, etc.); that's up to the customer to decide. If you look at their dedicated offerings, they aren't really super cheap; there are plenty of other dedicated servers out there that go for similar pricing. The difference is that those companies only offer dedicated options and don't provide the range of VPSes OVH and Hetzner offer.
Custom hardware, down to the DC design, racks, water cooling, and economy of scale. There are reasons why some datacenters are more expensive than others, and the fire at the previous OVH DC shows why. (Although I remember OVH did explain they don't use that design anywhere else.) Doing custom hardware parts like water cooling with racks isn't the rocket-science part; doing it well while doing it cost-efficiently is the most difficult part.
Network quality. OVH owns its own network, laying cables across its own DCs along with other exchanges. It used to be slower, but this has become less of an issue in 2025. In the old days, the difference between a premium network connection and the commodity partners a DC offered made a lot of difference. (It still does, but it's less of a concern.)
Minimal support. Although that is not a concern anymore in 2025, because everyone got used to cloud computing, which has zero support most of the time.
Expectation of low margin. I think both Hetzner and OVH have accepted the fact that they are in a commodity computing business with low margins and aim for volume, while most US businesses will always try to improve their margins and venture into SaaS or other managed services. Which means both Hetzner and OVH are also experts at squeezing pennies out of everything. As someone who used to work in a commodity business, I have a lot of respect for these people, as such businesses are harder than most people think.
Again, these are things off the top of my head from when I was keeping an eye on VPSes. I just checked: LowEndBox ( https://lowendbox.com ) is still alive and well after almost 20 years! Before cloud computing was a thing or went mainstream, there were plenty of low-cost, low-end VPS options like OVH and Hetzner. So this isn't exactly new; they just happen to have grown to their current size.
On the hardware side of things not using server grade stuff really isn't as big of a deal these days. I'd happily take a decent Ryzen 5 or 7 series over a "new" Xeon that has twice the power consumption and mysteriously the same specs as an older Xeon made a decade ago.
Even ECC: for 99% of applications (and especially on low-end VPS servers), its absence is unlikely to be a problem.
The only issue I have found with Hetzner is on dedicated servers, specifically the hard drives. I've had new servers provisioned where they've given me decade-old drives on the verge of failure. It's less of an issue now, as most of their servers ship with new NVMe drives, but I dare say in 3-4 years' time it'll be a problem again when they reuse those, with instant non-recoverable failures for some of the hardware range.
Agreed, it is definitely less of an issue. Higher core counts also used to be exclusive to Xeon and EPYC (or Opteron), but desktop CPUs have caught up and now offer up to 32 threads for around $600.
Although in 2025, instead of people using Ryzen for servers, AMD launched EPYC "Grado", which is similar in price to (if not slightly cheaper than) Ryzen at 32 threads and offers official ECC memory support.
It's important to note that all these CPUs are likely shared, and they don't tell you how heavily.
Hetzner has two servers with the same number of cores, but one costs only half as much. They don't say this anywhere, but if you test the performance, you indeed get only half as much on the cheaper server.
I've had the same experience. Hetzner's ARM VPS servers have been noticeably better than even their own AMD and Intel ones (the Intel ones are awful and clearly running on old customer hardware).
OVH was not great for me at the previous startup. The virtual network card in an API server would detach every night at somewhat unpredictable times.
OVH support response times were atrocious, multiple days of waiting until weeks later it was escalated.
They never figured it out, just suggested spinning a new server. By that point I had already migrated, but it was a bit scary since it was my first time managing infrastructure.
Just anecdata :) maybe buy a support plan if they have it.
Would you not want a server that's nearer to you? For example, looking on serversearcher.com, 4GB/2 vCPU is ~$70-80/year from e.g. Clouvider, and you get to choose from around 7 US cities.
Don't do this; just create a new user and give it sudo privileges.
The utility of changing the SSH port is debatable, but it would lead to less noise in logs. Also, instead of limiting SSH connections to a source IP, you might consider putting the server behind Tailscale and only allowing incoming SSH connections over its interface: https://tailscale.com/kb/1077/secure-server-ubuntu (this also solves the logs problem)
And so, instead of having an open port for SSH with (ideally) certificate-only authentication and optionally MFA, you trade it for an open port for Tailscale/WireGuard, handing over "all" your data to a company that offers you a service for no monetary compensation.
Also, why do you think that it is better to not change the root password? It sounds like a very suspicious recommendation.
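For anyone following along, the new-user-plus-no-root-login setup being debated here boils down to a few commands. This is a sketch, assuming a Debian/Ubuntu image where sshd reads drop-ins from sshd_config.d; the user name "deploy" and the drop-in path are just examples:

```shell
# Create an unprivileged admin user and lock root out of SSH. Run as root.
adduser --disabled-password --gecos "" deploy
usermod -aG sudo deploy

# Reuse the key the provider installed for root, so your existing key still works.
install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
install -m 600 -o deploy -g deploy /root/.ssh/authorized_keys /home/deploy/.ssh/authorized_keys

# Disable root login and password auth in one drop-in file.
cat > /etc/ssh/sshd_config.d/10-hardening.conf <<'EOF'
PermitRootLogin no
PasswordAuthentication no
EOF
sshd -t && systemctl reload ssh   # validate the config before reloading
```

Keep your current session open and confirm `ssh deploy@server` works before logging out; on some distros the service is named `sshd` rather than `ssh`.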
I clicked the article because I wanted to hear about Coolify, but it's not mentioned at all beyond the article tag, intro, and closing statements. I don't think Coolify should be mentioned at all.
This article is really about preparing a VPS for Coolify deployment, but stops short of Coolify setup AFAICT
I only clicked this to see if Coolify could be a compelling option against my current setup, of using Docker Compose for everything on my VM (including a private Docker registry for my images, and a Traefik frontend proxy to route it all).
Zero actual mention of Coolify, and the manual steps to PREPARE for it seem far more complicated than, "Just base your VM on the Docker Compose base image, and then tweak a couple things".
I'll stick with what I have. Nice advantage is that I can migrate from host to host and 99% of it is just copying the Docker Compose YAML file.
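For reference, the kind of single-file setup the parent describes can be sketched roughly like this; the images, domain, and label wiring are illustrative placeholders, not a tested config:

```yaml
# docker-compose.yml - the whole deployment in one file.
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: registry.example.com/myapp:latest   # pulled from the private registry
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`app.example.com`)
      - traefik.http.routers.myapp.entrypoints=web
```

Migrating hosts is then mostly copying this file and the named volumes across, plus pointing DNS at the new box.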
I tried it a few months back but as soon as you want a project that has multiple containers using compose all sorts of issues start popping up.
Like it "forgets" which containers it started and then can't stop them any more or now you have 2 containers of the same service running even though coolify only recognizes one.
I think if you do register each service separately in coolify it runs OKish.
But I've now switched to the same setup as you had and ironically it has been so much simpler to run than coolify.
I'm really happy people are working on projects like coolify, but currently it's far from ready for any serious use (imo).
Until coolify and similar projects support DB backups with streaming replication, it will just remain as a hobby project and won’t be used for anything customer facing.
Docker Compose and a bash script are all I need to run 2 VMs, with hourly backups to S3 + WAL streaming to S3 + PG and Redis streaming replication to another VM. That is the bare minimum for production.
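A minimal sketch of the hourly-dump half of that (the bucket, database name, and paths are placeholders; the WAL-streaming half needs a proper tool such as wal-g or pgBackRest wired into Postgres's archive_command, which this does not show):

```shell
#!/usr/bin/env bash
# backup.sh - dump Postgres and push the dump to S3; run hourly from cron.
set -euo pipefail

BUCKET="s3://my-backups"              # placeholder bucket name
STAMP="$(date -u +%Y%m%dT%H%M%SZ)"
DUMP="/tmp/mydb-${STAMP}.dump"

pg_dump -Fc -f "$DUMP" mydb           # custom format is already compressed
aws s3 cp "$DUMP" "${BUCKET}/pg/$(basename "$DUMP")"
rm -f "$DUMP"

# crontab entry (hourly, on the hour):
# 0 * * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```

Dumps alone give you point-in-time-of-the-dump recovery only; the WAL streaming is what closes the gap between dumps.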
Coolify uses Traefik and Docker under the hood and is really just a UI for it. It's definitely missing some critical backup features (solvable through restic or similar) and the UX is... good enough but no better.
Coolify still requires root for installation, though they have a branch that doesn't that they're working on.
So you can just ssh in and do the coolify install and then switch off root login I guess, if you're willing to just blow away the server and start over if you ever needed to ssh in again.
I tried a from-scratch Coolify deploy recently and it kept failing with SSH key errors. On the other server, we have it working and deploying many projects; however, the "just give it a docker compose" method has never worked for us.
Coolify and friends (Dokploy?) look like nice tools. But I am not very comfortable with them because the state of my server(s) isn't present in code. So, I like NixOS or Ansible more but then they require a bunch of boilerplate and custom infrastructure for setting up production.
Anyone know some infrastructure-as-code framework that makes it easy to spin up and maintain production servers? Something declarative, perhaps, but not Kubernetes?
I’ve been working on doing this with Coolify. There are very few Coolify settings to back up, and all the application configs are stored in /data/coolify. And I use kopia to back up all the volumes. It’s not pretty, and a little hacky, but workable for disaster recovery.
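On the declarative side of that question: the server layer at least can live in code, since Hetzner has an official Terraform provider. A sketch, with the names, plan, and location as illustrative values:

```hcl
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

variable "hcloud_token" {
  sensitive = true   # pass via TF_VAR_hcloud_token
}

provider "hcloud" {
  token = var.hcloud_token
}

resource "hcloud_server" "app" {
  name        = "app-1"
  server_type = "cx22"       # plan names change; check current availability
  image       = "debian-12"
  location    = "fsn1"
}
```

This captures the machines, not the app state Coolify manages, so it only answers half the question.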
I recently migrated one of my FreeBSD servers to hetzner and it was a breeze. The only wrinkle was that, until you've completed a billing cycle, you can't host an email server as the required ports are blocked.
For me this was fine and I understand why they do this but it wasn't clear to me at the start.
You can ask and explain to them what kind of traffic you'll have. I've shown them the project I'm migrating, and they've opened ports for me right from the start.
Note that if your credit card expires, Hetzner will just turn off networking to your stuff until you fix it. No warnings given, and you'll find out when your alerting/customers/staff contact you to let you know something is wrong.
You can pre-charge your account to give yourself a buffer in case your payment method doesn't work for whatever reason, although it requires a bank transfer.
While I guess that's useful, when my CC expired other places sent reminders/warnings which is the standard business approach.
It was only Hetzner which didn't, and instead they turned off networking to all of our stuff (dedicated servers, some VMs, etc) with no warning. Then their support team screwed us around for a while as well.
I'm about as unimpressed with them as it's possible to get. :(
The standard business approach is to update card details before the card expires, instead of relying on service providers sending warnings when payments are already failing.
Sure. In this particular case it was "expired" early due to some random place guessing the number and the bank rightfully taking precautions.
I updated all of the places I remembered, but missed Hetzner and a few others. Only Hetzner didn't have their shit together enough to gracefully notify us. Or account support staff who were at all interested in assisting.
There are multiple warning levels and you should get email notifications. I happened to overlook those as well and also only noticed when they turned off networking. However, that was two weeks after the invoice was due, and it got unblocked seconds after the payment went through.
I assumed I'd missed warnings as well, but when I actually checked (after fixing the issue, because priorities) there were indeed no warning emails/sms/etc at all sent.
Literally, no kind of notification, warnings, anything at all. Due to this, and their support team being incredibly unhelpful during the outage, they're now on my personal blacklist for literally everything.
So instead of strongly recommended them, which I used to do, we've migrated 95% of everything off Hetzner and I'm hanging out for it to be 100%. And I warn others away from them at every opportunity. Like here. :)
I was using a pre-charged account while waiting for a new bank account and credit card. I wasn't even hosting any VPS for a month or so, but Hetzner closed my account with no explanation, and I never got my money back. F*ck them, thieves.
Great guide, but I disagree on the firewall settings, especially when using Hetzner.
If you only need this simple configuration, their firewall solution is more than enough and does a great job of "outsourcing" the problem.
The guide mentions that Hetzner was chosen over other providers and platforms because they didn’t wish to get tied into a whole ecosystem, and could take this setup and move it more or less anywhere.
You can always reset stuff from the Hetzner dashboard. But yes, rather than locking it down to some dynamic residential IP, it would be better to set up something like Tailscale, or to have a VPN with a dedicated static IP.
The only thing you really need to do with SSH is to use keys with it, not passwords. That should be secure enough for almost all cases.
Another layer on top is useful to remove the noise from the logs. And if you have anything besides SSH on the server that doesn't need to be public, restricting it via a VPN or something like that is useful anyway. Most other software listening on your server likely has a much bigger attack surface than SSH.
Yeah, agreed. It's dangerous; lots of people have dynamic IPs at home. Once you have set up SSH keys and disabled root login, you should be good to go.
Agreed. You should assume you have a dynamic IP unless you’ve specifically arranged for a static one. It’s a “business” feature where I live at least, so personal internet connections will be dynamic.
> 2–3x cheaper for the same specs compared to DO/AWS
specs != performance
When I was looking for a hobby cloud provider, I did some benchmarking of similarly spec'd instances. Note that the degree of overcommitting CPU/RAM varies by cloud provider and instance type. I found Vultr to be the most consistently faster than DO. I had used OVH in the past and wasn't interested. I also didn't consider Hetzner because it seemed unlikely they could match performance at their prices. I later saw other benchmarking that showed Vultr as being one of the fastest. That was quite some time ago and I haven't checked lately, but also have no reason to switch.
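For anyone wanting to repeat that kind of comparison, sysbench is a common starting point; the absolute numbers only matter relative to the other instance, and the fileio results in particular depend heavily on host caching and noisy neighbors:

```shell
# Rough CPU / memory / disk throughput comparison between two VPS instances.
# sysbench must be installed (apt install sysbench or similar).
sysbench cpu --threads="$(nproc)" --time=30 run    # compare events/sec
sysbench memory --threads="$(nproc)" run           # compare MiB/sec

# Disk: prepare test files, run a random read/write mix, then clean up.
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw run
sysbench fileio --file-total-size=4G cleanup
```

Run each test a few times at different hours; overcommitted hosts show up as high variance, not just low averages.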
Big cloud providers (AWS, Azure, GCP) are great for all the managed ecosystem; if you mostly need only raw computing (CPU, memory, bandwidth), then a provider like Hetzner makes a lot of sense (plus they have an API and basic services like LB/firewall and object storage).
We at SadServers moved from big cloud managed K8s to Hetzner + Edka and it's an order of magnitude cheaper (obv some perks are missing).
I’ve seen you make this response to a couple different threads, and I wonder what you mean by it.
Are you just hoping to gain more insight on the differing proposed technologies and waiting for someone to give you more information, or are you expressing frustration that people have their own opinions on which layers to use for their own setups?
If you’re simply asking for information on how to use docker, and how to adapt TFA to include it, you’re in luck. One can find many tutorials on how to dockerize a service (docker’s own website has quite a lot of excellent tutorials and documentation on this topic), and plenty of examples of how to harden it, use SSL, et cetera. This is a very well trodden path.
That said, I’m tempted to read your response with the latter interpretation and my response would be to observe that holding a different opinion on something isn’t inherently ungrateful, or rude, nor is it presumptuous to share that one would, say, recommend dockerizing the production app instead of deploying directly to the server.
That’s the nature of discourse, and the whole reason why hacker news has a comment section in the first place. A lovely article such as TFA is shared by someone, and then folks will want to talk about it and share their own insights and opinions on the contents. Disagreeing with a point in the article is a feature, not a bug.
You are reading too much into me. I am a noob and am interested in an opinion about a good tutorial. As you mentioned, I also asked on another thread and that dude was very friendly. Not so much luck here, it seems; people even downvote me. Well, it's their karma.
I mean, look at you, how pathetic you behave. Instead of answering my simple question with a link of your choice, you are writing down five paragraphs accusing me of whatnot. Learn to answer simple questions with simple answers. And learn to ask a question when you have one. You would not believe how much simpler your life becomes.
And the anonymous down voters? An even weaker group of humans than you. Unimportant people who can do nothing else than stop others. For everything else, their energy is too low.
Since this is a beginner's guide, I would mention the docker/ufw pitfall [0] when publishing container ports. Many a container has been erroneously exposed to the public net because of this.
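Concretely: published ports bypass ufw because Docker writes its own iptables rules, so the usual mitigation is to bind anything non-public to loopback and let a reverse proxy be the only public entry point. Service and image names here are placeholders:

```yaml
services:
  app:
    image: myorg/app:latest        # placeholder image
    ports:
      - "127.0.0.1:8080:80"        # loopback only; not reachable from outside
  # A reverse proxy (Caddy, Traefik, nginx) publishes 80/443 and forwards
  # to the app over loopback or the compose network.
```

There is also the ufw-docker project, which patches ufw's after.rules to cover Docker's chains, if you need the ports filtered rather than hidden.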
You pay for the privilege of paying for what you use - every second of CPU time when a lambda is running is marked up immensely versus the same second of compute on bare metal or even a VPS. So your workload needs to be sufficiently "duck curved", parabolic, or erratically spiky in order to _actually_ make cost savings on compute.
The personnel matter is harder to quantify. But note that the need for infra skills didn't go away with cloud. Cloud is complicated; you still need people who understand it, and that still costs money, be it additional skills spread across developers or dedicated cloud experts, depending on organisation size. These aren't a far cry from sysadmins. It really depends on the skillset of your individual team. These days, traditional hosting has become so much easier with so much automation that it's not as specialist a skill, or as time-consuming or complicated, as many people think it is.
Cloud _can_ be cheaper, but you need the correct mix of requirements and skills gap to make it actually cheaper.
Thanks for sharing this! I have been using a Hetzner VPS + Coolify setup for personal projects for around a year and it has been a great Heroku-like experience and very easy on the wallet. I originally found out about both Hetzner and Coolify from this 1.5 hour guide on getting started from the Syntax podcast: https://www.youtube.com/watch?v=taJlPG82Ucw
Hetzner is great, but it has some minor region problems and SLO issues, so you want to have a fallback to degrade gracefully.
I set my clients up with Hetzner for the core, and front it with Cloudflare. You can front KEDA scaled services with Cloudflare containers and you're pretty much bulletproof, even if Hetzner shits the bed you're still running.
But the real issue is that the price is a bit of a red herring: the CX22 plan is not available everywhere (only in the old datacenters in Europe, I think), and if you need to scale up your machine you can't use the bigger Intel plans (CX32, CX42, etc.) because they have been unavailable for a long time. You have to move either to AMD-based plans (CPX31, etc.), which cost almost double for the same amount of RAM, or to Arm64-based plans.
You could also sign up for System Initiative, enter your hetzner credentials, connect an ai agent, and tell it what you want to do, and iterate your way there.
It's pretty amazing how well it works and how much you learn in the process.
I love these blogs. Making infra wherever it is or however it's done seems to be a lost art.
I only know a little bit about what Google does to secure the VMs and hypervisors and that the attitude several years ago was that even hardened VMs weren't really living up to their premise yet.
When using one of these cost-focused providers do people typically just assume the provider has root in the VM? I sometimes see them mentioned in the context of privacy but I haven't seen much about the threat model.
Yes, I think you have to; to an extent the same also applies to dedicated servers. Even if you own a server that you place in a colo, they can still pull your drives or plug in a KVM.
If your data is sensitive, encrypt it locally and then send it. The reality is that most people are running something like a website, API, or SaaS, and basically just have to have a provider they trust somewhat while taking reasonable security precautions themselves. Beyond that, it's probably not as secure as it could be unless it's in a facility you own or control access to.
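The "encrypt it locally and send it" step doesn't need special tooling; here's a self-contained demo with openssl (the file names and throwaway key are illustrative):

```shell
# Demo of "encrypt locally, then upload": the provider only ever sees ciphertext.
echo "pretend this is a database dump" > backup.sql
openssl rand -hex 32 > backup.key        # throwaway key for the demo

openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in backup.sql -out backup.sql.enc -pass file:./backup.key

# ...upload backup.sql.enc; keep backup.key somewhere NOT on the VPS...

openssl enc -d -aes-256-cbc -pbkdf2 \
  -in backup.sql.enc -out backup.sql.restored -pass file:./backup.key
cmp backup.sql backup.sql.restored && echo "round-trip OK"
```

Tools like restic, borg, or kopia do the same thing with deduplication and key management built in; this is just the bare-bones version.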
That's correct. I wouldn't think of it as a VM (a container) though but rather as a server which happens to be virtual. Yes, that's literally just a different word for the same thing but the different emphasis affects thought patterns. For all intents and purposes, from the buyer's perspective, a VPS is a small server, not a different type of thing.
It's true you shouldn't put super sensitive data on a VPS because the host could access it. Regular sensitive is fine - your host will be in a world of trouble if they access your data without permission, so you can generally trust them not to read your emails or open your synced nudes. But if your data is so sensitive that the host would risk everything to read it, or would avoid getting in trouble at all (e.g. national security stuff) then absolutely don't use a VPS. For that level of paranoia you'd need at least a dedicated server which makes it unlikely the host has a live backdoor into the system, ideally your own server so you know they don't, and for super duper stuper paranoid situations, one with a chassis intrusion switch linked to a bag of thermite (that's a real thing).
One of the big things that is actually stopping me from migrating to, say, Hetzner is the fact that our infrastructure is coded in CDK. I don't want to sit and do deploys and teardowns manually anymore. Do Hetzner, Coolify, etc. support a CDK-style IaC provider? What is the learning curve?
There are many upvotes so clearly people like the guide. Congrats on documenting something useful!
Is anyone else immediately turned off by articles like this written in "ChatGPT voice"? The information in the guide might be good, but I didn't make it past the introduction.
I've been burned too many times by LLM-slop. If an article is written in ChatGPT voice, it might still have good content but your readers don't know that. Editing for style and using your own voice helps credibly signal that you put effort into the content.
Hetzner has been a very reliable provider for our hosting. We combine it with Cloud 66 for server hardening and deployment automation at a fraction of the cost of a PaaS
Check out the wide breadth of tuts provided by Digital Ocean. This is just one post, misleadingly titled at that, whereas DO has LOADS of excellent and clearly explained tuts.
Hetzner is one terrible company to do business with and I wouldn't recommend their shit client service to anyone. I tried to make data backup work with one of their low cost storage boxes only to have them entirely block my nascent account, demand I hand over ID copies for identity verification and even take a photo of myself to make sure my face matches. Really? Who the fuck are you to demand this? Why don't I go to Wasabi or Backblaze B2 and just... pay for shit to receive it reliably, with no further problems.
I have seen that they do this very frequently, to many people, for all kinds of convoluted reasons, and they often block accounts that have been running for years because the owners fail to satisfy the requirements of such an out-of-the-blue demand (without being told why they didn't comply well enough).
For example, the Reddit page for Hetzner has no shortage of desperate clients suddenly blocked, and trying to read the corporate runes of this company's policies and whatever means of appealing can be improvised, just so they can regain access to some service they'd come to depend on.
Imagine depending on that for your personally important backend infrastructure or data backup. No thanks, fuck them.
Great summary for beginners like me! Definitely bookmarking it.
One negative, however, is that the author didn't mention Coolify in the article even though it's stated in the title :(
Another good article on the same topic that I have already bookmarked is: Setting up a Production-Ready VPS from Scratch (https://dreamsofcode.io/blog/setting-up-a-production-ready-v...)
In addition, I can wholeheartedly recommend this video tutorial that guided me through setting up Coolify for the first time: https://www.youtube.com/watch?v=taJlPG82Ucw
Been running this setup for about a year now, and it's the first time I am actually self-hosting and feeling fairly confident about it.
OVH is just as reliable as Hetzner, and right now they have a much cheaper offer: https://us.ovhcloud.com/vps/configurator/?planCode=vps-2025-...
Aside from that, which distro would you choose for Coolify? I’m debating between Ubuntu 24.04 and Debian 13.
OH Wow.
Shame OVH has no availability in North America (except Canada)
Why not get a dedicated server from OVH/Hetzner at that point?
Cost is $153 per YEAR (not monthly) for an 8 vCore + 24 GB RAM + 200 GB NVMe SSD VPS @ OVH.
A VPS provides some advantages, for example snapshotting.
Talk about bloat. American SaaS providers are paid too much.
Hetzner has a very bespoke setup. Their DC's mostly run on their own renewable power sources and have been refined to the limit, combined with recycling hardware for longer periods, not using server chassis or off the shelf components, and a highly bespoke racking setup and it makes for mass scale at a very low cost.
OVH has a similar setup but is way more diversified into other product lines. I'd personally never touch them after the fire that they never bothered to explain to those of us affected by it. With the amount of downtime they had there it made it very clear that their ability to recover a situation - any situation is crap.
Yes, OVH changed their VPS offer and pricing around this summer. They just became very competitive, on top of leading the way in making their data centers (really) carbon-neutral.
Not using Server Grade Hardware. Although one could argue Server Grade Hardware are not worth the premium, that is up to its customer to decide i.e Ryzen vs EPYC. ECC Memory, Server Grade SSD, Power Supply, etc. If you look at their dedicated they aren't really super cheap, there are plenty of other dedicated server out there that goes for similar pricing. The difference is that those companies only offer dedicated options and dont provide the range of VPS OVH and Hetzner offers.
Custom hardware, down to the DC design, racks, water cooling, and economy of scale. There are reasons why some datacenters are more expensive than others, and the fire at the old OVH DC shows why. Although I remember OVH did explain they don't use that design anywhere else. Doing custom hardware like water-cooled racks isn't the rocket-science part; doing it well while keeping it cost-efficient is the difficult part.
Network quality. OVH owns its own network, laying cables across its own DCs along with other exchanges. It used to be slower, but this has become less of an issue in 2025. In the old days, though, the difference between a premium network connection and the commodity partners a DC offered made a lot of difference. (It still does, but it is less of a concern.)
Minimal support - although that is less of a concern in 2025, because everyone has gotten used to cloud computing that has zero support most of the time.
Expectation of low margins. I think both Hetzner and OVH have accepted that they are in the computing-commodity business with low margins and aim for volume, while most US businesses will always try to improve their margins and venture into SaaS or other managed services. That also means both Hetzner and OVH are experts at squeezing pennies out of everything. As someone who used to work in a commodity business, I have a lot of respect for these people; it is harder than most people think.
Again, these are things off the top of my head from when I was keeping an eye on VPSes. I just checked that LowEndBox ( https://lowendbox.com ) is still alive and well after almost 20 years! Before cloud computing was a thing, or went mainstream, there were plenty of low-cost, low-end VPS options like OVH and Hetzner. So this isn't exactly new; they just happen to have grown into their current size.
On the hardware side of things not using server grade stuff really isn't as big of a deal these days. I'd happily take a decent Ryzen 5 or 7 series over a "new" Xeon that has twice the power consumption and mysteriously the same specs as an older Xeon made a decade ago.
Even ECC - for 99% of applications (and especially on low-end VPS servers) it's less likely to be a problem.
The only thing I have found to be an issue with Hetzner is on dedicated servers, specifically the hard drives. I've had new servers provisioned where they gave me decade-old drives on the verge of failure. It's less of an issue now, as most of their servers ship with new NVMe drives, but I dare say in 3-4 years it'll be a problem again when they reuse those and some of the hardware range sees instant non-recoverable failures.
Agree, it is definitely less of an issue. Higher core counts also used to be exclusive to Xeon and EPYC (or Opteron), but desktop CPUs have caught up and now offer up to 32 vCPU for $600.
Although in 2025, instead of letting people use Ryzen for servers, AMD launched EPYC Grado, which is similar to, if not slightly cheaper than, Ryzen at 32 vCPU and offers official ECC memory support.
It's important to note that all these CPUs are likely shared, and they don't tell you by how much.
Hetzner has two servers with the same number of cores, but one costs only half as much. They don't say this anywhere, but if you test the performance, you indeed get only half as much on the cheaper server.
Hetzner cloud servers perform a lot better than ovh vps from my (limited) experience, ymmv though. (happy customer of both)
I've had the same experience. Hetzner's ARM VPS servers have been noticeably better than even their own AMD and Intel ones (the Intel ones are awful and clearly running on old customer hardware).
OVH was not great for me at the previous startup. The virtual network card in an API server would detach every night at somewhat unpredictable times.
OVH support response times were atrocious, multiple days of waiting until weeks later it was escalated.
They never figured it out, just suggested spinning a new server. By that point I had already migrated, but it was a bit scary since it was my first time managing infrastructure.
Just anecdata :) maybe buy a support plan if they have it.
Would you not want a server that's nearer to you? For example, looking on serversearcher.com, 4GB/2 vCPU is ~$70-80/year from e.g. Clouvider, and you get to choose from around 7 US cities.
That link leads me to a VPS for $15 per month. Hetzner has VPS for €3.60 per month.
https://www.ovhcloud.com/en/vps/
that's quite the deal. i casually clicked through expecting not much... i was wrong!
Except when their datacenter burns down...
How is that different than Hetzner for a VPS though? As far as I'm aware a Hetzner VPS won't automatically fail over to a different region either.
I guess the joke is that OVH lost a lot of customer data in a big fire in 2021 (30k servers/blades AFAIK).
Not a real issue if you design for HA. E.g. servers are in different AZ. Replica storage etc.
They have placed different data centers very close so it might not be enough
Guess that is why bigger clouds cost more. Partly! No free lunch.
Or you can use a more reliable host like Hetzner.
it is reliable until they decide to close your account for no reason
I hear stories like this on every provider.
Turning these two css settings off improved the UI/UX of the blog a thousand times:
pre { margin: 2rem 0 !important; padding: 1rem !important; }
Each code block has such giant padding and margins that you can only read 3 lines of text in a viewport.
Also, I would suggest installing Webmin/Virtualmin which takes care of a lot of issues like deploying new subdomains or new users.
> Change root password
Don't do this; just create a new user and give it sudo privileges.
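For anyone following along, a minimal sketch of that on a Debian/Ubuntu box (the user name `deploy` is just a placeholder, not from the article):

```shell
# Create a non-root admin user instead of relying on the root account.
adduser deploy                 # prompts for a password
usermod -aG sudo deploy        # grant sudo privileges

# Install your SSH key for the new user, then disable root login
# and password auth in /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
systemctl restart ssh
```

Test the new login in a second terminal before closing your root session.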
The utility of changing the SSH port is debatable, but it would lead to less noise in logs. Also, instead of limiting SSH connections to a source IP, you might consider putting the server behind Tailscale and only allowing incoming SSH connections over its interface: https://tailscale.com/kb/1077/secure-server-ubuntu (this also solves the logs problem)
And so, instead of having an open port for ssh, (ideally) with certificate-only authentication, optionally MFA, you trade it for an open port for tailscale/wireguard, handing over "all" your data to a company who is offering you a service for no monetary compensation.
Also, why do you think that it is better to not change the root password? It sounds like a very suspicious recommendation.
I clicked the article because I wanted to hear about Coolify, but it's not mentioned at all beyond the article tag, intro, and closing statements. I don't think Coolify should be mentioned at all.
This article is really about preparing a VPS for Coolify deployment, but stops short of Coolify setup AFAICT
I only clicked this to see if Coolify could be a compelling option against my current setup, of using Docker Compose for everything on my VM (including a private Docker registry for my images, and a Traefik frontend proxy to route it all).
Zero actual mention of Coolify, and the manual steps to PREPARE for it seem far more complicated than, "Just base your VM on the Docker Compose base image, and then tweak a couple things".
I'll stick with what I have. Nice advantage is that I can migrate from host to host and 99% of it is just copying the Docker Compose YAML file.
I tried it a few months back, but as soon as you want a project that has multiple containers using Compose, all sorts of issues start popping up. Like it "forgets" which containers it started and then can't stop them anymore, or you end up with two containers of the same service running even though Coolify only recognizes one.
I think if you do register each service separately in coolify it runs OKish.
But I've now switched to the same setup as you had and ironically it has been so much simpler to run than coolify.
I'm really happy people are working on projects like coolify, but currently it's far from ready for any serious use (imo).
Until coolify and similar projects support DB backups with streaming replication, it will just remain as a hobby project and won’t be used for anything customer facing.
Docker Compose and a bash script are all I need to run 2 VMs, with hourly backups to S3 + WAL streaming to S3 + PG and Redis streaming replication to another VM. That is the bare minimum for production.
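The hourly-dump half of that can be sketched roughly like this, assuming `pg_dump` and the AWS CLI are installed; the database and bucket names are hypothetical. WAL streaming itself is a separate mechanism (e.g. `pg_receivewal`, or an `archive_command` that ships segments to S3):

```shell
#!/bin/sh
# Hourly logical backup of one database to S3, run from cron.
set -eu
STAMP=$(date -u +%Y%m%dT%H%M)
pg_dump -Fc mydb > "/tmp/mydb-$STAMP.dump"
aws s3 cp "/tmp/mydb-$STAMP.dump" "s3://my-backup-bucket/pg/mydb-$STAMP.dump"
rm "/tmp/mydb-$STAMP.dump"
```

Tools like wal-g or pgBackRest bundle both the base backups and the WAL shipping if you'd rather not script it yourself.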
Any pointers on how you run the backups and WAL streaming?
Coolify uses Traefik and Docker under the hood and is really just a UI for it. It's definitely missing some critical backup features (solvable through restic or similar) and the UX is... good enough but no better.
Coolify still requires root for installation, though they have a branch that doesn't that they're working on.
So you can just ssh in and do the coolify install and then switch off root login I guess, if you're willing to just blow away the server and start over if you ever needed to ssh in again.
I tried a from scratch coolify deploy recently and it kept failing with ssh key errors. On the other server we have it working and deploying many projects however the "just give it a docker compose" method has never worked for us.
Good basic guide. Hetzner is fine, but I prefer Linode and DigitalOcean for the fact that they have far more options for servers located in the US.
Coolify and friends (Dokploy?) look like nice tools. But I am not very comfortable with them because the state of my server(s) isn't present in code. So, I like NixOS or Ansible more but then they require a bunch of boilerplate and custom infrastructure for setting up production.
Anyone know some infrastructure-as-code framework that makes it easy to spin up and maintain production servers? Something declarative, perhaps, but not Kubernetes?
I’ve been working on doing this with Coolify. There are very few coolify settings to backup, and then all the application configs are stored in /data/coolify. And I use kopia to backup all the volumes. It’s not pretty, and a little hacky, but workable for disaster recovery.
What you are describing sounds more like backups (which is great) but not necessarily a declarative setup.
I recently migrated one of my FreeBSD servers to Hetzner and it was a breeze. The only wrinkle was that, until you've completed a billing cycle, you can't host an email server, as the required ports are blocked.
For me this was fine and I understand why they do this but it wasn't clear to me at the start.
You can ask and explain to them what kind of traffic you'll have. I've shown them the project I'm migrating, and they've opened ports for me right from the start.
Note that if your credit card expires, Hetzner will just turn off networking to your stuff until you fix it. No warnings given, and you'll find out when your alerting/customers/staff contact you to let you know something is wrong.
Guess how I found out... :(
You can pre-charge your account to give yourself a buffer in case your payment method doesn't work for whatever reason, although it requires a bank transfer.
While I guess that's useful, when my CC expired other places sent reminders/warnings which is the standard business approach.
It was only Hetzner which didn't, and instead they turned off networking to all of our stuff (dedicated servers, some VMs, etc) with no warning. Then their support team screwed us around for a while as well.
I'm about as unimpressed with them as it's possible to get. :(
The standard business approach is to update card details before the card expires, instead of relying on service providers sending warnings when payments are already failing.
Sure. In this particular case it was "expired" early due to some random place guessing the number and the bank rightfully taking precautions.
I updated all of the places I remembered, but missed Hetzner and a few others. Only Hetzner didn't have their shit together enough to gracefully notify us. Or account support staff who were at all interested in assisting.
There are multiple warning levels and you should get email notifications. I happened to overlook those as well and also only noticed it when they turned off networking. However, that was two weeks after the invoice due and it got unblocked in seconds after the payment went through.
I assumed I'd missed warnings as well, but when I actually checked (after fixing the issue, because priorities) there were indeed no warning emails/sms/etc at all sent.
Literally, no kind of notification, warnings, anything at all. Due to this, and their support team being incredibly unhelpful during the outage, they're now on my personal blacklist for literally everything.
So instead of strongly recommended them, which I used to do, we've migrated 95% of everything off Hetzner and I'm hanging out for it to be 100%. And I warn others away from them at every opportunity. Like here. :)
We will not be returning to Hetzner. Ever.
I was using a pre-charged account while waiting for a new bank account and credit card. I wasn't even hosting any VPS for a month or so, but Hetzner closed my account with no explanation, and I never got my money back. F*ck them, thieves.
Great guide, but I disagree on the firewall settings, especially when using Hetzner. If you only need this simple configuration, their firewall solution is more than enough and does a great job of "outsourcing" the problem.
If you want to get a bit more fancy than just using their panel for it, you can configure via API: https://docs.hetzner.cloud/reference/cloud#firewalls
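For example, a sketch of creating an SSH-only firewall via that API (the token, firewall name, and rule values are placeholders):

```shell
# Create a Hetzner Cloud firewall that only admits inbound SSH.
curl -X POST "https://api.hetzner.cloud/v1/firewalls" \
  -H "Authorization: Bearer $HCLOUD_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "ssh-only",
        "rules": [
          { "direction": "in", "protocol": "tcp", "port": "22",
            "source_ips": ["0.0.0.0/0", "::/0"] }
        ]
      }'
```

You can then attach it to servers by ID or label selector from the same API.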
Does anyone have objections against Hetzner's firewall solution that I'm not aware of?
The guide mentions that Hetzner was chosen over other providers and platforms because they didn’t wish to get tied into a whole ecosystem, and could take this setup and move it more or less anywhere
>Restrict SSH to your IP (optional but recommended)
That's dangerous, because what if your IP changes? You'll be locked out?
You can always reset stuff from the Hetzner dashboard. But yes, rather than locking it down to some dynamic residential IP, it would be better to set up something like Tailscale, or to have a VPN with a dedicated static IP.
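With ufw, restricting SSH to the tailnet would look roughly like this (the interface name assumes a default Tailscale install; verify you can reach the box over Tailscale before applying the deny rule):

```shell
# Allow SSH only via the Tailscale interface, drop it everywhere else.
ufw allow in on tailscale0 to any port 22 proto tcp
ufw deny 22/tcp
ufw reload
```
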
The only thing you really need to do with SSH is to use keys with it, not passwords. That should be secure enough for almost all cases.
Another layer on top is useful to remove the noise from the logs. And if you have anything aside from SSH on the server that doesn't need to be public, restricting it via a VPN or something like that is useful anyway. Most other software that listens on your server has likely much more attack surface than SSH.
Also, change the port sshd listens to from 22 to something else. Cuts down on the noise considerably.
Yea agreed, it's dangerous. Lots of people have dynamic IPs at home. Once you have set up SSH keys and disabled root login, you should be good to go.
Agreed. You should assume you have a dynamic IP unless you’ve specifically arranged for a static one. It’s a “business” feature where I live at least, so personal internet connections will be dynamic.
And change the port from 22. I tend to use the 400 range for SSH ports.
You'll be surprised how many bots get thwarted by just changing the port.
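A hedged sketch of the port change (422 is just an example; open the new port before restarting sshd so you don't lock yourself out):

```shell
# In /etc/ssh/sshd_config, change the listening port:
#   Port 422
ufw allow 422/tcp
systemctl restart ssh
# Only after confirming you can log in on the new port:
ufw delete allow 22/tcp
```
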
> 2–3x cheaper for the same specs compared to DO/AWS
When I was looking for a hobby cloud provider, I did some benchmarking of similarly spec'd instances. Note that the degree of overcommitting CPU/RAM varies by cloud provider and instance type. I found Vultr to be the most consistently faster than DO. I had used OVH in the past and wasn't interested. I also didn't consider Hetzner because it seemed unlikely they could match performance at their prices. I later saw other benchmarking that showed Vultr as being one of the fastest. That was quite some time ago and I haven't checked lately, but I also have no reason to switch.
The reason Hetzner is cheaper is because they're using consumer hardware.
The last time I compared several vps with similar pricing, hetzner was by far the fastest - but I did not try vultr back then.
Big cloud providers (AWS, Azure, GCP) are great for all the managed ecosystem; if you mostly need only raw computing (CPU, memory, bandwidth), then a provider like Hetzner makes a lot of sense (plus they have an API and basic services like LB/firewall and object storage).
We at SadServers moved from big cloud managed K8s to Hetzner + Edka and it's an order of magnitude cheaper (obv some perks are missing).
The production app setup section should probably be replaced by Docker. Much more repeatable and easier to configure these days.
So, where is the walkthrough for that?
I’ve seen you make this response to a couple different threads, and I wonder what you mean by it.
Are you just hoping to gain more insight on the differing proposed technologies and waiting for someone to give you more information, or are you expressing frustration that that people have their own opinions on which layers to use for their own setups?
If you’re simply asking for information on how to use docker, and how to adapt TFA to include it, you’re in luck. One can find many tutorials on how to dockerize a service (docker’s own website has quite a lot of excellent tutorials and documentation on this topic), and plenty of examples of how to harden it, use SSL, et cetera. This is a very well trodden path.
That said, I’m tempted to read your response with the latter interpretation and my response would be to observe that holding a different opinion on something isn’t inherently ungrateful, or rude, nor is it presumptuous to share that one would, say, recommend dockerizing the production app instead of deploying directly to the server.
That’s the nature of discourse, and the whole reason why hacker news has a comment section in the first place. A lovely article such as TFA is shared by someone, and then folks will want to talk about it and share their own insights and opinions on the contents. Disagreeing with a point in the article is a feature, not a bug.
You are reading too much into me. I am a noob and am interested in an opinion about a good tutorial. As you mentioned, I also asked on another thread and that dude was very friendly. Not so much luck here it seems, that people even downvote me; well, their karma.
I mean, look at you, how pathetic you behave. Instead of answering my simple question with a link of your choice, you are writing down five paragraphs accusing me of whatnot. Learn to answer simple questions with simple answers. And learn to ask a question when you have one. You would not believe how much simpler your life becomes.
And the anonymous down voters? An even weaker group of humans than you. Unimportant people who can do nothing else than stop others. For everything else, their energy is too low.
Good lord
(Downvotes do not affect the downvoters’ karma.)
Hahaha, I am talking of real karma.
Since this is a beginner's guide, I would mention this Docker/ufw pitfall [0] when publishing container ports. Many a container has been erroneously exposed to the public net because of this.
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
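In short: `-p 8080:80` publishes on all interfaces through Docker's own iptables rules, which bypass ufw. Binding to loopback and fronting the container with a reverse proxy avoids it:

```shell
# Reachable from the internet regardless of ufw rules:
docker run -d -p 8080:80 nginx

# Reachable only from the host itself; put your reverse proxy in front:
docker run -d -p 127.0.0.1:8080:80 nginx
```
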
Every step to improve raw hosting as an alternative to full cloud offering is a blessing.
Cloud pricing no longer makes any sense.
Bold claim! If my company was to leave the cloud we would easily 5-10x our costs and need to go on a hiring spree. Curious what you mean.
You pay for the privilege of paying for what you use - every second of CPU time when a lambda is running is marked up immensely versus the same second of compute on bare metal or even a VPS. So your workload needs to be sufficiently "duck curved", parabolic, or erratically spiky in order to _actually_ make cost savings on compute.
The personnel matter is harder to quantify. But note that the need for infra skills didn't go away with cloud. Cloud is complicated, you still need people who understand it, and that still costs money. Be it additional skills spread across developers, or dedicated cloud experts depending on organisation size. These aren't a far cry from sysadmins. It really depends on the skillset of your individual team. These days traditional hosting has got so much easier with so much automation, that it's not as specialist a skill or as time consuming or complicated as many people think it is.
Cloud _can_ be cheaper, but you need the correct mix of requirements and skills gap to make it actually cheaper.
Can you elaborate on why?
Thanks for sharing this! I have been using a Hetzner VPS + Coolify setup for personal projects for around a year and it has been a great Heroku-like experience and very easy on the wallet. I originally found out about both Hetzner and Coolify from this 1.5 hour guide on getting started from the Syntax podcast: https://www.youtube.com/watch?v=taJlPG82Ucw
Beautiful, thanks!
There are many variations you can do. I would recommend caddy instead of nginx for beginners these days.
So, where is the walkthrough for that?
I wish I had a good one! I wrote this a while ago, but it has a different set of assumptions:
https://www.nhatcher.com/post/a-cto-on-a-shoestring/
Thank you!
Hetzner is great, but it has some minor region problems and SLO issues, so you want to have a fallback to degrade gracefully.
I set my clients up with Hetzner for the core, and front it with Cloudflare. You can front KEDA scaled services with Cloudflare containers and you're pretty much bulletproof, even if Hetzner shits the bed you're still running.
Cool guide
https://hostup.se/en
It's much cheaper than Hetzner and still in Europe.
Much cheaper?
Hostup also doesn't include the 25% taxes in that price.
Hetzner price doesn't include VAT.
But the real issue is that the price is a bit of a red herring: the CX22 plan is not available everywhere (only in the old European datacenters, I think), and if you need to scale up your machine you can't use the bigger Intel plans (CX32, CX42, etc.) because they have been unavailable for a long time. You have to move either to AMD-based plans (CPX31, etc.), which cost almost double for the same amount of RAM, or to Arm64-based plans.
As another commenter pointed out, the pricing is very similar. They charge more for networking though and they're not as well-connected as Hetzner:
Hetzner: https://bgp.he.net/AS24940
Hostup: https://bgp.he.net/AS214640
Hetzner also has extra features like firewalls and whatnot that it doesn't seem Hostup has.
That is impressively cheap alright. How is the reliability as I haven't heard of them?
Netcup ftw
You could also sign up for System Initiative, enter your hetzner credentials, connect an ai agent, and tell it what you want to do, and iterate your way there.
It's pretty amazing how well it works and how much you learn in the process.
I love these blogs. Managing infra yourself, wherever it is or however it's done, seems to be a lost art.
VPS just means a rented VM right?
I only know a little bit about what Google does to secure the VMs and hypervisors and that the attitude several years ago was that even hardened VMs weren't really living up to their premise yet.
When using one of these cost-focused providers do people typically just assume the provider has root in the VM? I sometimes see them mentioned in the context of privacy but I haven't seen much about the threat model.
Yes, I think you have to; to an extent the same also applies to dedicated servers. Even if you own a server that you place in a colo, they can still pull your drives or plug in a KVM.
If your data is sensitive, encrypt it locally and then send it. The reality is most people are running something like a website, API, or SaaS, and basically just have to have a provider they trust somewhat and take reasonable security precautions themselves. Beyond that, it's probably not as secure as it could be unless it's in a facility you own or control access to.
That's correct. I wouldn't think of it as a VM, though, but rather as a server which happens to be virtual. Yes, that's literally just a different word for the same thing, but the different emphasis affects thought patterns. For all intents and purposes, from the buyer's perspective, a VPS is a small server, not a different type of thing.
It's true you shouldn't put super sensitive data on a VPS because the host could access it. Regular sensitive is fine - your host will be in a world of trouble if they access your data without permission, so you can generally trust them not to read your emails or open your synced nudes. But if your data is so sensitive that the host would risk everything to read it, or would avoid getting in trouble at all (e.g. national security stuff) then absolutely don't use a VPS. For that level of paranoia you'd need at least a dedicated server which makes it unlikely the host has a live backdoor into the system, ideally your own server so you know they don't, and for super duper stuper paranoid situations, one with a chassis intrusion switch linked to a bag of thermite (that's a real thing).
Kinda weird - Coolify doesn't come up except in the first and last paragraphs. Seems like the page is incomplete or just mis-titled.
It’s a classic marketing “trick”: name-drop multiple related-ish companies, even if the product applies only to one of them.
AI slop. Coolify was probably used in the original message, before OP pivoted the article to a barebone setup.
One of the big things that is actually stopping me from migrating to, say, Hetzner is the fact that our infrastructure is coded in CDK. I don't want to sit and deploy/tear down manually anymore. Do Hetzner, Coolify, etc. support a CDK-type IaC provider? What is the learning curve?
There are many upvotes so clearly people like the guide. Congrats on documenting something useful!
Is anyone else immediately turned off by articles like this written in "ChatGPT voice"? The information in the guide might be good, but I didn't make it past the introduction.
I've been burned too many times by LLM-slop. If an article is written in ChatGPT voice, it might still have good content but your readers don't know that. Editing for style and using your own voice helps credibly signal that you put effort into the content.
Hetzner has been a very reliable provider for our hosting. We combine it with Cloud 66 for server hardening and deployment automation at a fraction of the cost of a PaaS
>Unattended-Upgrade::Mail "your-email@example.com";
Interesting. How does this work? Will the emails go to spam?
Yeah that won't work unless you configure postfix or something (which sadly wasn't included in the guide).
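A minimal sketch of making that directive actually deliver mail, assuming a relay like msmtp (the address is a placeholder):

```shell
# unattended-upgrades only sends mail if the box has a working MTA.
apt install unattended-upgrades msmtp-mta bsd-mailx

# In /etc/apt/apt.conf.d/50unattended-upgrades:
#   Unattended-Upgrade::Mail "you@example.com";
#   Unattended-Upgrade::MailReport "only-on-error";
#
# msmtp still needs a real SMTP relay configured in /etc/msmtprc,
# otherwise the mails never arrive (or land in spam).
```
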
Super useful. Makes the Hetzner choice a strong one for me.
This is the best example of documentation I’ve seen posted here in a very long time.
Check out the wide breadth of tuts provided by Digital Ocean. This is just one post, misleadingly titled at that, whereas DO has LOADS of excellent and clearly explained tuts.
You mean this? https://www.digitalocean.com/community/tutorials?q=VPS That's a confusing mess of buzzwords I never heard of.
It is even simpler than that: Hetzner has a pre-built Coolify/Ubuntu image you can use during the server setup/buying process.
Thanks, I will use that, once I have done it a dozen times by hand.
Is it missing the port 443 config for nginx?
I recommend Kamal or Cloud66
I'd recommend Cosmos Cloud. Only the Constellation service is non-free. I have it running on an OCI free tier 24G ARM64 VM.
https://cosmos-cloud.io/docs/index/
Hetzner is one terrible company to do business with and I wouldn't recommend their shit client service to anyone. I tried to make data backup work with one of their low cost storage boxes only to have them entirely block my nascent account, demand I hand over ID copies for identity verification and even take a photo of myself to make sure my face matches. Really? Who the fuck are you to demand this? Why don't I go to Wasabi or Backblaze B2 and just... pay for shit to receive it reliably, with no further problems.
I have seen that they do this very frequently, to many people, for all kinds of convoluted reasons, and they often block accounts that have been running for years because those customers don't meet the requirements of such a demand out of the blue (without clarifying why they didn't comply well enough).
For example, the Reddit page for Hetzner has no shortage of desperate clients suddenly blocked, and trying to read the corporate runes of this company's policies and whatever means of appealing can be improvised, just so they can regain access to some service they'd come to depend on.
Imagine depending on that for your personally important backend infrastructure or data backup. No thanks, fuck them.