I Didn't Need Kubernetes, and You Probably Don't Either

(benhouston3d.com)

164 points | by bhouston 5 hours ago

159 comments

  • tombert 2 hours ago

    I’ve come to the conclusion that I hate “cloud shit”, and a small part of me is convinced that literally no one actually likes it, and everyone is playing a joke on me.

    I have set up about a dozen rack mount servers in my life, installing basically every flavor of Unix and Linux and every message bus under the sun in the process, but I still get confused by all the kubectl commands and the GCP integration with it.

    I might just be stupid, but it feels like all I ever do with Kubernetes is update and break YAML files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange. I cannot imagine how anyone goes to work and actually enjoys working in Kubernetes, though I guess someone must in terms of “law of large numbers”.

    If I ever start a company, I am going to work my damndest to avoid as much “cloud integration crap” as possible. Just have a VM or a physical server and let me install everything myself. If I get to tens of millions of users, maybe I’ll worry about it then.

    • voidfunc 2 hours ago

      I'm always kind of blown away by experiences like this. Admittedly, I've been using Kubernetes since the early days and I manage an Infra team that operates a couple thousand self-managed Kubernetes clusters so... expert blindness at work. Before that I did everything from golden images to pushing changes via rsync and kicking a script to deploy.

      Maybe it's because I adopted early and have grown with the technology it all just makes sense? It's not that complicated if you limit yourself to the core stuff. Maybe I need to write a book like "Kubernetes for Greybeards" or something like that.

      What does fucking kill me in the Kubernetes ecosystem is the amount of add-on crap that is pitched as "necessary". Sidecars... so many sidecars. Please stop. There's way too much vendor garbage surrounding the ecosystem, and devs rarely stop to think about whether they should deploy something when it's as easy as dropping in some YAML and letting the cluster magically run it.

      • stiray an hour ago

        I would buy the book. Just translate all the "new language" concepts into well-known concepts from networking and system administration. It would be a best seller.

        If only I had a penny for each time I wasted hours trying to figure out what something in "modern IT" is, just to discover that I already knew what it was, but it was well hidden under layers of newspeak...

        • radicalbyte 35 minutes ago

          The book I read on K8S, written by a core maintainer, made it very clear.

      • t-writescode 2 hours ago

        > Admittedly, I've been using Kubernetes since the early days and I manage an Infra team

        I think this is where the big difference is. If you're leading a team and introduced good practices from the start, then the k8s and Terraform (or whatever) config files never get so convoluted that a Gordian knot is created.

        Perhaps k8s is nice and easy to use - many of the commands certainly are, in my experience.

        Developers have, over years and decades, learned how to navigate code and hop from definition to definition, climbing the tree and learning the language they're operating in, and most of the languages follow similar-enough patterns that they can crawl around.

        Configuring a k8s cluster has absolutely none of that knowledge built up; and reading a config that follows rough practices is not a good way to learn what it should look like.

      • figassis an hour ago

        This, all the sidecars. Use Kubernetes to run your app like you would without it, take advantage of the flexibility, avoid the extra complexity. Service discovery sidecars? Why not just use the out-of-the-box DNS features?

        • tommica an hour ago

          Because new people don't know better - I've never used k8s, but have seen sidecars being promoted as a good thing, so I might have used them

      • mitjam 29 minutes ago

        Absolutely: any reasonably sophisticated scalable app platform ends up looking like a half-baked and undocumented reimagination of Kubernetes.

        Admittedly: The ecosystem is huge and less is more in most cases, but the foundation is cohesive and sane.

        Would love to read the k8s for greybeards book.

      • theptrk an hour ago

        I would pay for the outline of this book.

      • AtlasBarfed an hour ago

        So you don't run any databases in those thousands of clusters?

        To your point: I have not used k8s; I had just started to research it when my former company was thinking about shoehorning Cassandra into k8s...

        But there was dogma around not allowing command exec into the pods via kubectl, while I basically needed it in basic form for certain one-off diagnosis needs and nodetool stuff...

        And yes, some of the floated stuff was "use sidecars" which also seemed to architect complexity for dogma's sake.

        • voidfunc an hour ago

          > So you don't run any databases in those thousands of clusters?

          We do, but not of the SQL variety (that I am aware of). We have persistent key-value and document store databases hosted in these clusters. SQL databases are off-loaded to managed offerings in the cloud. Admittedly, this does simplify a lot of problems for us.

          • tayo42 an hour ago

            How much data? I keep hearing k8s isn't usable because sometimes there is too much data and it can't be moved around.

      • adastra22 2 hours ago

        I would buy that book.

    • stiray an hour ago

      I agree, but what pisses me off the most is that today's higher-level abstractions (like cloud, Spring Boot, ...) hide lower-level functionality so well that you are forced to spend obnoxious amounts of time studying documentation (if you are lucky and it is well written), while everything is decorated with new names for known concepts, invented by people who didn't know the concept already exists and has a name, or by some marketing guy who figured it would sell better with a more "cool" name.

      It's as if Shakespeare's work were clumsily half-translated into French advertising jargon and you were forced to read it and make it work on a stage.

    • ozim an hour ago

      I am running VPSes at our small startup-ish company on IaaS cloud.

      Every time we get a new guy I have to explain that we are already „in the cloud”; there is no need to „move to the cloud”.

      • rcleveng 36 minutes ago

        Do they mean PaaS vs IaaS when they say "move to cloud"?

        • ozim 21 minutes ago

          Mostly the business guys don't know the difference. We are running on a local cloud provider, and they think that if it is not on Azure or AWS it is not in the cloud - they understand that we run stuff on servers, but they don't understand that a VPS is IaaS.

          Developers want to use PaaS and also AWS or Azure so they can put it on their resume for the future.

    • devjab an hour ago

      I don’t mind the cloud, but even in enterprise organisations I fail to see the value of a lot of the more complex tools. I’ve always worked with Azure, because Denmark is basically Microsoft territory in a lot of non-tech organisations thanks to the synergy between pricing and IT operations staff.

      I’ve done Bicep, Terraform and both Kubernetes and the managed option (I forget what Azure Container Apps, which run on top of what is basically Kubernetes, is called these days). When I can get away with it, though, I always use the Azure CLI through bash scripts in a pipeline and build directly into Azure App Services for containers, which is just so much less complicated than what you probably call “cloud shit”. The cool part about the Azure CLI and their App Services is that they haven’t really changed in the past 3 years, and they fit almost any organisation, so all anyone needs to update in the YAML scripts are the variables.

      By contrast, working with Bicep/Terraform, Jenkins and whatever else people use has been absolutely horrible, sometimes requiring full-time staff just to keep it updated. I suppose it may be better now that Azure Copilot can probably auto-generate what you need. A complete waste of resources in my opinion. It used to be more expensive, but with the last 600% price hike on Azure Container Apps it’s usually cheaper. It’s also way more cost efficient to maintain, since it’ll just work after the initial setup pipeline has run.

      This is the only way I have found that is easier than what it was when organisations ran their own servers, whether in the basement or at some local hardware house (not exactly sure what you call the places where you rent server rack space). Well, places like Digital Ocean are even easier, but they aren’t used by enterprise.

      I’m fairly certain I’ve never worked with an organisation that needed anything more than that, since basically nothing in Denmark scales beyond what can run on a couple of servers behind a load balancer. One of the few exceptions is the tax system, which sees almost zero usage except for the couple of weeks when the entire adult population logs in at the same time. When DevOps teams push back, I tend to remind them that StackOverflow ran on a couple of IIS servers for a while and that they don’t have even 10% of its users.

      Eventually the business case for Azure will push people back to renting hardware space or jumping to Hetzner and similar. But that’s a different story.

    • tryauuum 2 hours ago

      I have the same thoughts.

      The only form of Kubernetes I would be willing to try is the one with Kata Containers, for the security of virtual machines.

    • misswaterfairy an hour ago

      I hate "cloud shit" as well, though specifically that there's a vendor-specific 'app', or terminology, or both, for everything that we've had standard terms for, for decades.

      I just want a damn network, a couple of virtual machines, and a database. Why does each <cloud provider> have to create different fancy wrappers over everything, wrappers that not even their own sales consultants, or even their engineers, understand? (1)

      What I do like about Docker and Kubernetes is that shifting from one cloud provider to another, or even back to on-premises (I'm waiting for the day our organisation says "<cloud-provider> is too damn expensive; those damn management consultants lied to us!!!!") is a lot easier than re-building <cloud provider>'s bespoke shit in <another cloud provider>'s bespoke shit, or back on-premises with real tech (the right option in my opinion for anyone less than a (truly) global presence).

      I do like the feel of bare metal, and being able to touch it, though the 180-proof-ether-based container stuff is nice for quick, flexible, and (sometimes) dirty work. Especially for experimenting when the Directors for War and Finance (the same person) say "We don't have the budget!! We're not buying another server/more RAM/another disk/newer CPUs! Get fucked!".

      The other thing about Docker specifically I like is I can 'shop' around for boilerplate templates that I can then customise without having to screw around manually building/installing them from scratch. And if I fuck it up, I just delete the container and spin up another one from the image.

      (1) The answer is 'vendor lock-in', kids.

      (I apologise, I've had a looooooong day today.......)

    • cheriot 2 hours ago

      > I might just be stupid, but it feels like all I ever do with ____ is update and break ____ files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange. I cannot imagine how anyone goes to work and actually enjoys working in ____, though I guess someone must in terms of “law of large numbers”.

      I'd make a similar statement about the sys admin stuff you already know well. Give me yaml and a rest api any day.

      I see where you and the article are coming from, though. The article reasonably points out that k8s is heavy for simpler workloads.

    • nyclounge 2 hours ago

      Or, if you've got a static IP and fast upload speed, just port-forward 80 and 443 and start hosting yourself. Even an old Intel MacBook Pro from the 2000s with 4 GB of RAM may not be that hot running macOS, but install Debian with no X and it runs smooth as a whistle while serving Conduit (Matrix), Haraka, ZoneMTA, Icecast, and nginx with no issues.

      WebRTC/TURN/STUN becomes an issue with the nginx config; it may be worth looking at Pingora. The whole Rust -> binary + TOML file setup is super nice to run from a sysadmin perspective.

      • d3Xt3r 34 minutes ago

        > It is running smooth as a whistle

        ... until you get hit by a DDoS attack. Not much you can do about it unless your ISP offers protection, or you end up going for Cloudflare or the like instead of exposing your IP and ports.

    • fragmede an hour ago

      I'll let you in on the joke. The joke is the demand for 100% availability and instant gratification. We're making services where anything less than four nines, which is about 5 minutes of downtime a month, is deemed unacceptable. Three nines is about 10 minutes a week; two nines is about 15 minutes a day. There are some things important enough that you can't take a coffee break and wait for them, but Kubernetes lets you push four nines of availability, no problem. Kubernetes is solving for that level of availability; my own body doesn't have anything near that level of availability. Demanding it from everything and everyone else is what pushes for Kubernetes-level complexity.

      • kitd 15 minutes ago

        Are you the only one waiting on your app if it goes down?

    • tapoxi 2 hours ago

      This read as "old man yells at cloud" to me.

      I've managed a few thousand VMs in the past, and I'm extremely grateful for it. An image is built in CI, service declares what it needs, the scheduler just handles shit. I'm paged significantly less and things are predictable and consistent unlike the world of VMs where even your best attempt at configuration management would result in drift, because the CM system is only enforcing a subset of everything that could go wrong.

      But yes, Kubernetes is configured in YAML, and YAML kind of sucks, but you rarely do that. The thing that changes is your code, and once you've got the boilerplate down CI does the rest.

      • cess11 13 minutes ago

        I'd prefer Ansible if I were running VMs. Did that at a gig, controlling a vCenter cluster and hundreds of machines in it; a much nicer experience than Kubernetes-style ops. Easier to do ad hoc troubleshooting and logging, for one.

      • catdog an hour ago

        YAML is fine, esp. compared to the huge collection of often 10x worse config formats you have to deal with in the VM world.

    • cess11 25 minutes ago

      Once it gets hard to run all the services needed by the organisation's/team's applications on your development machine, it starts to look more attractive to turn to Docker/Podman and the like, and that's when automatic tests and deploys built on disgusting YAML start to make more sense.

      I've been at a place where the two main applications were tightly coupled with several support systems and they all were dependent on one or more of Oracle DB, Postgres, Redis, JBoss, Tibco EMS and quite a bit more. Good luck using your development device to boot and run the test suite without containers. Before that team started putting stuff in containers they used the CI/CD environment to run the full set of tests, so they needed to do a PR, get it accepted, maybe wait for someone else's test run to finish, then watch it run, and if something blew, go back to commit, push to PR, &c. all over again.

      Quite the nuisance. A full test suite had a run time of about an hour too. When I left we'd pushed it to forty minutes on our dev machines. They didn't use raw Kubernetes though, they had RedHat buy-in and used OpenShift, which is a bit more sane. But it's still a YAML nightmare that cuts you with a thousand inscrutable error messages.

    • teekert 22 minutes ago

      Sorry to have to tell you this, but you’re old. Your neural plasticity has gone down and you feel like you have seen it all before. As a result you cling to the old and never feel like you grasp the new. The only reasonable thing to do is to acknowledge and accept this and try not to let it get in your way.

      Our generation has seen many things before, but at the same time the world has completely changed, and that has led the people growing up in it to be different.

      You and I no longer fully grasp CPUs. Some people today no longer grasp all the details of the abstractions below K8s and use it when perhaps something simpler (in architecture, not necessarily in use!) could do the job better. And yet, they build wondrous things. Without editing php.ini and messing up 2 services to get one working.

      Do I think K8s is the end all? Certainly not, I agree it’s sometimes overkill. But I bet you’ll like it’s follow-up tech even less. It is the way of things.

      • tedk-42 19 minutes ago

        > Is K8s the end all? Certainly not, I agree it’s sometimes overkill. But I bet you’ll like it’s follow-up tech even less. It is the way of things.

        I agree with your analysis.

        People want to talk up how good the old days of plugging cables into racks were, but it's really laborious, and it can take days to work out that a faulty network switch is the cause of those weird packet drops seen sporadically on hot days.

        Same as people saying 'oh yeah calculators are too complicated, pen and paper is what kids should be learning'.

        It's the tide of change.

  • paxys 3 hours ago

    > Kubernetes comes with substantial infrastructure costs that go beyond DevOps and management time. The high cost arises from needing to provision a bare-bones cluster with redundant management nodes.

    That's your problem right there. You really don't want to be setting up and managing a cluster from scratch for anything less than a datacenter-scale operation. If you are already on a cloud provider just use their managed Kubernetes offering instead. It will come with a free control plane and abstract away most of the painful parts for you (like etcd, networking, load balancing, ACLs, node provisioning, kubelets, proxies). That way you just bring your own nodes/VMs and can still enjoy the deployment standardization and other powerful features without the operational burden.

    • dikei 3 hours ago

      Even for on-prem scenario, I'd rather maintain a K8S control plane and let developer teams manage their own apps deployment in their own little namespace, than provisioning a bunch of new VMs each time a team need some services deployed.

      • spockz 3 hours ago

        I can imagine. Do you have complete automation setup around maintaining the cluster?

        We are now on-prem using “pet” clusters with namespace as a service automated on it. This causes all kinds of issues with different workloads with different performance characteristics and requirements. They also share ingress and egress nodes so impact on those has a large blast radius. This leads to more rules and requirements.

        Having dedicated and managed clusters where everyone can determine their sizing and granularity of workloads to deploy to which cluster is paradise compared to that.

        • solatic 2 hours ago

          > This causes all kinds of issues with different workloads with different performance characteristics and requirements.

          Most of these issues can be fixed by setting resource requests equal to limits and using integer CPU values to guarantee QoS. You should also give developers an interface explaining which nodes in your datacenter have which characteristics, using node labels and taints, and force developers to pick specific node groups by specifying node affinity and tolerations; that is, don't bring any nodes online without taints.
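
          To make this concrete, here is a minimal sketch of a Deployment along those lines. The workload name, image, and the "workload-class" label/taint key are hypothetical placeholders, not anything from this thread:

          ```yaml
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: batch-worker                                 # hypothetical workload
          spec:
            replicas: 2
            selector:
              matchLabels: { app: batch-worker }
            template:
              metadata:
                labels: { app: batch-worker }
              spec:
                containers:
                  - name: worker
                    image: registry.example.com/batch-worker:1.0   # placeholder image
                    resources:
                      # requests == limits with integer CPU -> Guaranteed QoS class
                      requests: { cpu: "2", memory: 4Gi }
                      limits: { cpu: "2", memory: 4Gi }
                # only schedule onto nodes the platform team has labeled and tainted for this class
                tolerations:
                  - key: workload-class
                    operator: Equal
                    value: high-io
                    effect: NoSchedule
                affinity:
                  nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                      nodeSelectorTerms:
                        - matchExpressions:
                            - key: workload-class
                              operator: In
                              values: ["high-io"]
          ```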

          > They also share ingress and egress nodes so impact on those has a large blast radius.

          This is true regardless of whether or not you use Kubernetes.

      • rtpg 2 hours ago

        Even as a K8s hater, this is a pretty salient point.

        If you are serious about minimizing ops work, you can make sure people are deploying things in very simple ways, and in that world you are looking at _very easy_ deployment strategies relative to having to wire up VMs over and over again.

        Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app.

        • dikei 2 hours ago

          > Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app.

          Agree.

          To reduce the chance of a dev pulling some random configs out of nowhere, we maintain a Helm template that can be used to deploy almost all of our services in a sane way; just replace the container image and ports. The deployment is probably not optimal, but further tuning can be done after the service is up and we have gathered enough metrics.
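
          As a rough illustration (the field names here are made up for this sketch, not taken from the actual chart mentioned in the comment), the per-service input can be as small as:

          ```yaml
          # values.yaml a service team fills in; everything else comes from the shared chart
          image:
            repository: registry.example.com/orders-api   # placeholder
            tag: "1.4.2"
          service:
            port: 8080
          replicaCount: 2
          resources:
            requests: { cpu: 250m, memory: 256Mi }
            limits: { cpu: 500m, memory: 512Mi }
          env:
            - name: LOG_LEVEL
              value: info
          ```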

          We've also put all our configs in one place, since we found that devs tend to copy from existing configs in the repo before searching the internet.

        • guitarbill 2 hours ago

          > Just feels like lots of devs will take whatever random configs they find online

          Well it usually isn't a mystery. Requiring a developer team to learn k8s likely with no resources, time, or help is not a recipe for success. You might have minimised someone else's ops work, but at what cost?

          • rtpg 2 hours ago

            I am partly sympathetic to that (and am a person who does this) but I think too many devs are very nihilistic and use this as an excuse to stop thinking. Everyone in a company is busy doing stuff!

            There's a lot of nuance here. I think ops teams are comfortable with what I consider "config spaghetti". Some companies are incentivised to ship stuff that's hard to configure manually. And a lot of other dynamics are involved.

            But at the end of the day if a dev copy-pastes some config into a file, taking a quick look over and asking yourself "how much of this can I actually remove?" is a valuable skill.

            Really you want the ops team to be absorbing this as well, but this is where constant atomization of teams makes things worse! Extra coordination costs + a loss of a holistic view of the system means that the iteration cycles become too high.

            But there are plenty of things where (especially if you are the one integrating something!) you should be able to look over a thing and see, like, an if statement that will always be false for your case and just remove it. So many modern ops tools are garbage and don't accept the idea of running something on your machine, but an if statement is an if statement is an if statement.

    • sbstp an hour ago

      Most managed control planes are not free anymore; they cost about $70/mo on AWS and GCP. They used to be, a while back.

    • oofbey an hour ago

      If you do find yourself wanting to create a cluster by hand, it's probably because you don't actually need lots of machines in the "cluster". In my experience it's super handy to run tests on a single-node "cluster", and then k3s is super simple. It takes something like 8 seconds to install k3s on a bare CI/CD instance, and then you can install your YAML and see that it works.
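
      For illustration, a throwaway single-node cluster in CI can be as simple as this (a sketch assuming GitHub Actions; the manifest directory and deployment name are placeholders):

      ```yaml
      name: k8s-smoke-test
      on: [push]
      jobs:
        smoke-test:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - name: Install single-node k3s
              run: curl -sfL https://get.k3s.io | sh -
            - name: Apply manifests and wait for rollout
              run: |
                sudo k3s kubectl apply -f k8s/                                  # placeholder path
                sudo k3s kubectl rollout status deploy/my-app --timeout=120s    # placeholder name
      ```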

      Once you're used to it, the high-level abstractions of k8s are wonderful. I run k3s on Raspberry Pis because it takes care of all sorts of stuff for you, and it's easy to port code and design patterns from the big backend service to a little home project.

  • ants_everywhere 4 hours ago

    People talk about Kubernetes as container orchestration, but I think that's kind of backwards.

    Kubernetes is a tool for creating computer clusters. Hence the name "Borg" (Kubernetes's grandpa) referring to assimilating heterogeneous hardware into a collective entity. Containers are an implementation detail.

    Do you need a computer cluster? If so k8s is pretty great. If you don't care about redundancy and can get all the compute you need out of a single machine, then you may not need a cluster.

    Once you're using containers on a bunch of VMs in different geographical regions, then you effectively have hacked together a virtual cluster. You can get by without k8s. You just have to write a lot of glue code to manage VMs, networking, load balancing, etc on the cloud provider you use. The overhead of that is probably larger than just learning Kubernetes in the long run, but it's reasonable to take on that technical debt if you're just trying to move fast and aren't concerned about the long run.

    • stickfigure 3 hours ago

      K8s doesn't help you solve your geographical region problem, because the geographical region problem is not running appserver instances in multiple regions. Almost any PaaS will do that for you out of the box, with way less fuss than k8s. The hard part is distributing your data.

      Less overhead than writing your own glue code, and less overhead than learning Kubernetes, is just using a PaaS like Google App Engine, Amazon Elastic Beanstalk, Digital Ocean App Platform, or Heroku. You have access to the same distributed databases you would with k8s.

      Cloud Run is PaaS for people that like Docker. If you don't even want to climb that learning curve, try one of the others.

      • photonthug an hour ago

        > just use a PaaS like Google App Engine, Amazon Elastic Beanstalk, Digital Ocean App Platform, or Heroku.

        This is the right way for web work most of the time, but most places will choose k8s anyway. It’s perplexing until you come to terms with the dirty secret of resume-driven development, which is that it’s not just junior engs but lots of seniors too, and some management, all conspiring to basically defraud business owners. I think the unspoken agreement is that hard work sucks, but easy work that teaches you no transferable skills might be worse. The way you evaluate this tradeoff predictably depends on how close you are to retirement age. Still, since engineers are often disrespected/discarded by business owners and have no job security, oaths of office, professional guilds, or fiduciary responsibility, it’s no wonder things are pretty mercenary out there.

        Pipelines are as important as web these days but of course there are many options for pipelines as a service also.

        K8s is the obviously correct choice for teams that really must build new kinds of platforms that have many diverse kinds of components, or have lots of components with unique requirements for coupling (like say “scale this thing based on that other thing”, but where you’d have real perf penalties for leaving the k8s ecosystem to parse events or whatever).

        The often mentioned concern about platform lock in is going to happen to you no matter what, and switching clouds completely rarely happens anyway. If you do switch, it will be hard and time consuming no matter what.

        To be fair, k8s also enables brand new architectural possibilities that may or may not be beautiful. But it’s engineering, not art, and beautiful is not the same as cheap, easy, maintainable, etc.

      • vrosas 3 hours ago

        PaaS get such a bad rap from devs in my experience, even though they would solve so many problems. They'd rather keep their k8s clusters scaled to max traffic and spend their nights dealing with odd networking and configuration issues than just throw their app on Cloud Run and call it a day.

    • ashishmax31 an hour ago

      Exactly. I've come to describe k8s as a distributed operating system for servers.

      K8s tries to abstract away individual "servers" and gives you an API to interact with all the compute/storage in the cluster.

    • politelemon 3 hours ago

      I like to describe it similarly, but as a way of building platforms.

    • Spivak 3 hours ago

      This has got to be the most out-there k8s take I've read in a while. k8s doesn't save you from learning your cloud provider's infrastructure; you have to learn k8s in addition to your cloud provider's infrastructure. It's all ALBs, ASGs, Security Groups, EBS Volumes and IAM policy underneath, and k8s, while very clever, isn't so clever as to abstract much of any of it away from you. On EKS you get to enjoy more odd limitations with your nodes than EC2 would give you on its own.

      You're already building on a cluster, your cloud provider's hypervisor. They'll literally build virtual compute of any size and shape for you on demand out of heterogeneous hardware and the security guarantees are much stronger than colocated containers on k8s nodes.

      There are quite a few steps between single server and k8s.

    • _flux 2 hours ago

      What is the container orchestration tool of choice beyond docker swarm, then?

      • rixed 17 minutes ago

        Is nomad still around?

        • _flux 11 minutes ago

          Thanks, hadn't heard of that.

          Seems pretty active per its commit activity: https://github.com/hashicorp/nomad/graphs/commit-activity

          But the fact that I hadn't heard of it before makes it sound not very popular, at least not for the bubble I live in :).

          Does anyone have any practical experiences to share about it?

  • lkrubner 4 hours ago

    Interesting that the mania for over-investment in devops is beginning to abate. Here on Hacker News I was a steady critic of both Docker and Kubernetes, going back to at least 2017, but most of these posts were unpopular. I have to go back to 2019 to find one that sparked a conversation:

    https://news.ycombinator.com/item?id=20371961

    The stuff I posted about Kubernetes did not draw a conversation, but I was simply documenting what I was seeing: vast over-investment in devops even at tiny startups that were just getting going and could have easily dumped everything on a single server, exactly as we used to do things back in 2005.

    • OtomotO 4 hours ago

      It's just the hype moving on.

      Every generation has to make similar mistakes again and again.

      I am sure if we had the opportunity and the hype was there we would've used k8s in 2005 as well.

      The same thing is true for e.g. JavaScript on the frontend.

      I am currently migrating a project from React to HTMX.

      Suddenly there is no build step anymore.

      Some people were like: "That's possible?"

      Yes, yes it is and it turns out for that project it increases stability and makes everything less complex while adding the exact same business value.

      Does that mean that React is always the wrong choice?

      Well, yes, React sucks, but solutions like React? No! It depends on what you need, on the project!

      Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)

      • ajayvk 4 hours ago

        Along those lines, I am building https://github.com/claceio/clace for teams to deploy internal tools. It provides a Cloud Run type interface to run containers, including scaling down to zero. It implements an application server that runs containerized apps.

        Since HTMX was mentioned, Clace also makes it easy to build Hypermedia driven apps.

      • esperent 4 hours ago

        > Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job

        I think this is a gross misunderstanding of the complexity of tools available to carpenters. Use a saw. Sure, electric, hand powered? Bandsaw, chop saw, jigsaw, scrollsaw? What about using CAD to control the saw?

        > Suddenly there is no build step anymore

        How do you handle making sure the JS you write works on all the browsers you want to support? Likewise for CSS: do you use something like autoprefixer? Or do you just memorize all the vendor prefixes?

        • OtomotO 4 hours ago

          Htmx works on all browsers I want to support.

          I don't use any prefixed CSS and haven't for many years.

          Last time I did knowingly and voluntarily was about a decade ago.

      • augbog 4 hours ago

        It's actually kinda hilarious how RSC (React Server Components) is pretty much going back to what PHP was, but yeah, it proves your point: as hype moves on, people begin to realize why certain things were good vs not.

      • fud101 2 hours ago

        Where does Tailwind stand on this? You can use it without a build step, but a build step is strongly recommended in production.

        • fer 18 minutes ago

          A build step in your pipeline is fine because, chances are, you already have a build step in there.

    • harrall 2 hours ago

      People gravely misunderstand containerization and Docker.

      All it lets you do is put shell commands into a text file and be able to run it self-contained anywhere. What is there to hate?

      You still use the same local filesystem, the same host networking, still rsync your data dir, still use the same external MySQL server even if you want -- nothing has changed.

      You do NOT need a load balancer, a control plane, networked storage, Kubernetes or any of that. You ADD ON those things when you want them like you add on optional heated seats to your car.
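
      A minimal sketch of that "add nothing on" setup as a docker-compose.yml (the paths, MySQL hostname, and build context are placeholders):

      ```yaml
      services:
        app:
          build: .                       # the "shell commands in a text file" live in ./Dockerfile
          network_mode: host             # same host networking as before
          volumes:
            - ./data:/var/lib/app/data   # same local data dir you already rsync
          environment:
            DATABASE_URL: mysql://app@db.internal.example:3306/app   # same external MySQL server
          restart: unless-stopped
      ```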

    • valenterry 4 hours ago

      So, let's say you want to deploy server instances. Let's keep it simple and say you want to have 2 instances running. You want to have zero-downtime-deployment. And you want to have these 2 instances be able to access configuration (that contains secrets). You want load balancing, with the option to integrate an external load balancer. And, last, you want to be able to run this setup both locally and also on at least 2 cloud providers. (EDIT: I meant to be able to run it on 2 cloud providers. Meaning, one at a time, not both at the same time. The idea is that it's easy to migrate if necessary)

      This is certainly a small subset of what kubernetes offers, but I'm curious, what would be your goto-solution for those requirements?

      • bruce511 4 hours ago

        That's an interesting set of requirements though. If that is indeed your set of requirements then perhaps Kubernetes is a good choice.

        But the set seems somewhat arbitrary. Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

        Indeed given that you have 4 machines (2 instances, x 2 providers) could a human manage this? Is Kubernetes overkill?

        I ask this merely to wonder. Naturally if you are rolling out hundreds of machines you should use it, and no doubt by then you have significant revenue (and are thus able to pay for dedicated staff), but where is the cross-over?

        Because to be honest most startups don't have enough traction to need 2 servers, never mind 4, never mind 100.

        I get the aspiration to be large. I get the need to spend that VC cash. But I wonder if Devops is often just premature and that focus would be better spent getting paying customers.

        • valenterry 3 hours ago

          > Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

          I think the "2 cloud providers" criteria is maybe negotiable. Also, maybe there was a misunderstanding: I didn't mean to say I want to run it on two cloud providers. But rather that I run it on one of them but I could easily migrate to the other one if necessary.

          The zero-downtime one isn't. It's not necessarily so much about actually having zero-downtime. It's about that I don't want to think about it. Anything besides zero-downtime actually adds additional complexity to the development process. It has nothing to do with trying to be large actually.

          • AznHisoka 3 hours ago

            I disagree with that last part. By default, having a few seconds of downtime is not complex. The easiest thing you can do to a server is restart it. It's literally just a restart!

            • valenterry 3 hours ago

              It's not. Imagine there is a bug that stops the app from starting. It could be anything, from a configuration error (e.g. against the database) to a problem with warmup (if necessary) or any kind of other bug like an exception that only triggers in production for whatever reasons.

              EDIT: and worse, it could be something that just started and would even happen when trying to deploy the old version of the code. Imagine a database configuration change that allows the old connections to stay open until they are closed but prevents new connections from being created. In that case, even an automatic roll back to the previous code version would not resolve the downtime. This is not theory, I had those cases quite a few times in my career.

      • osigurdson 4 hours ago

        "Imagine you are in a rubber raft, you are surrounded by sharks, and the raft just sprung a massive leak - what do you do?". The answer, of course, is to stop imagining.

        Most people on the "just use bash scripts and duct tape" side of things assume that you really don't need these features, that your customers are ok with downtime and generally that the project that you are working on is just your personal cat photo catalog anyway and don't need such features. So, stop pretending that you need anything at all and get a job at the local grocery store.

        The bottom line is there are use cases, that involve real customers, with real money that do need to scale, do need uptime guarantees, do require diverse deployment environments, etc.

        • ozim 26 minutes ago

          You know that you can scale servers just as well; you can use good practices with scripts and bash deployments, and have them documented and in version control.

          Equating bash scripts and running servers with duct tape and poor engineering, versus k8s YAML being „proper engineering”, is just wrong.

        • QuiDortDine 3 hours ago

          Yep. I'm one of 2 DevOps engineers at an R&D company with about 100 employees. They need these services for development; if an important service goes down you can multiply that downtime by 100, turning hours into man-days and days into man-months. K8s is simply the easiest way to reduce the risk of having to plead for your job.

          I guess most businesses are smaller than this, but at what size do you start to need reliability for your internal services?

      • caseyohara 4 hours ago

        I think you are proving the point; there are very, very few applications that need to run on two cloud providers. If you do, sure, use Kubernetes if that makes your job easier. For the other 99% of applications, it’s overkill.

        Apart from that requirement, all of this is very doable with EC2 instances behind an ALB, each running nginx as a reverse proxy to an application server with hot restarting (e.g. Puma) launched with a systemd unit.

        • valenterry 3 hours ago

          Sorry, that was a misunderstanding. I meant that I want to be able to run it on two cloud providers, but one at a time is fine. It just means that it would be easy to migrate/switch over if necessary.

        • osigurdson 4 hours ago

          To me that sounds harder than just using EKS. Also, other people are more likely to understand how it works, can run it in other environments (e.g. locally), etc.

      • tootubular 4 hours ago

        My personal goto-solution for those requirements -- well 1 cloud provider, I'll follow up on that in a second -- would be using ECS or an equivalent service. I see the OP was a critic of Docker as well, but for me, ECS hits a sweet spot. I know the compute is at a premium, but at least in my use-cases, it's so far been a sensible trade.

        About the 2 cloud providers bit. Is that a common thing? I get wanting migrate away from one for another, but having a need for running on more than 1 cloud simultaneously just seems alien to me.

        • valenterry 3 hours ago

          Actually, I totally agree. ECS (in combination with secret manager) is basically fulfilling all needs, except being not so easy to reproduce/simulate locally and of course with the vendor lock-in.

      • shrubble 4 hours ago

        Do you know of actual (not hypothetical) cases, where you could "flip a switch" and run the exact same Kubernetes setups on 2 different cloud providers?

        • InvaderFizz 3 hours ago

          I run clusters on OKE, EKS, and GKE. Code overlap is like 99% with the only real differences all around ingress load balancers.

          Kubernetes is what has provided us the abstraction layer to do multicloud in our SaaS. Once you are outside the k8s control plane, it is wildly different, but inside is very consistent.

        • threeseed 4 hours ago

          Yes. I've worked on a number of very large banking and telco Kubernetes platforms.

          All used multi-cloud and it was about 95% common code with the other 5% being driver style components for underlying storage, networking, IAM etc. Also using Kind/k3d for local development.

        • devops99 4 hours ago

          Both EKS (Amazon) and GKE (Google Cloud) run Cilium for the networking part of their managed Kubernetes offerings. That's the only real "hard part". From the user's point of view, the S3 buckets, the network-attached block devices, and the compute (CRI-O container runtime) are all the same.

          If you are using some other cloud provider or want uniformity, there's https://Talos.dev

        • hi_hi 4 hours ago

          Yes, but it would involve first setting up a server instance and then installing k3s :-)

          • valenterry 2 hours ago

            I actually also think that k3s probably comes closest to that. But I have never used it, and ultimately it also uses k8s.

      • whatever1 4 hours ago

        Why does a startup need zero-downtime-deployment? Who cares if your site is down for 5 seconds? (This is how long it takes to restart my Django instance after updates).

        • valenterry 3 hours ago

          Because it increases development speed. It's maybe okay to be down for 5 seconds. But if I screw up, I might be down until I fix it. With zero-downtime deployment, if I screw up, then the old instances are still running and I can take my time to fix it.

      • kccqzy 3 hours ago

        I've worked at tiny startups before. Tiny startups don't need zero-downtime-deployment. They don't have enough traffic to need load balancing. Especially when you are running locally, you don't need any of these.

        • anon7000 3 hours ago

          Tiny startups can’t afford to lose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

          Tiny startups are rarely trying to build products for small customer bases (i.e. little scaling required). They’re trying to be the next unicorn. So they should probably make sure they can easily scale away from tossing everything on the same server.

          • lmm 2 hours ago

            > Tiny startups can’t afford to loose customers because they can’t scale though, right? Who is going to invest in a company that isn’t building for scale?

            Having too many (or too big) customers to handle is a nice problem to have, and one you can generally solve when you get there. There are a handful of giant customers that would want you to be giant from day 1, but those customers are very difficult to land and probably not worth the effort.

          • jdlshore 2 hours ago

            Startups need product-market fit before they need scale. It’s incredibly hard to come by and most won’t get it. Their number one priority should be to run as many customer acquisition experiments as possible for as little as possible. Every hour they spend on scale before they need it is an hour less of runway.

            • lkjdsklf an hour ago

              While true, zero-downtime deployments are... trivial... even for a tiny startup, so you might as well do it.

      • rozap 2 hours ago

        A script that installs some dependencies on an Ubuntu VM. A script that rsyncs the build artifact to the machine. The script drains connections and restarts the service using the new build, then moves on to the next VM. The cloud load balancer points at those VMs and has a health check. It's very simple. Nothing fancy.

        Our small company uses this setup. We migrated from GCP to AWS when our free GCP credits from YC ran out and then we used our free AWS credits. That migration took me about a day of rejiggering scripts and another of stumbling around in the horrible AWS UI and API. Still seems far, far easier than paying the kubernetes tax.

        • valenterry 2 hours ago

          I guess the cloud load balancer is the most custom part. Do you use the ALB from AWS?

      • amluto 4 hours ago

        For something this simple, multi-cloud seems almost irrelevant to the complexity. If I’m understanding your requirements right, a deployment consists of two instances and a load balancer (which could be another instance or something cloud-specific). Does this really need fancy orchestration to launch everything? It could be done by literally clicking the UI to create the instances on a cloud, and by literally running three programs to deploy locally.

      • CharlieDigital 4 hours ago

        Serverless containers.

        Effectively using Google and Azure managed K8s. (Full GKE > GKE Autopilot > Google Cloud Run). The same containers will run locally, in Azure, or AWS.

        It's fantastic for projects big and small. The free monthly grant makes it perfect for weekend projects.

      • lkjdsklf 3 hours ago

        We’ve been deploying software like this for a long ass time before kubernetes.

        There’s shitloads of solutions.

        It’s like minutes of clicking in a ui of any cloud provider to do any of that. So doing it multiple times is a non issue.

        Or automate it with like 30 lines of bash. Or chef. Or puppet. Or salt. Or ansible. Or terraform. Or or or or or.

        Kubernetes brings in a lot of nonsense that isn’t worth the tradeoff for most software.

        If you feel it makes your life better, then great!

        But there’s way simpler solutions that work for most things

        • valenterry 2 hours ago

          I'm actually not using kubernetes because I find it too complex. But I'm looking for a solution for that problem and I haven't found one, so I was wondering what OP uses.

          Sorry, but I don't want to "click in a UI". And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

          • lkjdsklf an hour ago

            > And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

            Maybe not literally 30... I didn't bother actually writing it. Also, bash was just a single example; it's even less Terraform code to do the same thing. You just need an ELB backed by an autoscaling group. That's not all that much to set up, and it gets you the two load-balanced servers and zero-downtime deploys. When you want to deploy, you just create a new scaling group and launch configuration, attach them to the ELB, and ramp down the old one. Easy peasy. For the secrets, you need at least KMS, and maybe Secrets Manager if you're feeling fancy; that's not much to set up either. I know for sure AWS and Azure provide nice CLIs that would let you do this in not that many commands, or just use Terraform.

            Personally if I really cared about multi cloud support, I'd go terraform (or whatever it's called now).

            • valenterry 38 minutes ago

              > You just need an ELB backed by an autoscaling group

              Sure, and then you can neither 1.) test your setup locally nor 2.) easily move to another cloud provider. So that doesn't really fit what I asked.

              If they answer is "there is nothing, just accept the vendor lock-in" then fine, but please don't reply with "30 lines of bash" and make me have expectations. :-(

    • sobellian 4 hours ago

      I've worked at a few tiny startups, and I've both manually administered a single server and run small k8s clusters. k8s is way easier. I think I've spent 1, maybe 2 hours on devops this year. It's not a full-time job, it's not a part-time job, it's not even an unpaid internship. Perhaps at a bigger company with more resources and odd requirements...

      • nicce 3 hours ago

        But how much extra does this cost? It sounds like you are using cloud-provided k8s.

        • sobellian 2 hours ago

          EKS is priced at $876 / yr / cluster at current rates.

          Negligible for me personally, it's much less than either our EC2 or RDS costs.

          • fer 10 minutes ago

            Yeah, using EKS isn't the same thing as "administering k8s", unless I misread you above. Actual administration is already done for you, it's batteries included, turn-key, and integrated with everything AWS.

            A job ago we had our own k8s cluster in our own DC, and it required a couple of teams to keep running and reasonably integrated with everything else in the rest of the company. It was probably cheaper overall than cloud given the compute capacity we had, but also probably not by much given the amount of people dedicated to it.

            Even my 3-node k3s at home requires more attention than what you described.

    • pclmulqdq 4 hours ago

      The attraction of this stuff is mostly the ability to keep your infrastructure configuration as code. However, I have previously checked in my systemd config files for projects and set up a script to pull them on new systems.

      It's not clear that docker-compose or even kubernetes* is that much more complicated if you are only running 3 things.

      * if you are an experienced user
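
      For the "only running 3 things" case, the docker-compose version really is short. A sketch with placeholder images and ports:

      ```yaml
      services:
        web:
          image: registry.example.com/web:latest    # placeholder
          ports: ["443:8443"]
          depends_on: [api]
        api:
          image: registry.example.com/api:latest    # placeholder
          environment:
            DATABASE_URL: postgres://app@db:5432/app
          depends_on: [db]
        db:
          image: postgres:16
          volumes:
            - dbdata:/var/lib/postgresql/data
      volumes:
        dbdata:
      ```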

      • honkycat 4 hours ago

        Having done both: running a small Kubernetes cluster is simpler than managing a bunch of systemd files.

        • worldsayshi 6 minutes ago

          Yeah this is my impression as well which makes me not understand the k8s hate.

    • santoshalper 4 hours ago

      As an industry, we spent so much time sharpening our saw that we nearly forgot to cut down the tree.

    • rozap 2 hours ago

      ZIRP is over.

    • honkycat 4 hours ago

      Start-ups that don't need to scale will quickly go away, because how else are you going to make a profit?

      How have you been going since 2005 and still don't understand the economics of software?

      • Vespasian 2 hours ago

        Just to make it clear: There are a million use cases that don't involve scaling fast.

        For example B2B businesses where you have very few but extremely high value customers for specialized use cases.

        Another one is building bulky hardware. Your software infrastructure does not need to grow any faster than your shop floor is building it.

        Whether you want to call that a "startup" is up for debate (and mostly semantics if you ask me), but at one point they were all zero-employee companies and needed to survive their first 5 years.

        In general you won't find their products on the app store.

      • ndriscoll 3 hours ago

        CPUs are ~300x more powerful and storage offers ~10,000x more IOPS than 2005 hardware. More efficient server code exists today. You can scale very far on one server. If you were bootstrapping a startup, you could probably plan to use a pair of gaming PCs until at least the first 1-10M users.

  • philbo 39 minutes ago

    We migrated to Cloud Run at work last year and there were some gotchas that other people might want to be aware of (a rough sketch of the corresponding service settings follows the list):

    1. Long-running TCP connections. By default, Cloud Run terminates inbound TCP connections after 5 minutes. If you're doing anything that uses a long-running connection (e.g. a websocket), you'll want to change that setting; otherwise you will have weird bugs in production that nobody can reproduce locally. The upper limit on connections is 1 hour, so you will need some kind of reconnection logic on clients if you're running longer than that.

    Ref: https://cloud.google.com/run/docs/configuring/request-timeou...

    2. First/second generation. Cloud Run has 2 separate execution environments that come with tradeoffs. First generation emulates Linux (imperfectly) and has faster cold starts. Second generation runs on actual Linux and has faster CPU and faster network throughput. If you don't specify a choice, it defaults to first generation.

    Ref: https://cloud.google.com/run/docs/about-execution-environmen...

    3. Autoscaling. Cloud Run autoscales at 60% CPU and you can't change that parameter. You'll want to monitor your instance count closely to make sure you're not scaling too much or too little. For us it turned out to be more useful to restrict scaling on request count, which you can control in settings.

    Ref: https://cloud.google.com/run/docs/about-instance-autoscaling
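
    For reference, here is roughly where those three settings live in a Cloud Run service spec (deployable with `gcloud run services replace service.yaml`). The service name, image, and values are placeholders, not our production settings:

    ```yaml
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: my-service                                       # placeholder
    spec:
      template:
        metadata:
          annotations:
            run.googleapis.com/execution-environment: gen2   # (2) opt into the second generation
            autoscaling.knative.dev/maxScale: "20"           # (3) cap the instance count
        spec:
          timeoutSeconds: 3600                               # (1) raise the request timeout, up to 1 hour
          containerConcurrency: 80                           # (3) requests per instance, for request-based scaling
          containers:
            - image: gcr.io/my-project/my-app                # placeholder
    ```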

  • solatic 2 hours ago

    Cloud Run is fine if you're a small startup and you're thinking about your monthly bill in three-figure or even four-figure terms.

    Like most serverless solutions, it does not permit you to control egress traffic. There are no firewall controls exposed to you, so you can't configure something along the lines of "I know my service needs to connect to a database, that's permitted, all other egress attempts are forbidden", which is a foundational component of security architecture that understands that getting attacked is a matter of time and security is something you build in layers. EDIT: apparently I'm wrong on Cloud Run not being deployable within a VPC! See below.

    GCP and other cloud providers have plenty of storage products that only work inside a VPC. Cloud SQL. Memorystore. MongoDB Atlas (excluding the expensive and unscalable serverless option). Your engineers are probably going to want to use one or some of them.

    Eventually you will need a VPC. You will need to deploy compute inside the VPC. Managed Kubernetes solutions make that much easier. But 90% of startups fail, so 95% of startups will fail before they get to this point. YMMV.

    • jedi3335 2 hours ago

      Cloud Run has had network egress control for a while: https://cloud.google.com/run/docs/configuring/vpc-direct-vpc

      • solatic 2 hours ago

        Nice, I didn't know about this, it wasn't available last time I checked.

        With that said... there are so many limitations on that list, that seriously, I can't imagine it would really be so much easier than Kubernetes.

    • bspammer 2 hours ago

      I’m surprised Cloud Run doesn’t let you do this. You can put an AWS lambda in a VPC no problem.

  • semitones 4 hours ago

    Kubernetes has a steep learning curve, and certainly a lot of complexity, but when used appropriately for the right case, by god it's glorious

    • threeseed 4 hours ago

      Kubernetes has a proportional learning curve.

      If you're used to managing platforms e.g. networking, load balancers, security etc. then it's intuitive and easy.

      If you're used to everything being managed for you then it will feel steep.

      • t-writescode 2 hours ago

        I think this is only true if the original k8s cluster you're operating against was written by an expert and laid out as such.

        If you're entering into k8s land with someone else's very complicated mess across hundreds of files, you're going to be in for a bad time.

        A big problem, I feel, is that if you don't have an expert design the k8s system from the start, it's just going to be a horrible time; and, many people, when they're asked to set up a k8s setup for their startup or whatever, aren't already experts, so the thing that's produced is not maintainable.

        And then everyone is cursed.

      • alienchow 3 hours ago

        That's pretty much it. I think the main issue nowadays is that companies think full-stack engineering means the OG full stack (FE, BE, DB) + CI/CD + infra + security compliance + SRE.

        If a team of 5-10 SWEs has to do all of that while only being graded on feature releases, k8s would massively suck.

        I also agree that experienced platform/infra engineers tend to whine less about k8s.

      • ikiris 2 hours ago

        Nah, managing k8s and managing the system it was based on are VASTLY different experiences. K8s is much harder than it needs to be because for a long time there wasn't tooling to manage it well. Going from Google-internal tooling to k8s is incredibly painful.

    • jauntywundrkind 3 hours ago

      And there are very few investment points below it.

      You can cobble together your own unique special combination of services to run apps on! It's an open ended adventure into itself!

      I'm all for folks doing less, if it makes sense! But below Kubernetes there's basically nothing except slapping the bits together yourself and convincing yourself your unique home-made system is fine. You'll be going it alone, figuring things out on the fly, all to save yourself from getting good at the one effort that has a broad community, plenty of practitioners, and massive extensibility via CRDs and operators.

  • czhu12 4 hours ago

    I'm not sure Google Cloud Run can be considered a fair comparison to Kubernetes. It would be like saying AWS Lambda is a lot easier to use than EC2. I've used both Kubernetes and GCR at the current company I cofounded, and there's pros and cons to both. (Team of about 10 engineers)

    GCR was simple for running simple workloads, but an out-of-the-box Postgres database can't handle unlimited connections, so connecting to it from GCR without a DB connection proxy like PgBouncer risks exhausting the connection pool. For a traditional web app at any moderate scale, you typically need some fine-grained control over per-process, per-server, and per-DB connection pools, which you'd lose with GCR.

    Also, to take advantage of GCR's fine-grained CPU pricing, you need an app that boots extremely quickly, so it can be turned off during periods of inactivity and rescheduled when a request comes in.

    Most of our heaviest workloads run on Kubernetes for those reasons.

    The other thing that's changed since this author probably last explored Kubernetes is that there are a ton of providers now that offer a Kubernetes control plane at no cost. The ones I know of are DigitalOcean and Linode, where the pricing for a Kubernetes cluster is the same as their droplet pricing for the same resources. That didn't use to be the case. [1] The cheapest you can get is a $12/month, fully featured cluster on Linode.

    I've been building, in my spare time, a platform that tries to make Kubernetes more usable for single developers: https://canine.sh, based on my learnings that the good parts of Kubernetes are actually quite simple to use and manage.

    [1] DigitalOcean's pricing page references its free control plane offering: https://www.digitalocean.com/pricing

    • igor47 3 hours ago

      Why are GCR and pgbouncer incompatible? Could you run a pgbouncer instance in GCR?

      • seabrookmx 2 hours ago

        GCR assumes its workload is HTTP, or a "job" (a container that exits once its task has completed). It scales on request volume and CPU, and the load balancer is integrated into the service. It's not obvious to me how you'd even run a "raw" TCP service like pgbouncer on it.

      • czhu12 3 hours ago

        I’m not an expert, but from what I understand, the standard setup is something like:

        4x(Web processes) -> 1x(pgbouncer) -> database

        This ensures that the pgbouncer instance is effectively multiplexing all the connections across your whole fleet.

        In each individual web process, you can have another shared connection pool.

        This is how we set it up.
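
        For anyone curious, the pgbouncer side of that diagram is only a handful of lines of config. This is just a sketch with made-up hosts and paths; transaction pooling is the usual choice for web apps:

          [databases]
          appdb = host=10.0.0.5 port=5432 dbname=appdb

          [pgbouncer]
          listen_addr = 0.0.0.0
          listen_port = 6432
          auth_type = md5
          auth_file = /etc/pgbouncer/userlist.txt
          pool_mode = transaction
          max_client_conn = 1000   ; many app connections fan in...
          default_pool_size = 20   ; ...but only this many reach Postgres per db/user pair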

  • OtomotO 4 hours ago

    Right.

    Depending on my client's needs we do it oldschool and just rent a beefy server.

    Using your brain to actually assess a situation, and deciding without any emotional or monetary attachment to a specific solution, works like a charm most of the time.

    I also had customers who run their own cloud based on k8s.

    And I heard some people have customers that are on a public cloud ;-)

    Choose the right solution for the problem at hand.

  • gavindean90 4 hours ago

    K8s always seems like the tool that people choose to avoid cloud vendor lock-in, but there is something to be said for k8s lock-in as well, as the article points out.

    If you end up with exotic networking or file system mounts, you can be stuck maintaining k8s forever, and some updates aren't so stable, so you have to be more vigilant than with Windows updates.

    • osigurdson 4 hours ago

      I don't think it makes sense to conflate vendor lock-in with taking a dependency on a given technology. Do we then have "Linux lock-in" and "Postgres lock-in"? The term "lock-in" shouldn't be stretched to cover this concept imo.

  • ribadeo an hour ago

    One can avoid container orchestration by avoiding the trend of containerizing your app. It wastes system resources, provides half-baked replicas of OS services, and reduces overall security while simultaneously making networking a total PITA.

    Your cloud provider is already divvying up a racked server into your VPSes via a hypervisor; then you install an OS on your pretend computer.

    While I can see how containerized apps provide a streamlined devops solution for rare, hard-to-configure software that only runs on Acorn OS 0.2.3, it should never be the deployment solution for a public-facing production web service.

    Horses for courses.

  • osigurdson 4 hours ago

    The "You Probably Don't Either" is a little presumptuous. Many projects probably don't need cloud run either. Certainly, many projects shouldn't even be started in the first place.

    • stitched2gethr 4 hours ago

      I work with Kubernetes enough that I would answer to the title "kubernetes developer", and I would recommend you don't use Kubernetes, in the same way I would recommend you don't use a car when you can walk.

      Your friend lives 1/8 of a mile away. You go to see them every day, so why wouldn't you drive? Well, cars are expensive and you should avoid them if you don't need them. There are a TON of downsides to driving a car 1/4 of a mile every day. And there are a TON of benefits to using a car to drive 25 miles every day.

      I hate to quash a good debate, but this all falls under the predictable but often forgotten "it depends". Usually, "do you need Kubernetes" == "do you have a lot of shit to run".

  • figmert 15 minutes ago

    This article kinda reads like "I didn't need HA, and you probably don't either". HA isn't necessary, but if you want a reliable system that stays online without you being woken up at 3am (assuming you even have alerts at that point), you're better off with HA.

    Similarly, you don't need Kubernetes, but if you want something that makes developers' lives easier, gives you a single API, and has many, many integrations and lots of tooling, then you're better off with K8s. Sure, you can go with VMs, but now you have to scale and manage your application at a per-VM level instead of per container. You have to think about a lot of cloud-specific services, network policies, IAM, I don't know what else, scaling.

    I guess what I'm saying is: you always have the option of writing in assembly, but why would you when you can have a higher-level language that abstracts most of it away? Yes, the maintenance burden on the devops/platform team is higher, but it's so much easier for users of the platform to use said platform.

  • threeseed 4 hours ago

    Kubernetes lock-in: bad.

    Google CloudRun, Database, PubSub, Cloud Storage, VPC, IAM, Artifact Registry etc lock-in: good.

  • oron 2 hours ago

    I just use a single k3s install on a single bare-metal server from Hetzner or OVH. It works like a charm: very clean deployments, much more stable than docker-compose, and 1/10 the cost of AWS or similar.
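
    For anyone who wants to try this, the single-node bootstrap is roughly two commands (assuming a fresh Ubuntu/Debian box; check the k3s docs for the current flags):

      # install k3s as a single-node cluster (server and agent on one machine)
      curl -sfL https://get.k3s.io | sh -

      # point kubectl/helm at the kubeconfig k3s generates
      export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
      kubectl get nodes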

    • usrme 5 minutes ago

      Do you have a write-up about this that you could share, even if it's someone else's? I'd be curious to try this out.

  • rcleveng 38 minutes ago

    To me the most important quote in the whole write-up is this: "Our new stack is boring." When you are creating a solution to a problem, boring is good. Strive to be boring. Only be exciting when you must.

  • hi_hi 2 hours ago

    I'm running Kubernetes (actually k3s with Helm, but that counts, right?) on a ludicrously old and underpowered Ubuntu thin client that's about 10 years old:

    - https://www.parkytowers.me.uk/thin/hp/t620/

    I didn't _need_ to, and it was a learning curve to set up that had me crying into my whisky some nights, but it's been rock-solidly running my various media-server and development services for the past few years with no issues.

    Sure, it's basically a fancy wrapper around a bunch of Docker containers, and I use hardly any of the features k8s brings to the party, but your cold hard logic won't win over the warm and fuzzy feelings I get knowing I did something stupid and it works!

  • jeswin 4 hours ago

    Docker Swarm would have worked for 98.5% of all users (how k8s won over Swarm should be a case study). And Kamal, or something like it, would work for 88.25% of all users.

  • seabrookmx 2 hours ago

    We dabbled with Cloud Run and Cloud Functions (which as of v2 are just a thin layer over Cloud Run anyways).

    While they worked fine for HTTP workloads, we wanted to use them to consume from Pub/Sub, and unfortunately the Eventarc integrations are all HTTP-push based. This means there's no back pressure: if you want the subscription to buffer incoming messages while your daemons work away, there's no graceful way to do it. The push subscription will just ramp up, effectively DoSing your Cloud Run service.

    GKE is more initial overhead (helm and all that junk) but using a vanilla, horizontally scaled Kubernetes deployment with a _pull_ subscription solves this problem completely.
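
    For comparison, the pull side really is boring. A rough sketch with the Python Pub/Sub client (project and subscription names are placeholders); the FlowControl settings are what give you back pressure, since anything beyond the limit just stays buffered in the subscription:

      from google.cloud import pubsub_v1

      subscriber = pubsub_v1.SubscriberClient()
      subscription = subscriber.subscription_path("my-project", "my-subscription")

      def handle(message):
          # do the actual work here, then ack; message.nack() would trigger redelivery
          print(message.data)
          message.ack()

      # Pull at most 50 messages into this worker at a time.
      flow_control = pubsub_v1.types.FlowControl(max_messages=50)
      future = subscriber.subscribe(subscription, callback=handle, flow_control=flow_control)
      future.result()  # block forever; the deployment restarts the pod if it crashes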

    For us, Cloud Run is just a bit too opinionated and GKE Autopilot seems to be the sweet spot.

  • thefz 37 minutes ago

    Five years ago, everyone and their uncle would swear that Kubernetes was the new way of doing things and that by not jumping on it you were missing out and potentially harming your career.

  • wetpaste an hour ago

    RE: slow autoscaling

    Maybe the cloud companies could do something here by always keeping a small pool of machines online and ready to join the cluster, provided the end user accepts some compromise on their configuration. I guess it doesn't solve image pulling, though. Pre-warming nodes is an annoying problem to solve.

    The best solution I've been able to come up with is Spegel (lightweight p2p image caching) + Karpenter (dynamic node autoscaling) + low-priority pods to hold onto some extra nodes. It's not perfect, though.
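
    The low-priority pods trick, for anyone who wants the shape of it, is just a negative-priority PriorityClass plus a deployment of pause containers sized to reserve headroom; real workloads preempt them and the autoscaler then backfills a new node. A sketch (names and sizes are made up):

      apiVersion: scheduling.k8s.io/v1
      kind: PriorityClass
      metadata:
        name: overprovisioning
      value: -10
      description: "Placeholder pods that any real workload may preempt"
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: overprovisioning
      spec:
        replicas: 2
        selector:
          matchLabels: {app: overprovisioning}
        template:
          metadata:
            labels: {app: overprovisioning}
          spec:
            priorityClassName: overprovisioning
            containers:
              - name: pause
                image: registry.k8s.io/pause:3.9
                resources:
                  requests: {cpu: "1", memory: 2Gi}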

  • JohnMakin 3 hours ago

    I don't understand how these posts exist when much of my consulting and career for the last few years has been with companies that set up a bare-bones, straight-out-of-the-manual EKS/GCP solution and essentially let it sit untouched for 3+ years until it got to a crisis point. To me as a systems engineer that is nuts, and a testament to how good this stuff is when you get it even kind of right. Of course, I'm referring to managed systems. Doing Kubernetes from scratch I would not dream of.

  • mitjam 36 minutes ago

    I believe Cloud Run is based on Knative, which runs on Kubernetes. Thus, you're still running on Kubernetes; it's just abstracted away from you.

  • ChrisArchitect 4 hours ago

    Related:

    Dear friend, you have built a Kubernetes

    https://news.ycombinator.com/item?id=42226005

    • doctorpangloss 4 hours ago

      People don’t want solutions, they want holistic experiences. Kubernetes feels bad and a pile of scripts feels good. Proudly declaring how much you buy into feelings feels even better!

  • siliconc0w an hour ago

    The downside to Cloud Run is you don't get a disk. If I could get a persistent disk attached at runtime to each instance, it'd be a lot more compelling.

  • lmm 2 hours ago

    So how does this person link up the different parts that go into deploying a service? Like, yes, you can have a managed database, and you can have a managed application deployment (via Cloud Run), and you can have a pub-sub messaging queue, and you can have a domain name. But where's the part where you tie these all together and say that service A is made up of these pieces and service B of those? Just manually?

  • devops99 4 hours ago

    These anti-Kubernetes articles are a major signal that the competency crisis is very real.

  • PohaJalebi an hour ago

    I like how the author says that Kubernetes has vendor lock-in, yet suggests a GCP-managed service as their preferred alternative.

  • hinkley 4 hours ago

    I’m having to learn kubernetes to improve my job hunt. It’s really too much for most projects. Cosplaying doesn’t make you money.

  • tracerbulletx 3 hours ago

    Cool, you left Kubernetes for a more locked-in abstraction around Kubernetes-like automation?

  • politelemon 3 hours ago

    If you're looking to run a few containers, you may also want to look at Docker Swarm itself. You get some of the benefits of orchestration with a small, manageable overhead. And it's just part of Docker.
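
    For anyone who hasn't touched it, getting a Swarm going really is about two commands (the stack name and compose file here are whatever you already have):

      # turn the current Docker host into a single-node swarm manager
      docker swarm init

      # deploy the services described in an existing compose file as a stack
      docker stack deploy -c docker-compose.yml mystack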

  • gatnoodle 4 hours ago

    "In practice, few companies switch providers unless politics are involved, as the differences between major cloud services are minimal."

    This is not always true.

  • ofrzeta 2 hours ago

    so, what do you all think about CloudFoundry? :)

  • sneak 4 hours ago

    Except that all of this subjects you and all of your workloads to warrantless US government surveillance due to running in the big public clouds.

    I personally don’t want the federal government being able to peek into my files and data at any time, even though I’ve done nothing wrong. It’s the innocent people who have most to lose from government intrusion.

    It seems insane to me to just throw up one’s hands and store private data in Google or Amazon clouds, especially when not doing so is so much cheaper.

    • shlomo_z 2 hours ago

      I've never heard of this. Can you share a link of what you are referring to? I care about this.

  • coding123 2 hours ago

    Cloud Run and K8s are not in the same space. One exists to make the infra generic; Cloud Run ONLY works on GCP.

  • honkycat 4 hours ago

    are we really still doing this lol?

    >> Kubernetes is feature-rich, yet these “enterprise” capabilities turned even simple tasks into protracted processes.

    I don't agree. After learning the basics I would never go back. It doesn't turn simple tasks into a long process. It massively simplifies a ton of functionality. And you really only need to learn 4 or 5 new concepts to get it off the ground.

    If you have a simple website you don't need Kubernetes, but 99% of devs are working in medium-sized shops where multiple teams work across multiple functional areas, and Kubernetes helps with this.

    Karpenter is not hard to set up at all. It solves the problem of over-provisioning out of the box, and has for almost 5 years.

    It's like writing an article: "I didn't need redis, and you probably don't either" and then talking about how Redis isn't good for relational data.

    • osigurdson 3 hours ago

      The "...You probably don't either" part is where the argument loses all of its weight. How do they know what I or anyone else needs?

  • dhorthy 4 hours ago

    i mean control loops are good but you don't need hundreds of them