Dear friend, you have built a Kubernetes (2024)

(macchaffee.com)

90 points | by Wingy 3 days ago

120 comments

  • cortesoft 9 hours ago

    This is obviously slightly exaggerated, but I do feel like this whenever people dismiss Kubernetes as either too complicated or not needed.

    The response I always got when suggesting Kubernetes is "you can do all those things without Kubernetes"

    Sure, of course. There are a million different ways to do everything Kubernetes does, and some of them might be simpler or fit your use case more perfectly. You can make different decisions for each choice Kubernetes makes, and maybe your decisions are more perfect for your workload.

    However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.

    • chillfox 5 minutes ago

      lol, the big problem with kubernetes is that none of the choices have been made; it's not opinionated at all, and there are no conventions. It's all configuration and choices all the way down. There's way too much yaml, and way too many choices for every tiny component. It's just too much.

      I do run a k3s cluster for home stuff... But I really wish I could get what it provides in a much simpler solution.

      My dream solution would effectively do the same as k3s + storage, but with a much simpler config, zero yaml, zero choices for components, and very limited configuration options; it should just do the right thing by default. Storage (both volume and s3), networking, scale to zero, functions, jobs, ingress, etc. should all just be built in.

    • 28304283409234 2 hours ago

      Kubernetes is a complicated solution to a complicated problem. A lot of companies have different problems and should look for different solutions. But if you are facing this particular problem, Kubernetes is the way to go. The trick is to understand which problem you are facing.

      • analyte123 11 minutes ago

        Kubernetes can be a sign that you are making things more complicated than they should be, too early. But if you actually have made things complicated enough (whether through essential or accidental complexity) that you have problems that k8s is good at solving, I really hope you have it instead of some hand-rolled solution.

        I feel the same way about commercial APM tools. Obviously in a perfect world, you would have software so simple and fast that they’re unnecessary. Maybe every month or two someone has to grep some logs that are already in place. Once you’ve gotten yourself into a situation where this is obviously not true, having Datadog, New Relic or similar set up (or using k8s instead of 100 unversioned shell scripts by someone who doesn’t work there anymore) will make your inevitable distributed microservice snafu get resolved in hours rather than in a longer, business-risking period.

    • throwaway041207 9 hours ago

      Agree. For years I had developed my own preferred way of deploying Rails apps large and small on VMs: haproxy, nginx, supervisord, ufw, the actual deploy tooling (capistrano and other alternatives) and so on... and if those tools are old or defunct now it's because my knowledge of that world basically halted 8 years ago because I've never had to configure anything but k8s since then.

      I've used it every day since then so I have the luxury of knowing it well. So the frustrations that the new or casual user may have are not the same for me.

    • baby_souffle 8 hours ago

      > However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.

      Yep! I am now using k8s even for small / 'single purpose' clusters just so I can keep renovate/argo/flux in the loop. Yes, I _could_ wire renovate up to some variables in a salt state or chef cookbook and merge that to `main` and then have the chef agent / salt minion pick up the new version(s) and roll them out gradually... but I don't need to, now!

    • zmmmmm 3 hours ago

      > all of those choices have been made and agreed upon

      Have they really? I have a few apps deployed on k8s and I feel like every time I need something, it turns out it doesn't do that and I'm into some exotic extension or plugin type ecosystem.

      Something as simple as service autoscaling (this was a few years ago) was an adventure into DIY. Moving from google cloud to AWS was a complete writeoff almost - just build it again.

      I'm sure it captures some layer of abstraction that's useful but my personal experience is it seems very thin and elusive.

      • chaos_emergent 3 hours ago

        I wouldn’t really call it “DIY” per se; k8s has the resource API and you can create whatever scaling policies you want with it, but I do see how that’s not obvious when it’s advertised as ‘batteries included’.

    • ohNoe5 3 hours ago

      Ephemeral user accounts were agreed upon before that. The OG container.

      Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user.

      We may be headed back there. Have seen some experiments leveraging the Linux kernel's BPF and sched_ext to fire off just the right-sized compute schedule in response to sequences of specific BPF events.

      Future "containers" may just be kernel processes and threads... again. Especially if enough human agency looks away from software as AI makes employment for enough people untenable. Why would those who remain want to manage kernels and k8s complexity?

      Imo it's less that we agreed on k8s specifically and more that we agreed to let people use all the free money to develop whatever was believed to make the job easier; but if the jobs go away, then it's just more work for the few left

      • majormajor an hour ago

        > Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user.

        Docker, yes, but kubernetes is way more than that the instant you have more than one physical machine node. (If you only have one node in any deploy, sure, it's likely overkill, but that seems like a weird enough case to not be worth too much ink.)

        If you silently replaced all my container images with VM images and nodes running containers with nodes running VMs, I think the vast majority of all my Kubernetes setup would be essentially unchanged. Heck, replace it all with people with hands on keyboard in a datacenter running around frantically bringing up new physical servers, slapping hard drives in them, and re-configuring the network, and I don't think the user POV of how to describe it would change that much.

        • foobarian an hour ago

          > nodes running VMs,

          huh, but how would bursting work then? Do VMs support it nowadays?

          • majormajor an hour ago

            I've seen some places advertise it but I have not tried it.

            But, honestly, more generally in my head I wasn't thinking much about it, since I consider that more of a "cost optimization" thing than a "core kubernetes function." E.g. the addition (or not) of limits is just a couple of lines, compared to all the rest of the stuff that I'd be managing the specification of (replicas, environment, resource baseline, scheduling constraints, deployment mode...) that would translate seamlessly.

            (And there are a lot of parts of kubernetes that annoy me, especially around the hoops it puts up to customize certain things if you reaalllly actually need to, but it would never cross my mind in a hundred years to characterize it as just a wrapper around cgroups etc like the OP.)

      • xyzzy_plugh 2 hours ago

        Something often underappreciated is that, in the possible future you're describing, you can use all of these new fangled "what's old is new again" approaches by continuing to just use Kubernetes. Kubernetes is, in a way, designed to replace itself.

        • ohNoe5 an hour ago

          Kubernetes is software. It cannot do anything "itself", let alone "replace itself". Don't anthropomorphize software.

          Inevitably it will be a human replacing it with whatever they decide is the best method

    • ablob 9 hours ago

      I just feel like "you can do this with Kubernetes" is a slippery slope. "You can do X with Y, so use Y" is a great way to add a dependency, especially if it is "community vetted" already. Sometimes simple is better: you don't need to add something that implements some of your logic as a dependency just to stay DRY or whatever you want to call it.

      It really feels like we are drowning in self-imposed tech debt and keep adding layers to try and hold it for just a while longer. Now that being said, there is no reason not to add Kubernetes once a sufficient overlap is achieved.

      • cortesoft 7 hours ago

        Kubernetes handles so many layers you are going to need for every app, though… deployments, networking, cert management, monitoring, logging, server maintenance, horizontal scaling… this isn’t a slippery slope, it is just what you need.

      • echelon 8 hours ago

        You can use k8s on $2/mo DigitalOcean projects. It probably even works on the free tier of a lot of providers.

        And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed, which has the benefit of easy version control.

        I don't get why people are so bent on hating Kubernetes. The mental cost of deploying a 6-line deployment yaml is less than futzing around with FTP and nginx.
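
        To be concrete, a minimal Deployment really is in that ballpark. A hedged sketch, with an illustrative image name:

        ```yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp
        spec:
          replicas: 2
          selector:
            matchLabels: {app: myapp}
          template:
            metadata:
              labels: {app: myapp}
            spec:
              containers:
                - name: myapp
                  image: myapp:1.0  # illustrative image name
                  ports:
                    - containerPort: 8080
        ```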

        Kube is the new LAMP stack. It's easier too. And portable.

        If you're talking managed kube vs. one where you're taking on the responsibility of self-managing, sure. But that's no different than self-managing your stack in the old world. Suddenly you have to become a Sysadmin/SRE.

        • throwawaypath 8 hours ago

          >And portable.

          This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. GKE, EKS, AKS, OCP, etc., portability between them is nowhere near guaranteed.

          • cortesoft 6 hours ago

            It is if you stick to standard Kubernetes resources, and it has gotten even easier with better storage class and load balancer support. All of the cloud providers now give you default storage classes and ingresses when you provision a cluster on them, so you can use the exact same deployment on any of them and automatically get those things provisioned in the right way out of the box.

            • throwawaypath 6 hours ago

              >It is if you stick to standard Kubernetes resources

              "If you stick to standard C..."

              No one does, that's the issue. Helm charts that only support certain cloud providers, operators and annotations that end up being platform specific, etc.

              >now give you default storage classes and ingresses

              Ingress is being deprecated, it's Gateway now! Welcome to hell, er, Kubernetes.

              • cortesoft 5 hours ago

                I use Kubernetes every day, and have worked with dozens of helm charts, and have yet to encounter cloud specific helm charts. Are these internal helm charts for your company?

                Obviously you can lock yourself in if you choose, but I have yet to see third party tools that assume a specific provider (unless you are using tools created BY that provider).

                At my previous spot, we were running dozens of clusters, with some on prem and some in the cloud. It was easy to move workloads between the two, the only issue was ACLs, but that was our own choice.

                I know they are pushing the new gateway api, but ingresses still work just fine.

              • vbezhenar 4 hours ago

                > Ingress is being deprecated

                Do you have any links about Ingress being deprecated?

                Official docs here: https://kubernetes.io/docs/reference/kubernetes-api/service-...

                There is no mention of this API being deprecated.

              • chuckadams 3 hours ago

                Ingress is frozen, not deprecated. Gateway does more, but Ingress isn’t going anywhere. It’s a stable API, which is the opposite of churn.

        • subhobroto 8 hours ago

          > Suddenly you have to become Sysadmin/SRE

          I don't think you made that argument, but could a valid conclusion of your comment be that, because Kubernetes is so ubiquitous, using it frees you from being a Sysadmin/SRE?

          • 0x457 4 hours ago

            Frees you from being a sysadmin, but burdens you with being a k8s operator, which is still an SRE.

    • Kinrany 7 hours ago

      If you can solve the same problem in a simpler way without using k8s, that means k8s is not a zero cost abstraction.

      It's not obligated to be, but it's also obvious why people would want it to be.

      • cortesoft 6 hours ago

        > If you can solve the same problem in a simpler way without using k8s

        I think I disagree with this, or at least the implication. While it is true you can solve EACH OF THOSE PROBLEMS INDIVIDUALLY in a simpler way than Kubernetes, the fact that you are going to have to solve at least 5-10 of those problems individually makes the sum total more complicated than Kubernetes, not to mention bespoke. The Kubernetes solutions are all designed to work together, and when they fail to work together, you are more likely to find answers when you search, because everyone is using the same thing.

        I think it is fair to say k8s is not a zero cost abstraction, but nothing you use instead is going to be, either, and when you do run into a situation where that abstraction breaks, it will be easier to find a solution for kubernetes than it will for the random 5 solutions you pieced together yourself.

    • vbezhenar 4 hours ago

      Yeah, I spent quite a bit of time learning Kubernetes, but now I'd use it to host a static webpage on a single server, over alternatives. It's so awesome.

      • zmmmmm 3 hours ago

        The question is, how do we outsiders differentiate Stockholm syndrome from something truly being awesome?

      • actionfromafar 4 hours ago

        This is truly interesting to me. Why?

        • cortesoft 3 hours ago

          I am not the person you asked, but I would probably do the same, so I will answer:

          Once you get used to it, it just makes managing things simple if you always use it for everything. I have a personal Harbor service running on my local cluster that has all my helm charts and images, and I can run a single script that sets up my one-node cluster, then run a helm install that installs cert-manager and external-dns, and now I can deploy my app with whatever subdomain I want and immediately get DNS set up and certs automatically provisioned and rotated. It will just work.
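
          As a sketch, that bootstrap can be as short as the following (needs a real cluster and helm, so this isn't runnable here; the external-dns chart location and values file are illustrative):

          ```shell
          # Sketch of a one-node homelab bootstrap; assumes a running cluster.
          helm repo add jetstack https://charts.jetstack.io
          helm install cert-manager jetstack/cert-manager \
            --namespace cert-manager --create-namespace --set crds.enabled=true
          helm install external-dns oci://registry-1.docker.io/bitnamicharts/external-dns \
            --namespace external-dns --create-namespace -f external-dns-values.yaml
          ```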

    • PunchyHamster 9 hours ago

      Honestly the main problem is people using k8s for something that's like... a database, and an app, and maybe a second app, that all could be containers or just a systemd service.

      And then they hit all the things that make sense in a big company with like 40 services, but very little in their context, and complain that a complex thing designed for complex interactions isn't simple.

      • nazcan 8 hours ago

        But if you want some redundancy, k8s lets you just say "run 4 of this, 6 of that" on these 3 machines. At least I find it quite straightforward.

        The database is more complex since there is storage affinity (I use CockroachDB with local persistent volumes for it), but stateful is always complicated.

        • tarkin2 8 hours ago

          Most of the time you don't need redundancy; you need regular backups for exceptional circumstances. K8s gives you more complexity, and more problems through more moving parts, in exchange for the possibility of using a feature you'll never need; and if you do start to use it, it'll probably be instead of fixing performance problems downstream.

          • cortesoft 7 hours ago

            Are we talking for personal projects where there are no expectations, or small startups where you don’t have much scale but you still care about down time and data loss?

            Personal projects are one thing, but even the smallest startup wants to be able to avoid data loss and downtime. If you are running everything on one server, how do you do kernel patches? You need to be able to move your workload to another server to reboot for that, even if you don’t want redundancy. Kubernetes does this for you. Bring in another node, drain one (which will start up new instances on the new node and shift traffic before bringing down the other instance, all automatically for you out of the box), and then reboot the old one.
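
            The drain workflow described above is a couple of kubectl commands (node name illustrative; this needs a real cluster, so it's only a sketch):

            ```shell
            kubectl drain old-node --ignore-daemonsets --delete-emptydir-data  # evict pods; they get rescheduled on the new node
            # ...patch the kernel and reboot old-node...
            kubectl uncordon old-node  # allow it to receive workloads again
            ```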

            Again, you could do all of this with other tech, but it is just standard with Kubernetes.

      • jmalicki 9 hours ago

        Luckily since I met this guy named Claude most of that complexity has gone away.

        • andai 6 hours ago

          A while back, when the agents got hyped, I was looking into the whole "give it a VM / docker container" thing and realized the safest and simplest option was just to give it its own machine.

          Then I realized giving it root on a $3 VPS is functionally equivalent. If it blows it up, you just reset the VM.

          It sounds bad but I can't see an actual difference.

    • subhobroto 8 hours ago

      > This is VERY powerful

      No argument there. The Toyota 5S-FE non-interference engine is a near-indestructible 4-cylinder engine that's well documented and popular, and you can purchase parts for pennies. It has powered 10 models of Camrys and Lexuses and is battle-proven. You can expect any mechanic who has been working professionally for the last 3 years to know exactly what to do when it starts acting up. 1 out of 4 cars on the road has this engine or a close clone of it.

      It's not what any reasonable person would use for a weedwhacker, lawnmower, pool pump or an air compressor.

      • cortesoft 7 hours ago

        Sure, but to extend your metaphor, Kubernetes HAS smaller engine models that you can use in those situations, and still gain all the benefits of being in the same ecosystem. You can use K3s, for example, and get all the benefits without having a giant engine in your weedwhacker.

  • drdaeman 4 hours ago

    They have built an orchestrator, not Kubernetes. There is one key difference: they know this thing, end-to-end, down to every single bolt and piece of duct tape (with the possible exception of Docker internals).

    And that's a very important distinction when it comes to maintaining complex systems. This could've changed with LLMs (I'm still adjusting to what the new capabilities mean for various decision-making logic), but before machine intelligence, debugging an issue with Kubernetes could've been a whole world of pain.

    • chuckadams 3 hours ago

      And chances are only they know it. If my role has enough cluster access, I can muddle through pretty much any helm chart (with lots of cursing, yes), but it might take me days to set up whatever elaborate bespoke environment and script invocations are needed to replicate the current production setup.

  • zdw 9 hours ago

    IMO, Kubernetes isn't inevitable, and this seems to paint it as such.

    K8s is well suited to dynamically scaling a SaaS product delivered over the web. When you get outside this scenario - for example, on-prem or single node "clusters" that are running K8s just for API compatibility, it seems like either overkill or a bad choice. Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.

    There are also folks who understand the innards of K8s very well that have legitimate criticisms of it - for example, this one from the MetalLB developer: https://blog.dave.tf/post/new-kubernetes/

    Before you deploy something, actually understand what the pros/cons are, and what problem it was made to solve, and if your problem isn't at least mostly a match, keep looking.

    • zbentley 9 hours ago

      > K8s is well suited to dynamically scaling a SaaS product delivered over the web

      It’s well suited to other things as well, people are just in denial about some of them.

      “I need to run more than two containers and have a googleable way to manage their behavior” is a very common need.

    • antonvs 8 hours ago

      Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.

      What's the problem with a single-node cluster? We use that for e.g. dev environments, as well as some small onprem deployments.

      > Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.

      Which batteries are not included? The "wrapper around the underlying cloud provider services and APIs" is enormously important. Why would you prefer to use a less well-designed, more vendor-specific set of APIs?

      I seriously don't get these criticisms of k8s. K8s abstracts away, and standardizes, an enormous amount of system complexity. The people who object to it just don't have the requirements where it starts making sense, that's all.

      • subhobroto 8 hours ago

        > Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.

        What surprises and gotchas did you have to deal with using k3s as a Kubernetes implementation?

        Did you use an LB? Which one? I'm assuming all your onprem nodes were just linux servers with very basic equipment (the fanciest networking equipment you used were 10GbE PCIe cards, nothing more special than that?)

        • antonvs 8 hours ago

          We sell to enterprise customers. All of them deploy our solution on internal cloud-style VM clusters. We use the Traefik ingress controller by default.

          There really weren't any particular surprises or gotchas at that level.

          In this context, I've never had to deal with anything at the level of the type of Ethernet card. That's kind of the point: platforms like k8s abstract away from that.

  • et1337 9 hours ago

    The saddest part about Kubernetes is… after you set it all up, you still need a hacky deploy.sh to sed in the image tag to deploy! And pretty soon you’re back to “my dear friend you have built a Helm”. And so the configuration clock continues ticking…
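
    For the record, that hacky script is usually just a sed templating step. A minimal sketch, with an illustrative `__TAG__` placeholder, printing the result instead of applying it:

    ```shell
    # Classic "sed in the image tag" deploy script, sketched.
    # __TAG__ and the manifest contents are illustrative.
    TAG="v1.2.3"
    printf 'image: myapp:__TAG__\n' > deploy.yaml.tpl
    sed "s|__TAG__|${TAG}|g" deploy.yaml.tpl > deploy.yaml
    cat deploy.yaml  # in a real script: kubectl apply -f deploy.yaml
    ```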

    • throwaway041207 9 hours ago

      Claude Code has essentially fixed this perpetual annoyance for me. Doesn't matter if it's a hacked up deploy.sh that mixes sed, envsubst and god knows what or a non-idiomatic Helm chart that was perpetually on my backlog to fix... today I just say "make this do this thing and also fix any bash bugs along the way" and it just does it. Its effectiveness for these thousand-little-cuts type DevOps tasks is underrated IMO.

      Now the actual CI/CD/thing-doers tools that all suck... I'm still stuck with those.

      • mettamage 9 hours ago

        I agree, I'm not great at devops, but my setup.sh and deploy.py have been game changers. Just vibe coding those was good enough.

        Same with build.sh, and doing it in such a way that I can use all the build.sh scripts in my ci.yml for GitHub Actions.

    • cassianoleal 9 hours ago

      I have been using Kubernetes for 7 or 8 years now, and have nearly 100% stayed away from Helm.

      Some Kustomize, a little bit of envsubst and we're good to go thank you very much.

      • supriyo-biswas 9 hours ago

        How do you handle cleanups and hooks? The best way to do helm, at least for me, seems to be about limiting its use to simple templating use cases; if you end up needing an if, you've probably done something terribly wrong.

        • cassianoleal 5 hours ago

          That's my main gripe with Helm.

          For the simple use case you're describing, Helm is not required. Plenty of other solutions around.

          For use cases where it starts getting useful, we both agree that something has gone terribly wrong.

          I still don't know why Helm exists. It's a solution that created lots of problems that didn't exist before.

        • arkeros 8 hours ago

          You can rely purely on kubectl with something like:

              cat manifests.yaml | kubectl apply -f - --server-side --field-manager "$FIELD_MANAGER" --prune --applyset "$APPLYSET" --namespace "$NAMESPACE"

        • srcreigh 9 hours ago

          Seems to be a case of the XY problem. What do you need cleanups and hooks for?

          • 0xbadcafebee 2 hours ago

            There are a multitude of cases of operations which need to be performed before and after specific actions in K8s. It depends on the resource, operator, operational changes, state, bugs, order of operations, and more.

          • supriyo-biswas 9 hours ago

            Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.

            Hooks: I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster (at places I've worked, the CI cluster and K8s cluster were completely separate).

            • vbezhenar 4 hours ago

              Regarding cleanups: I'm using Flux CD with kustomize. It tracks the resources it created. If I delete a manifest from my repository, Flux will delete the resources that were created from it. For me that's pretty much the ideal workflow.

              Regarding hooks: I don't know. All applications that I've used implemented migrations internally (it's usually Java with Flyway), so I don't need to think about it. One possible approach could be to use Flux CD with a Job definition. I think that Flux will re-create a Job when it changes, so if you change the image tag, it'll re-create the Job and trigger Pod execution. But I didn't try this approach, so I'm not sure if that would work for you.

            • cassianoleal 6 hours ago

              > Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.

                  kubectl delete -f <manifests.yaml>
              
                  kubectl delete -k <kustomization_directory>
              
              > I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster

              A Job feels like a good fit for this. CI deploys the Job without connecting to the DB; the Job runs migrations using the same connectivity as the application.
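
              A hedged sketch of such a migration Job (image, command, and secret name are all illustrative):

              ```yaml
              apiVersion: batch/v1
              kind: Job
              metadata:
                name: db-migrate
              spec:
                backoffLimit: 2
                template:
                  spec:
                    restartPolicy: Never
                    containers:
                      - name: migrate
                        image: myapp:v1.2.3  # illustrative: same image as the app
                        command: ["./manage.py", "migrate"]  # illustrative migration command
                        envFrom:
                          - secretRef:
                              name: db-credentials  # illustrative secret name
              ```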

      • bavell 9 hours ago

        Going on 10 years now for me, tried Helm a bit and yep - all I've really needed was a package.json deploy script with sed to bump the image version.

    • vbezhenar 4 hours ago

      I don't understand you.

      For very simple deployments, you don't need anything at all. Just write manifests and use `kubectl apply`. You can write a `deploy.sh`, but it'll be trivial.

      If you want templating, there are many options. You can use `sed` for the most simple templating needs. You can use `cpp`, `m4`, `helm` or `kustomize`. I, personally, like `kustomize`, but `helm` is probably not the worst template engine out there.

      Kustomize is even somewhat included in the basic kubernetes tooling, so if you want something "opinionated", it is there for you. It works.
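
      For the image-tag case specifically, kustomize has a built-in transformer, so no sed is needed. A sketch with illustrative names:

      ```yaml
      # kustomization.yaml
      resources:
        - deployment.yaml
      images:
        - name: myapp     # image name as it appears in deployment.yaml
          newTag: v1.2.3  # bumped per release, e.g. by CI
      ```

      Applied with `kubectl apply -k .`.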

    • btown 9 hours ago

      And if you want your Helm to run on certain deploys, and maintain a declarative set of the variables given to charts over time, thinking you can use Helmfile and some custom GitHub Actions… “my dear friend you have built a GitOps.”

      (I tend to think this one is acceptable in the beginning, but certainly doesn’t scale.)

    • jeffrallen 8 hours ago

      Or, if your colleagues are "smarter" than you, they make it in Clojure instead, with an EDN-but-with-subroutines config language, so that not only are yaml-aware editors useless, but EDN-aware editors cannot make heads or tails of the macros.

      Fun times.

    • PunchyHamster 9 hours ago

      If few lines of scripting is your problem you shouldn't be programming

    • esafak 9 hours ago

      Use a CD solution like Spinnaker, BunnyShell, or Kargo.

  • stego-tech 8 hours ago

    As someone rolling their self-hosted stuff via Compose and shell scripts instead of K8s specifically for the simplicity of the experience, this is 100% why you need to understand what Kubernetes solves before writing it off entirely.

    I'm not doing overlay networks, I'm using a single bare-metal host, and I value the hands-on Linux administration experience versus the K8s cluster admin experience. All of these are reasons I specifically chose not to use Kubernetes.

    The second I want HA, or want to shift from local VLANs to multi-cloud overlays, or I don't need the local Linux sysadmin experience anymore? Yeah, it's K8s at the top of the list. Until then, my solution works for exactly what I need.

  • tptacek 2 hours ago

    All it would take to make this post actually good would be to replace "Kubernetes" with "orchestrator"; that would also keep the symmetry with the post it's riffing on, about building compilers (it's not "Dear friend you have built a GHC").

  • Dedime 9 hours ago

    PREACH!

    I run K8s at home. I used to do docker-compose - and I'd still recommend that to most people - but even for my 1 little NUC with 4vcpu / 16Gi Homelab, I still love deploying with K8s. It's genuinely simpler for me.

    If anyone's looking for inspiration, my setup:

    * ArgoCD pointed to my GitLab repos

    * GitLab repos contain Helm charts

    * Most of the Helm charts contain open-source charts as subcharts, with versions set like (e.g.) `version: ~0`, meaning I automatically receive all updates until `1`

    * Updating my apps usually consists of logging into the UI, reviewing the infrastructure and image tag updates, and manually clicking sync. I do this once every few months
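
    The `version: ~0` pinning above lives in the umbrella chart's Chart.yaml, roughly like this (chart and repository names illustrative):

    ```yaml
    apiVersion: v2
    name: homelab-app
    version: 0.1.0
    dependencies:
      - name: some-app  # illustrative upstream chart
        repository: https://example.github.io/charts  # illustrative repo
        version: "~0"   # any 0.x.x release, i.e. updates until 1.0
    ```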

    My next little side project: Autoscaling into the cloud (via a secure WireGuard tunnel) when I want to expand past my current hardware limitations

    • wernerb 9 hours ago

      A reason not to run k8s is if you want your server to reach C10 idle states. The k8s control plane, with its polling and checking, is quite heavy on a mostly idle server. I have reverted to just using NixOS and OCI podman containers. Everything is declarative and reproducible.

      • r_lee 4 hours ago

        another one is swap. UnlimitedSwap was deprecated and you can now only use LimitedSwap, which restricts how much swap you can use, so you can't take full advantage of zram, which sucks for those looking to run lean

  • kube-system 8 hours ago

    Kubernetes is a powerful tool for complicated problems. If it seems complicated, you probably don’t have a complicated deployment problem.

    But really this applies to any powerful tool. If all you need is to measure a voltage, a 4-channel oscilloscope probably seems too complicated too.

  • Havoc 2 hours ago

    Literally just finished building a personal orchestrator system I wanted, and had this very much in the back of my mind.

    Ended up doing a mix. Built on compose for now, but in a manner that’ll lift and shift to k8s easily enough. It’s containers talking over a network either way

  • dang 5 hours ago

    Discussed at the time:

    Dear friend, you have built a Kubernetes - https://news.ycombinator.com/item?id=42226005 - Nov 2024 (277 comments)

  • oddurmagnusson 9 hours ago

    Found this the same day I published this: https://github.com/oddur/yoink

    Kubernetes was overkill (I do that all day, 5 days a week) and Kamal was too restrictive, so I found myself rolling out Yoink. Just what I need from k8s, but simple enough that I can point it at a bare-metal machine on Hetzner that can easily run all my workloads.

    • subhobroto 8 hours ago

      > found myself rolling out Yoink

      - using Tailscale SSH is brilliant

      - using caddy-docker-proxy for ingress is brilliant

      What do you use for:

      - service discovery

      - secret store (EDIT: Crap you use Infisical. No shade, I just have this horrible foreboding it will end up like Hashicorp. I use Conjur Secretless Broker but am tracking: https://news.ycombinator.com/item?id=47903690)

      - backing up and restoring state like in a DB

      PS: Have you been having issues with Hetzner the last few weeks?

      • oddurmagnusson 8 hours ago

        Service discovery is basically just Docker's internal DNS. Caddy-docker-proxy can use it to find healthy upstreams.

        For secrets, I self-host Infisical on the box -- it's easy to plug in whatever secret manager, which should make it pair nicely with https://github.com/tellerops/teller or something similar

        Had no problems with Hetzner so far, just enjoying the raw CPU power of bare metal. The plan is to roll out more boxes across different providers, using Tailscale for the backplane network and Cloudflare to load-balance between them. All in due time. What issues have you been having?
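
        The Docker-DNS-plus-caddy-docker-proxy pattern described here is usually wired up with labels; a hedged compose sketch (image name and hostname are placeholders):

        ```yaml
        services:
          app:
            image: ghcr.io/example/app:latest    # placeholder image
            networks: [caddy]
            labels:
              caddy: app.example.com
              # caddy-docker-proxy resolves healthy upstreams via Docker's internal DNS
              caddy.reverse_proxy: "{{upstreams 8080}}"

        networks:
          caddy:
            external: true    # shared network the caddy container also joins
        ```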

        • subhobroto 5 hours ago

          I have a suspicion you're using Headscale? If so, I urge you to consider Ionscale. I use it with Authentik as the IdP.

          I'm personally committing to Tailscale as a core foundation of my infrastructure, and Ionscale is my hedge against getting Hashicorped.

          > Service discovery is basically just Docker's internal DNS. Caddy-docker-proxy can use it to find healthy upstreams

          Do you have a writeup of this somewhere? I'm unaware of any way to manage Docker's internal DNS over some kind of API (I'd appreciate it if you know one). The only way I know is to manipulate network aliases via the Docker Engine API. As a result I use Hickory DNS with RFC 2136 updates. That, coupled with caddy-docker-proxy, gets me extremely close.

  • shrubble 8 hours ago

    I can tell you how vendors deliver a software solution that runs on Kubernetes: very poorly.

    The needed tweaks, the ability to customize things, basically go to zero, because the support staff knows the software, but NOT Kubernetes.

    I am not joking: a recent deployment required 3x VMs for Kubernetes, each VM having 256 gigabytes of RAM, then a separate 3x VMs for a different piece. 1.5TB of RAM to manage fewer than 1200 network devices (routers etc. that run BGP).

    No one knew, for instance, how to lower MongoDB's resource usage (because of course you need it!), despite the fact that the clustered VMware install uses a very fast SSD storage solution, so MongoDB's cache is unlikely to accelerate anything; over 128GB of RAM is being burned caching results coming back from SSDs that run at many GB/s of throughput.

    • throwaway041207 4 hours ago

      Whether this is deployed via Helm charts or a native controller, there's almost certainly some overlay where you can override resource values, unless this is just a very crappy vendor.
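
      If it is a Helm chart, the override is typically a small values file layered over the vendor defaults. A sketch in which every key path is a guess that would need checking against `helm show values <chart>`:

      ```yaml
      # values-override.yaml (key names are assumptions about the vendor chart)
      mongodb:
        resources:
          requests:
            memory: 8Gi
          limits:
            memory: 16Gi
        # Shrink WiredTiger's cache when fast SSDs make a huge RAM cache pointless;
        # many charts expose extra server flags under a key like this
        extraFlags:
          - "--wiredTigerCacheSizeGB=4"
      ```

      Applied with `helm upgrade <release> <chart> -f values-override.yaml`.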

      • shrubble 2 hours ago

        The setting is there, somewhere, I agree; but no one can tell you how to change it or what the implications might be.

        So unless I want to dig into their YAML files that are not documented…

  • hasyimibhar 8 hours ago

    I've experienced something like this at work, but with a data warehouse instead, and it happened multiple times (to be fair, data engineering is still fairly new where I'm from).

    One example: an engineer wanted to build an API that accepts large CSVs (GBs of credit reports), extracts some data, and performs some aggregations. He was in the process of discussing with SREs the best way to process the huge CSV file without using a k8s StatefulSet, and the solution he was about to build was basically writing to S3 and having a worker asynchronously load and process the CSV in chunks, then finally writing the aggregation to the db.

    I stepped in and told him he was about to build a data warehouse. :P

    • jeffrallen 8 hours ago

      If it was less than 100 GB, he probably should have just loaded the whole thing into RAM on a single machine and processed it in a single shot. No S3, no network round trips, no chunking, no data warehouse.

  • isodev 9 hours ago

    Unless you’re in the Erlang world (Elixir, Gleam..), where all of that is already baked into OTP and the BEAM. You can go on holiday knowing it will be a while longer before you need to break out the pods (and at that scale, you will be able to afford a colleague or two to help you).

    • kirici 8 hours ago

      How does Elixir renew my certificates, mount networked storage, and create a VIP and internal DNS entries for my Valkey instance?

      Scaling is a sidenote; that it becomes easy is a result of hoisting everything else onto one control plane and a set of coherent APIs.
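
      Those chores each map to a standard object on that control plane. A hedged sketch with placeholder names (using cert-manager for renewal is an assumption, not something the comment confirms):

      ```yaml
      # Service: a stable VIP (ClusterIP) plus an internal DNS name,
      # valkey.default.svc.cluster.local
      apiVersion: v1
      kind: Service
      metadata:
        name: valkey
      spec:
        selector:
          app: valkey
        ports:
          - port: 6379
      ---
      # Certificate: cert-manager keeps the TLS secret renewed automatically
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: valkey-tls
      spec:
        secretName: valkey-tls
        issuerRef:
          name: letsencrypt        # assumed ClusterIssuer
          kind: ClusterIssuer
        dnsNames:
          - valkey.example.com
      ```

      Networked storage is the same shape: a PersistentVolumeClaim that gets mounted on whichever node the pod lands on.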

  • supriyo-biswas 9 hours ago

    Criticisms of Kubernetes generally come from a few places:

    - People who would prefer their own way of doing things, whether that's deployments on VMs or some sort of simpler cloud provider.

    I had the same opinion a few years ago, but have kind of come to like it, because I can cleanly deploy multiple applications on a cluster in a declarative fashion. I still don't buy "everything on K8s"; my personal setup is a set of VMs bought from an infrastructure provider: I set up a primary/replica database on two of them and use the rest as Kubernetes nodes.

    - People who run Kubernetes at larger scales and have had issues with them.

    This usually needs some custom scaling work; the best way to work around it if you're managing your own infra[1] is to split the cluster into many small independent clusters, akin to "cellular deployments"[2]/the "bulkhead pattern"[3]. Alternatively, if you are at the point where you have a 500+ node cluster, it may not be a bad idea to start using a hyperscaler's service, as they have typically done some of the scaling work for you, typically by replacing etcd and the RPC layer with something more stable.

    - People who need a deep level of orchestration

    Examples of such use cases may be a CI system or a container service like fly.io; for those, I agree that K8s is often overkill, as you need to keep the two datastores in sync and generate huge loads on the kube-apiserver and the cluster datastore in the process, and it might often be better to just bring up Firecracker MicroVMs or similar yourself.

    Although, I should say that teams writing their first orchestration process almost always run to Kubernetes without realizing this pitfall. I have learned to keep my mouth shut, though, as I recently started a small religious war at my current workplace by raising this exact point.

    [1] Notice how I don't say "on-prem", because the hyperscaler marketing teams would rather have you believe in two extremes of either using their service or running around in a datacenter with racks, whereas you can often get bog-standard VMs from Hetzner or Vultr or DigitalOcean and build around that.

    [2] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...

    [3] https://learn.microsoft.com/en-us/azure/architecture/pattern...

    • rfoo 9 hours ago

      Another case: People who want to run workloads that are inherently incompatible with Kubernetes networking model.

      For example:

      * For some cursed reason, you want to make sure every single instance of a large batch job sees just one NIC in its container, they all have the same IP, and you NAT to the outside world. Ingress? What ingress? This is a batch job!

      * Like the previous point, except that your "batch job" somehow has multiple containers per instance now, and they should be able to reach each other by domain name.

      • zbentley 9 hours ago

        That is indeed a weirdly cursed requirement. Why? A black box of legacy stuff? A system that was never designed to run as multiple instances, which only works if all the nodes think they’re the same machine? Defeating a license restriction?

      • PunchyHamster 9 hours ago

        You can configure k8s so that pod-to-pod networking works just fine, so I'm not even sure what the complaint here is.

  • blindlobstar 8 hours ago

    Why do both posts mention Docker Compose without mentioning Docker Swarm? I've been using it for my projects for a long time, and it's so nice: similar syntax, easy networking, rollout strategies, and it's easy to add nodes to the cluster.

    You can have one template docker-compose.yaml file and separate deployment files for different envs, like docker-compose.dev.yaml and docker-compose.prod.yaml.

    I think swarm is really underrated
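
    The template-plus-override pattern looks roughly like this (service and image names are made up); Swarm merges the files in the order they are passed:

    ```yaml
    # docker-compose.yaml -- shared template
    services:
      web:
        image: example/web:latest       # placeholder image
        deploy:
          update_config:
            order: start-first          # rolling update: start new task before stopping old

    # docker-compose.prod.yaml -- per-env override, merged on top
    services:
      web:
        deploy:
          replicas: 3
    ```

    Deployed with `docker stack deploy -c docker-compose.yaml -c docker-compose.prod.yaml myapp`.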

    • Havoc 2 hours ago

      It’s in a death spiral: it's not widely used, so people aren’t incentivised to put time into it, so it stays not widely used.

    • vbezhenar 4 hours ago

      How do you solve persistence with Swarm? Can I deploy Postgres with network storage that mounts automatically on whichever node the container is launched on?

    • PufPufPuf 8 hours ago

      I've been there. We still ended up with messy deploy scripts written in Ruby and the only debugging solution was "just comment out everything then run line by line".

      • blindlobstar 6 hours ago

        'docker stack deploy' covers most of the cases. But yeah, there are still some problems, like updating a config or a secret, which requires manually invoking additional commands (or scripts).
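
        The config/secret pain comes from Swarm configs being immutable: "updating" one means shipping it under a new name and redeploying. A sketch (names are illustrative):

        ```yaml
        configs:
          app_conf_v2:                 # bump the suffix on every content change
            file: ./app.conf

        services:
          web:
            image: example/web:latest  # placeholder image
            configs:
              - source: app_conf_v2
                target: /etc/app.conf  # path inside the container
        ```

        A fresh `docker stack deploy` then rotates tasks onto the new config, and the old `app_conf_v1` can be removed afterwards.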

  • whynotmaybe 9 hours ago

    I need clarifications.

    I see Docker as a way to avoid having a standard dev platform for everyone in the company, so that the infra team doesn't have to worry about patch xyz for library abc; they only have to run Docker.

    But with all the effort put into coordinating Docker, k8s, and the whole shebang, isn't it ultimately easier to force a single platform and let it slowly evolve over time?

    Is Docker another technical tool that tries to solve a non-technical problem?

    • 0xbadcafebee 2 hours ago

      Docker is a solution to one specific problem: the need for "a user" to run 10 different potentially conflicting apps, all at the same time, on one machine, and abstract away anything which might make those apps conflict if they ran on a single OS. It provides a dozen different solutions in one package.

      K8s is a way to take that and make it scale up for a large number of applications on a large number of hosts in a production business in a way that's automated and resilient to failure.

    • whycombinetor 9 hours ago

      Why not both? Force a platform. Deploy non-containerized. Build a Dockerized version of the forced platform for cross-platform local dev.

    • esafak 9 hours ago

      I do not follow you. Every app has different needs. Containers encode them in a shareable way. You can evolve the image over time. So what more do you want?

  • perfunctory 9 hours ago

    I somehow feel that it's actually the opposite. It should be "Dear kubernetes user, you have just built a shell".

  • whycombinetor 8 hours ago

    After reading this and remembering an old hobby project, I decided to switch its deploy from a systemd service to PM2, which apparently does rolling deployments without needing the Docker engine (for those of us minmaxing instance RAM).

  • jFriedensreich 8 hours ago

    I'm just about to give OP's premise another go. Compose just feels so much better as an abstraction, especially for small and medium setups: close to the optimum of expressiveness, without boilerplate, for describing what is needed. The missing pieces seem to also be in the Compose-compatible "docker stack", a.k.a. the new Docker Swarm, which I ignored for probably too long because I assumed it was the discontinued old Swarm. Even if the new Swarm mode sucks, how hard can it be to make something Compose-shaped versus running k8s?

  • heyitsdaad 9 hours ago

    Kubernetes networking is an inefficient mess.

    • wmf 9 hours ago

      Some CNIs are definitely better than others. Unfortunately it seems 99% of people want to work against the k8s networking model.

      • zbentley 8 hours ago

        Shit just gets really weird when your network isn’t split for k8s the way GCP/AWS expect. Like, if you have other services running on the nodes that you want things inside k8s to talk to, or if the nodes sit in a flat subnet with other stuff, things get annoying. Those are worst practices for a reason, but pretty common in environments with home-rolled k8s clusters.

  • dodu_ 3 hours ago

    NOOO you have to use my shitpile of nested yaml with the same dependency sprawl cancer as modern javascript. You can't just upload a binary to your own servers and host it there you need to overthink everything and make an extremely simple process overcomplicated just install one more side car and fifty more dependencies on your helm chart bro and then we can move on to figuring out CSI it should only take like a month to get it working properly I promise!!!!!

  • waterTanuki 3 hours ago

    I'll just say the quiet part out loud: A pile of shell scripts that no one else understands is job security.

    If your work is easily googleable/parseable by an AI, why would anyone pay you?

  • jeffrallen 8 hours ago

    Some days, it would be better to build a working not-Kubernetes than to debug the not-working Kubernetes.

  • jstanley 9 hours ago

    > I know you wanted to "choose boring tech" to just run some containers.

    The people advocating for boring tech generally aren't interested in containers.

    You can just run programs.

    • PunchyHamster 9 hours ago

      Legacy apps are far nicer when they are containerized.

      If your app is just a blob that can be run, it's fine, but many languages make it more complicated.

      I wonder if just putting the app into an .appimage and using systemd for some of the separation would be a sweet spot?

    • stavros 9 hours ago

      Containers are just statically-linked programs for the rest of us.

      • subhobroto 8 hours ago

          I am a big fan, which is why I am saying this: you're dismissing the kernel and ABI surface, and that's a huge assumption that must hold true for your comment to hold, stavros.

        If you had said "unikernels" I would have had no arguments to make.

        • stavros 8 hours ago

          What do you mean? Statically linked programs depend on the kernel too.

          • subhobroto 5 hours ago

            Right -- that's precisely what I meant. I read your comment "Containers are just statically-linked programs for the rest of us." as "containers can be replaced by statically-linked programs".

            If you didn't imply that, I apologize.

            If you did mean that, I disagree, precisely because your point only works if you care solely about dependency management; it falls apart on system state. A static binary is a process on the host and shares the same process space, network stack, and filesystem.

            OTOH, a container is a jail (the primary use case): I can't cgroup a static binary's memory usage or give it a virtual network interface without reimplementing "container lite". Containers aren't just "statically linked programs"; they let me use the kernel as a hypervisor for isolated environments.

            What they are, though, is a messy but practical compromise compared to unikernels, which was my last point in the GP comment.

            • stavros 5 hours ago

              Oh, no, I meant that containers serve the same purpose as statically linked programs for languages that can't produce them. E.g., if you want to deploy a Python codebase, a container is a good way to include all the dependencies.

              I didn't mean "containers don't have any advantages compared to statically linked programs".

              • r_lee 4 hours ago

                I 100% thought you were yelling at the clouds lol. See it way too often on HN.

  • lowbloodsugar 4 hours ago

    See also Greenspun's Tenth Rule:

    > Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

    https://wiki.c2.com/?GreenspunsTenthRuleOfProgramming

  • justsomehnguy 8 hours ago

    > Ah, but wait! Inevitably, you find a reason to expand to a second server

    >> The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

    -- Donald Knuth, Computer Programming as an Art (1974)

    EDIT:

    > Except if you quit or go on vacation, who will maintain this custom pile of shell scripts?

    Honestly? I don't care. There is a reason why people quit, and 99% of the time it's the pay. And if the company doesn't pay me enough to bother, then why should I? Why should I care about the company's future in the first place?

  • winton 9 hours ago

    "Except if you quit or go on vacation, who will maintain this custom pile of shell scripts?" LLMs can reason about and fix them quite well.

    • zbentley 8 hours ago

      Not as well as they can reason about (or others can google) something as standardized as Kubernetes. There’s just less context (in both senses of the term) needed to understand something running on a common substrate versus something bespoke, even if the bespoke thing is itself comprised of standardized parts.

      • winton 8 hours ago

        For a project set up by a qualified engineer, there would be little practical difference to the end user. The LLM would work out a solution with a negligible difference in speed. Maybe debugging would even be faster for the LLM, given the lack of abstraction layers and the low-level access?