After some work with Kubernetes, I must really say: Helm is a complexity hell. I'm sure it has many features, but most of them aren't needed and only add complexity.
Also, please fix the "default" Helm chart template; it's a nightmare of options and values that no beginner understands. Make it basic and simple.
Nowadays I would much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!
Helm is my example of where DevOps lost its way. The insanity of multiple tiers of templating on top of a language scoped by invisible characters... it blows my mind that so many of us just put up with it.
Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.
shudders.. `| nindent 12`..
I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go, which is notoriously bad for embedding.
Holos[1] is an interesting project I’ve been looking at trying out.
1. https://holos.run/
CUE and Argo CD here. It is pretty neat.
The TF is still in HCL form for now.
Back when my job involved using Kubernetes and Helm, the solution I found was to use `| toJson` instead: it generates one line that happens to be valid YAML as well.
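For anyone who hasn't fought this particular battle, here is a rough sketch of the two styles in a hypothetical chart template (the value names are invented):

```yaml
# templates/deployment.yaml (fragment of a hypothetical chart)
      # The common pattern: re-render a nested value and re-indent it by hand.
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
      # The toJson trick: one line of JSON, which is also valid YAML, so indentation can't drift.
      nodeSelector: {{ .Values.nodeSelector | toJson }}
```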
> seems really short-sighted that it is implemented in Go
CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Also, so much of the k8s ecosystem is in Go that it was a natural choice.
> CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)
Ah, that makes sense, I guess. I also get the feeling that the language itself is still under very active development, so until 1.0 is released I don't think it matters too much what it's implemented in.
> Also, so much of the k8s ecosystem is in Go that it was a natural choice.
That might turn out to be a costly decision, imho. I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
I figured I'd try and hack something together, but it was a complete non-starter since I don't work within the Go ecosystem.
Projects like CUE live and die by an active community and related tooling, so the decision still really boggles my mind.
I'll stay optimistic and hope that once it reaches 1.0, someone will write an implementation that is easily embedded for my use-cases. I won't hold my breath though, since the scope is getting quite big.
what language would you have chosen?
> I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.
Have you tried a Makefile to run `cue`? There should be no need to write code to do this.
RIP Ksonnet, we hardly knew what we were missing
jsonnet is the main DX issue therein
I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.
We have several Helm charts we've written at my job and they are very pleasant to use. They are just normal k8s templates with a couple of values parameterized, and they work great. The ones people put out for public consumption are very complex, but it isn't like Helm charts have to be that complex.
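For a sense of scale, a chart in that spirit can be as small as this (all names and values here are invented for illustration):

```yaml
# values.yaml
image:
  repository: registry.example.com/myapp
  tag: "1.4.2"
replicas: 2
```

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-myapp
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080
```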
In my book the main problem with Helm charts is that every customization option needs to be implemented by the chart author up front. There is no way for a chart consumer to change anything the author did not expose. That leads to these overly complex and config-heavy charts people publish, just to make sure everything is customizable for consumers.
I'd love something that works more like Kustomize but with the other benefits of Helm charts (packaging, distribution via OCI, more straightforward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.
FluxCD ships a really nice helm-controller that lets you change manifests via a postRenderers stanza while still letting you use regular Helm tooling against the cluster.
https://fluxcd.io/flux/components/helm/helmreleases/#post-re...
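A rough sketch of what that looks like; the field names follow the Flux HelmRelease API (check the linked docs for the exact schema of your Flux version), and the chart and patch below are invented:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:
    replicaCount: 2
  # The escape hatch: patch the rendered manifests even where the chart exposes no value for it.
  postRenderers:
    - kustomize:
        patches:
          - target:
              kind: Deployment
              name: podinfo
            patch: |
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: podinfo
              spec:
                template:
                  spec:
                    nodeSelector:
                      disktype: ssd
```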
Yeah, but then it is yet another layer of configuration slapped on top of the previous layer of configuration. That can't be the best solution, can it? Same thing for piping helm template through Kustomize.
That's generally what I try to push for in my company.
A single-purpose chart for your project is generally a lot easier to grok and consume than the maximally general alternative.
I think something like Kustomize is probably a saner route to go down, but our entire infrastructure is already Helm, so it's hard to switch it all out.
I've personally boiled the Helm vs. Kustomize question down to the following:
Does your Kubernetes configuration need to be installed by a stranger? Use Helm.
Does your Kubernetes configuration need to be installed by you and your organization alone? Use Kustomize.
It makes sense for Grafana to provide a Helm chart for Grafana Alloy that the employees of Random Corp can install on their servers. It doesn't make sense for my employer to make a Helm chart out of our SaaS application just so that we can have different prod/staging settings.
I'm ashamed to say it but I cannot for the life of me understand how kustomize works. I could not ever figure out how to do things outside the "hello world" tutorials they walk you through. I'm not a stupid person (citation needed lol), but trying to understand the kustomize docs made me feel incredibly stupid. That's why we didn't go with that instead of Helm.
Helm requires you to write a template and you need to know (or guess) up front which values you want to be configurable. Then you set sane defaults for those values. If you find a user needs to change something else you have to edit the chart to add it.
With Kustomize, on the other hand, you just write the default as perfectly normal K8s manifests in YAML. You don't have to know or care what your users are going to do with it.
Then you write a `kustomization.yaml` that references those manifests somehow (could be in the same folder, or you can use a URL). Kustomize simply concatenates everything together as its default behaviour. Run `kubectl kustomize` in the directory with `kustomization.yaml` to see the output. You can run `kubectl apply -k` to apply it to your cluster (and `kubectl delete -k` to delete it all).
From there you just add what you need to `kustomization.yaml`. You can do a few basics easily like setting the namespace for it all, adding labels to everything and changing the image ref. Keep running `kubectl kustomize` to see how it's changing things. You can use configmap and secret generators to easily generate these with hashed names and it will make sure all references match the generated name. Then you have the all powerful YAML or JSON editing commands which allow you to selectively edit the manifests if you need to. Start small and add things when you need them. Keep running `kubectl kustomize` at every step until you get it.
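A minimal `kustomization.yaml` along those lines might look like this (paths, names and tags are illustrative):

```yaml
# kustomization.yaml
namespace: myapp-prod
commonLabels:
  app.kubernetes.io/part-of: myapp
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/myapp
    newTag: "1.4.2"
configMapGenerator:
  - name: myapp-config
    literals:
      - LOG_LEVEL=info
# Render with `kubectl kustomize .`; apply with `kubectl apply -k .`
```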
Yes, this is the key. Helm charts should basically be manifests with some light customization.
Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.
Pairing helm with Kustomize can help a lot as well. You do most of the templating in the helm chart but you have an escape hatch if you need more patches.
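One way to wire that pairing up is Kustomize's Helm chart generator, which inflates a chart and then lets you patch the output. This is a sketch (it assumes a Kustomize version with `--enable-helm`, and the chart, version and patch are illustrative):

```yaml
# kustomization.yaml
helmCharts:
  - name: ingress-nginx
    repo: https://kubernetes.github.io/ingress-nginx
    version: 4.11.0
    releaseName: ingress-nginx
    valuesInline:
      controller:
        replicaCount: 2
# The escape hatch: patch whatever the chart didn't expose as a value.
patches:
  - target:
      kind: Deployment
      name: ingress-nginx-controller
    patch: |
      - op: add
        path: /spec/template/spec/priorityClassName
        value: system-cluster-critical
```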
What did you move to?
Infrastructure as code should, from the beginning, have been done in a strictly typed language with a solid dependency and packaging contract.
I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.
I mean, Terraform provides this, but using it doesn't give a whole lot of value, at least IME. I enforce types, but often an upstream provider implementation will break that convention. It's rarely the fault of the IaC itself and usually the fault of the upstream service when things get annoying.
Kustomize with ArgoCD is my go to
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
Docker Compose still gets you 95% of what you need. I wish Docker Swarm survived.
> I wish Docker Swarm survived.
I heard good things about Nomad (albeit from before Hashicorp changed their licenses): https://developer.hashicorp.com/nomad
I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.
It's rare that I see it mentioned though, so I'm not sure how big the community is.
For better or for worse, it's an orchestrator (for containers/scripts/JARs/bare metal), full stop.
Everything else is composable from the rest of the HashiCorp stack: Consul (service mesh and discovery), Vault (secrets), letting you use as much or as little as you need, and it can truly scale to a large deployment as needed.
In the plus column, picking up its config/admin is intuitive in a way that Helm/k8s never really manages.
Philosophy-wise you can put it in the Unix way of doing things: it does one thing well and gets out of your way, and you add to it as you need/want. Whereas k8s/Helm etc. are one way or the highway, leaving you fighting the deployment half the time.
What happened to it?
I'm still using it with not a single issue (except when it messes up the iptables rules).
I still confidently upgrade Docker across all the nodes, workers and managers, and it just works. Not once has it caused an issue.
Docker the company bet big on Swarm being the de facto container orchestration platform for businesses. It just got completely overshadowed by k8s. Swarm continues to exist and be actively developed, but it’s doomed to fade into obscurity.
For some reason I assumed it was unsupported. That doesn't seem to be the case.
The original iteration of Docker Swarm, now known as Classic, is deprecated. Maybe you were thinking of that?
As I read more about it, yes, that is indeed the case.
If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.
K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.
Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
Could you explain this a bit? Is Helm an optional part of the k8s stack?
The way I understand it, Helm is the npm of k8s.
You can install, update, and remove an app in your k8s cluster using helm.
And you release a new version of your app to a helm repository.
The thing I would add to this is that in most cases, you need to manually provide config values to the install.
This sounds okay in principle, but I far too often end up needing to look through the template files (what helm deploys) to understand what a config option actually does since documentation is hit or miss.
Helm is not official or blessed or anything, just another third-party tool people install after installing k8s.
Yes, you really don't need to use helm if you have terraform. Just use https://registry.terraform.io/providers/hashicorp/kubernetes... .
If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
Helm is sort of like a Docker (or maybe Docker Compose) for k8s, in the sense that a Helm chart is a prepackaged k8s "application" that you can ship to your cluster. It got very popular very quickly because of the ease of use, and I think that was premature, which affects its day-to-day usability.
It's a client-side preprocessor essentially. The K8s cluster knows nothing about Helm as it just receives perfectly normal YAMLs generated by Helm on the client.
Do you have any resources on using TF to handle deployments?
I'd love to dig in a bit.
The kubernetes provider mostly just works exactly as you expect
Just use https://registry.terraform.io/providers/hashicorp/kubernetes... instead of helm...
…but how do you install helm charts via terraform?
Is there a helm provider?
If not, what would be the right way to install messy stuff like nginx ingress, cert-manager, etc.?
Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.
Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.
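One example of that shift (whether or not it is the exact change the parent has in mind): Ansible's old free-form `key=value` module arguments versus the structured form that plain YAML tooling can actually check:

```yaml
# Old style: module arguments packed into one string, opaque to yamllint and editors.
- name: Install nginx
  apt: name=nginx state=present update_cache=yes

# Current style: arguments as real YAML keys, so linting and completion work.
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: true
```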
Imho, anyone who thought putting 'templating language' and 'significant whitespace' together is a good idea deserves to be in the Hague
Seriously. I’ve lost at least 100 hours of my life debugging whitespace in templated yaml. I shudder to think about the total engineering time wasted since yaml’s invention.
Yaml wouldn't be so bad if they made the templates and editors indent-aware.
Which is a thing with some Python IDEs, but it's maddening to work on anything that can't do this.
We use CUE straight to k8s resources. It made life way better.
But we don't have tons of infra, so no idea how it would work for big, thousands-of-employees corps.
I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.
Amazing how people are complaining while proposing shit solutions. Seems like nobody is doing infra seriously there.
Can I hear from those of you who have had a good IAC experience? What tools worked well?
Like the others, I'm using a programming language, except it's JavaScript because we're a Node.js company. It actually works well enough.
Probably an unpopular opinion, but for a couple of jobs now I've written "just Python" to generate k8s manifests, and it works really, really well.
There are packages. You can write functions. You can write tests trivially (the output is basically a giant map that you just write out as YAML)...
I’m applying this to other areas too with great success, for example our snowflake IaC is “just python” that generates SQL. It’s great.
I wrote Go and Python programs that constructed the manifests using the native Kubernetes types and piped them into kubectl apply. Had to write my own libraries for doing migrations too. But after that bootstrapping it worked great.
Reminds me of cdk8s, if one is looking for a framework (if it can be called that):
cdk8s.io
Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.
A Helm chart is often a poorly documented abstraction layer which often makes it impossible to relate back the managed application's original documentation to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting ...
Helm sucks.
Helm, and a lot of devops tooling, is fundamentally broken.
The core problem is that it is a templating language and not a full programming language, or at least a proper DSL.
This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.
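For a concrete flavour of that copy-paste, here is the labels helper from the default `helm create` scaffold (lightly simplified here), which shows up nearly verbatim in chart after chart:

```yaml
{{/* templates/_helpers.tpl */}}
{{- define "mychart.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```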
Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.
I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.
You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.
Maybe the answer is a CDK that outputs helm charts.
> CLI Flags renamed
> Some common CLI flags are renamed:
> --atomic → --rollback-on-failure
> --force → --force-replace
> Update any automation that uses these renamed CLI flags.
I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?
It doesn't sound like a big deal but in practice it's often a massive pain in the ass.
Nightmares (if anything went wrong I had to blow the Helm stuff away and start over) on top of nightmares (Kubernetes, when I was trying it, was tons of namespaces called beta, and you never knew what to update to, when you had to update, or what was incompatible), on top of the realization that no one should be using Kubernetes unless you have over 50 servers running many hundreds of services. Otherwise it's just a million times simpler to use Docker Compose.
Can you recommend any articles about minimum scale necessary to make Kubernetes worth it?
Now that y'all are here: has anyone tried Timoni as an alternative to Helm? I have it on my to-try list.
https://github.com/stefanprodan/timoni
No commits in 3 months.
Imagine 1,000s of helm charts. Your only abstraction tools are an umbrella chart or a library chart. There isn't much more in helm.
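For anyone who hasn't had the pleasure: an umbrella chart is essentially a `Chart.yaml` whose dependencies pull in the other charts, roughly like this (names and versions invented):

```yaml
# Chart.yaml of an umbrella chart
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: web
    version: 1.2.3
    repository: oci://registry.example.com/charts
  - name: worker
    version: 0.9.0
    repository: https://charts.example.com
    condition: worker.enabled   # consumers can toggle sub-charts via values
```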
I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kinda like jsonnet plus the Google CLI whose name I forget right now, and the abstraction the Grafana folks did too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There are a few similar ideas floating around now; Scala Yaga is one.
Ugh, can we all just agree to stop using helm
would be nice, but we would also have to reimplement all of the charts we use, big ask/lift
DevOps has more friction for tooling changes because of the large blast radius
What do you prefer?
Just straight raw manifest files.
How do you have anything dynamic? How do you handle any differences at all between your infrastructure and what the authors built it for?
Sorry, raw manifests and kustomize and a soupçon of regret.
What is Charts v3? Please tell me it is Lua support.
I think what Charts v3 will be is still an open question. According to the currently accepted HIPs[0], there is some groundwork to enable a new generation of the chart format in general via HIP-0020, and most HIPs after that contain parts that are planned to make it into Charts v3 (e.g. resource creation sequencing via HIP-0025).
[0]: https://github.com/helm/community/tree/main/hips