> This idea proves to be surprisingly powerful when it comes to expressing constraints on generic functions and types.
Disagree. IMHO, this idea is the root cause of why Go generics are so complicated yet so restrictive at the same time. And it introduces significant challenges in implementation and design: https://go101.org/generics/888-the-status-quo-of-go-custom-g...
I find C++ templates simpler than Go generics. With C++, you can at least get to a design solution. With Go generics: oops, this is not possible; oops, that is not possible - all because of strange language limitations.

Go's generics are a crippled implementation - they don't really deserve the feature title of 'generics'. (It's like saying you support regex, but don't support groups or repeat operators, and you can only match them against special types of strings.)
I don't disagree that Go's generics are pretty limited. But I find it a strange complaint when contrasted with C++ templates, which, as I understand it, are literally not part of the type system - so there seems to be a far stronger case that they cannot be called generics.

The main difference between Go's generics and C++ templates (and where some of the restrictions come from) is that Go insists that you can type-check both the body of a generic function and the call to it without one having to know about the other. My understanding is that with C++ templates (even including concepts), the type checking can only happen at the call-site, because you need to know the actual type arguments used, regardless of what the constraints might say.

And this decision leads to most of the complaints I've heard about C++ generics: the long compile times, the verbose error messages and the hard-to-debug type errors.
So, if you prefer C++ templates, that's fair enough. But the limitations are there to address complaints many other people had about C++ templates specifically. And that seems a reasonable decision to me, as well.
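(To make that concrete, a small Go example of mine: the body of Max is checked against its constraint once, and every call is checked against the signature alone:)

    package main

    import (
        "cmp"
        "fmt"
    )

    // The body is checked against the constraint alone: cmp.Ordered
    // guarantees that < is defined for T, so this compiles without the
    // compiler ever seeing a concrete type argument.
    func Max[T cmp.Ordered](a, b T) T {
        if a < b {
            return b
        }
        return a
    }

    func main() {
        // The call is checked against the signature alone; the body of
        // Max never needs to be re-examined.
        fmt.Println(Max(1, 2)) // 2
    }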
> the type checking can only happen at the call-site, because you need to know the actual type arguments used, regardless of what the constraints might say.
No longer true after C++20. When you leverage C++20 concepts in templates, type checking happens in the template body more precisely and earlier than with unconstrained templates.

In the below, a C++20-compliant compiler tries to verify that T satisfies HasBar<T> during template argument substitution, before trying to instantiate the body.
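(The code block didn't survive the copy; this is a minimal reconstruction of the kind of example described, with HasBar, foo and bar taken from the surrounding comments:)

    #include <iostream>

    struct Good {};
    void bar(Good&) { std::cout << "bar(Good&)\n"; }

    // HasBar<T> holds if bar(t) is a valid call for a T lvalue.
    template <typename T>
    concept HasBar = requires(T t) {
        bar(t);
    };

    // The compiler verifies HasBar<T> during template argument
    // substitution, before the body is instantiated.
    template <HasBar T>
    void foo(T t) {
        bar(t);
    }

    int main() {
        Good g;
        foo(g); // OK: HasBar<Good> is satisfied
    }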
The error messages when you use concepts are also more precise and helpfully informative - like Rust generics.

As I said in the other comment, I'm not a C++ user, so I'm relying on cargo-culting and copy-paste. But I think gcc disagrees - otherwise this would not compile, as line 14 is provably invalid: https://godbolt.org/z/P8sWKbEGP
Or am I grossly holding this wrong?
You need to have something that uses those templates. In your godbolt example, add a struct S:
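(This snippet was likewise lost; presumably something like the following, reusing foo and HasBar from the reconstruction above:)

    struct S {}; // provides no bar(S&)

    int main() {
        S s;
        foo(s); // gcc: "constraints not satisfied" for HasBar<S>; without
                // the concept you'd instead get "no matching function for
                // call to 'bar(S&)'" inside foo's body
    }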
Now you will get compile errors saying that the constraint is not satisfied and that there is no matching function for call to 'bar(S&)' at line 14.

Just to clarify why this is a problem: it’s possible for foo and bar to be defined in different libraries maintained by different people. Potentially several layers deep. And the author of the foo library tests their code and it compiles and all of their tests pass and everything is great.
But it turns out that’s because they only ever tested it with types for which there is no conflict (obviously the conflicts can be more subtle than my example). And now a user instantiates it with a type that does trigger the conflict. And they get an error message, for code in a library they neither maintain nor even (directly) import. And they are expected to find that code and figure out why it breaks with this type to fix their build.
Or maybe someone changes one of the constraints deep down. In a way that seems backwards compatible to them. And they test everything and it all works fine. But then one of the users upgrades to a new version of the library which is considered compatible, but the build suddenly breaks.
These kinds of situations are unacceptable to the Go project. We want to ensure that they categorically can’t happen. If your library code compiles, then the constraints are correct, full stop. As long as you don’t change your external API it doesn’t matter what your dependencies do - if your library builds, so will your users.
This doesn’t have to be important to you. But it is to the Go project and that seems valid too. And it explains a lot of the limitations we added.
> You need to have something that uses those templates.
Exactly. That is what I said:
> because you need to know the actual type arguments used, regardless of what the constraints might say.
It is because type-checking concept code is NP-complete - it is trivial to check that a particular concrete type satisfies constraints, but you cannot efficiently prove or disprove that all types which satisfy one constraint also satisfy another. Which you must do to type-check code like that (and give the user a helpful error message such as “this is fundamentally not satisfiable, your constraints are broken”).
And it’s one of the shortcomings of C++ templates that Go was consciously trying to avoid. Go’s generics are intentionally limited so you can only express constraints for which you can efficiently do such proofs.
I described the details a while back: https://blog.merovius.de/posts/2024-01-05_constraining_compl...
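(A Go illustration of mine of what "efficiently doing such proofs" buys: when a generic function calls another generic function, the compiler verifies at the definition that the outer constraint implies the inner one:)

    package main

    import (
        "cmp"
        "fmt"
    )

    func Min[T cmp.Ordered](a, b T) T {
        if a < b {
            return a
        }
        return b
    }

    func Max[T cmp.Ordered](a, b T) T {
        if a > b {
            return a
        }
        return b
    }

    // This type-checks once, at the definition: cmp.Ordered trivially
    // satisfies itself. Had Clamp used a weaker constraint such as `any`,
    // the calls to Min and Max would be rejected here - not at some
    // caller, three dependency layers up.
    func Clamp[T cmp.Ordered](v, lo, hi T) T {
        return Min(Max(v, lo), hi)
    }

    func main() {
        fmt.Println(Clamp(15, 0, 10)) // 10
    }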
There are common solutions for the library issue. Authors of libraries, for example, can force instantiations for a dummy type that checks their concepts:

    template void foo(Dummy);

This can be done at the consumer side as well. I don't see a big deal of this. Dummy checks are common in Go too. For example, to check if a type satisfies an interface:

    var _ MyInterface = (*MyType)(nil)
    var _ SomeInterface = GenericType[ConcreteType]{}

After all, Go checks that a type implements an interface only at the point where you assign or use it as that interface type.

Thanks for your blog post. Unfortunately, the intentional limitations make the design space a massive headache and many times lead to very convoluted APIs. I would actually make the argument that it explodes complexity - for the developer, instead of constraining it.
C++ templates are duck typed at compile time.
Look at Haskell type classes or Rust's traits for some classic examples of how to 'type' your generics. (And compare to what Go and C++ are doing.)
> C++ templates are duck typed at compile time.
This is not really true after C++20. C++ templates can leverage concepts that specify compile-time constraints and perform type checking on template parameters during template argument substitution.
> C++ templates are duck typed at compile time.
"Compile time" is not the right distinction. This is about "instantiation time". Go's implementation specifically allows to type-check the body and the call separately. That is, if you import a third-party package and call a generic function, all the type checker needs to look at to prove correctness is the signature of the function. It can ignore the body.
This is especially relevant if you call a generic function from a generic function. For C++, proving that such a call is correct is, in general, NP-complete (it directly maps to the SAT problem: you need to prove that every solution to one arbitrary boolean formula satisfies a different boolean formula). So the designers made the conscious decision to just not do that, instead delaying that check to the point at which the concrete type used to instantiate the generic function is known (because checking that a specific assignment satisfies a boolean formula is trivial). But that also means that you have to (recursively) type-check a generic function again and again for every type argument provided, which can drive up compilation time.

A demonstration is this program, which makes gcc consume a functionally infinite amount of memory and time: https://godbolt.org/z/crK89TW9G (clang is a little bit more clever, but can ultimately be defeated using a similar mechanism).
Avoiding these problems is a specific cause for a lot of the limitations with Go's generics.
> Look at Haskell type classes or Rust's traits for some classic examples of how to 'type' your generics. (And compare to what Go and C++ are doing.)
Yes, those are different beasts altogether, and the differences between what Go is doing and what Haskell and Rust are doing require different explanations.

Though it's illustrative, because it turns out Rust also intentionally limited its generics implementation, to solve the kinds of performance problems Go is worried about. Specifically, Rust has the concept of "Dyn compatibility" (formerly "Object safety"), which exists because otherwise Rust's goal of zero-cost abstractions would be broken. Haskell doesn't have this problem and will happily allow you to use the less efficient but more powerful types.
(All of this should have the caveat that I'm not an expert in or even a user of any of these languages. It's half-knowledge and I might be wrong or things might have changed since I last looked)
>There is an idea that is not obvious until you hear about it for the first time: as interfaces are types themselves, they too can have type parameters
Not obvious???? Go language designers and programmers are living in another world
Sort of wild that the Go blog doesn't have Go syntax highlighting...
It makes more sense if you know about Rob Pike:
https://groups.google.com/g/golang-nuts/c/hJHCAaiL0so/m/kG3B...
>Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods (http://en.wikipedia.org/wiki/Cuisenaire_rods). I grew up and today I use monochromatic numerals.
The language creator really hates it (and most modern editor tooling).
Which also makes more sense if you take into consideration that he has a form of colour blindness: https://commandcenter.blogspot.com/2020/09/color-blindness-i...
Ultimately he's fine with _some_ syntax highlighting, especially the kind that uses whitespace to highlight parts of the syntax, as evidenced by the existence of `go fmt`. He just hasn't taken into consideration that colour is just one typographical tool among many, including the use of whitespace, as well as italics, bold, size, typeface, etc.

Switching inks has been somewhat tedious in printing, but these days most publications seem to support it just fine, and obsessive note-takers also use various pens and highlighters in different colours. For the rest of us it's mostly about the toil of switching pens that's holding us back I think, rather than some real preference for monochromatic notes.

We generally have eyes that can discern colours and brains that can process that signal in parallel to other stuff, which along with our innate selective attention means we can filter out the background or have our attention drawn to stuff like red lights. Intentionally not using that built-in hardware feature is ultimately just making stuff harder on oneself with no particular benefit.
There's also some google groups quote from him about iterators which is also pretty funny given how modern Go uses them, but I don't have the link at hand. Several google groups quotes from the original language creators (not just Pike) tell an unfortunate story about how the language came to be the way it is.
“I don’t like this so let’s force everyone who disagrees with me to do it my way if they want to read my stuff”. How very mature
Might release an extension just to spite him
https://chromewebstore.google.com/detail/go-docs-syntax-high...
I'm a fan of Rob Pike, but not of Go. Rob Pike contributed a lot of thought to editor tooling through the years, albeit not in the direction the industry seems to be going -- for example, Sam and Acme are two editors he developed. Acme's UI design is inspired by Oberon and is based on tiling, but 3rd party tooling integration is entirely different and leverages Plan9 concepts to enable a whole lot of extensibility with practically zero complexity overhead due to integration -- without any true plugin architecture. There are limits to what can be accomplished this way, but it is surprisingly powerful and I can see why a community might gravitate to his views.

Unfortunately he takes this minimalist approach too far when it comes to languages IMO -- a language with no coproducts in 2025 is either a niche language or unnecessarily underpowered (how they do error handling is atrocious). Over the last decade Go went from the former to the latter.
To save others a google: coproducts = sum types AKA tagged unions.
This is hilarious to me
Smugness 101, or how to convey a personal preference in the most insufferable way imaginable.
Imagine the brain cycles rob pike is wasting. Good on him for having so many to spare
It's definitely weird in this day and age, but in the Go code examples... I don't miss it.
Paraphrasing, but if you need syntax highlighting to comprehend code, maybe your code is too complicated.
How does that matter, if it's more _easily_ comprehended (faster, with less effort, with fewer mistakes in comprehension) with the highlighting, for any level of complexity?
Not choosing to use syntax highlighting is just wrong on every level. It has exactly zero drawbacks.
> if it's more _easily_ comprehended (faster, with less effort, with fewer mistakes in comprehension) with the highlighting
But this is entirely relative to the person reading. It may be easier for you with highlighting, but for someone else it may not be.
Yes. And there should be studies showing that the number of people who are hampered by syntax highlighting is vanishingly small compared to those who are either helped or not helped (unhelped, but not hampered).
Syntax highlighting studies usually don't report on whether some subjects perform worse with syntax highlighting - usually only that they as a group perform better. But even with that evidence, it should be obvious that syntax highlighting should be either on for everyone, or on initially and off as an option for the rare individual.
https://ppig.org/files/2015-PPIG-26th-Sarkar1.pdf
That's just suffering for suffering sake with no fathomable benefit. Why not reduce cognitive overhead if you can get it for free?
one wonders why colors exist at all. why, we should know all about vegetation, streams, living, and non-living organisms so that their chromatic attributes are very unnecessary. monochrome for the win! i propose dark gray btw /s

on a more serious note: somehow nature chose to let us see colors, and this sense has been immensely useful to our existence and pleasure. maybe go could learn a thing or two from nature?
Go code is outrageously ugly, and they'd rather you not highlight it.
> At this point, you might feel pretty overwhelmed. This is rather complicated and it seems unreasonable to expect every Go programmer to understand what is going on in this function signature. We also had to introduce yet more names into our API. When people cautioned against adding generics to Go in the first place, this is one of the things they were worried about.
One of the key benefits of Go, at least for me, was not having to think about any of this at all ever.
Whenever I touch generics, I find myself engrossed in the possibility of cleverly implementing something. Hours will pass as I try to solve the fun puzzle of how to do the thing using generics, rather than just solve the problem at hand.
It exchanges it for code-generation pain. Which one is worse is on a case-by-case basis.
I imagine that people who prefer code generation just like the idea that it has a higher skill/investment floor to add to a project, so most projects instinctively avoid it.

While people who prefer generics jump at them even when they are not necessary or don't bring a lot of benefits.
But those are human problems, not so much shortcomings of those two techniques themselves.
If I'm being honest, the magic of Go was lost when generics were introduced. It now feels akin to Java, which I guess was inevitable and for anyone to really take it seriously maybe it needed to get here. But I am not a fan of generics. While that level of abstraction and composability is clever, it also lends itself to more complexity and systems that can be harder to concretely understand. Just an opinion that I know many will not agree with but I come from the systems side rather than pure software engineering. It's probably ironic considering go-micro leans heavily on interfaces for abstraction but in that there are many hard learned lessons.
Interesting perspective.
Coming from C#, whose generics are first class, I struggled to obtain any real value from Go's generics. It's not possible to execute on ideas that fit nicely in your head, and you instead end up fighting tooth and nail to wrangle what feels like an afterthought into something concrete that fits in your head.
Generics work well as a replacement for liberally using interface{} everywhere, making programs more readable, but at the class and interface level I tend to avoid them, as I find I don't really understand what is going on. I just needed it to work so I could move on.
The difference with Java is that in Java, generics are everywhere and they make up half the Java spec - I bookmarked a page ages ago (from a HN comment) that highlights it, see [0], and that's just one page.
With Go, at least initially, it was an addition, not a core aspect of it - any code written in Go before generics will still work. Granted, I only have one real project but I never had a use case for generics - the built-in generic structures (map and arrays/slices) were enough for me. Maybe when you have code that works with the `interface{}` a lot (e.g. unknown JSON data) you'll have a use case for it.
[0] https://angelikalanger.com/GenericsFAQ/FAQSections/TypeParam...
> Maybe when you have code that works with the `interface{}` a lot (e.g. unknown JSON data) you'll have a use case for it.
I think in those cases, generics are specifically kind of pointless. Because you will inherently need to use `reflect` anyways. Generics are only helpful if you do know things about your types.
Generics are most useful for people who write special-purpose data structures. And hence for people who need such special-purpose data structures but don't want to implement them themselves. The prototypical example is a lock-free map, which you only need if you really need to solve performance problems, and which specific kind of lock-free map you need depends very heavily on your workload. `sync.Map` is famously only really useful for mostly write-once caches, because that's what it's optimized for.

The vast majority of people don't need such special-purpose data structures and can get by just fine with a `map` and a mutex. But Go has reached the level of adoption where it can only really grow further if it can also address the kinds of use cases which do need something more specific.
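(To illustrate with a sketch of mine: even the "map and a mutex" baseline becomes a reusable, type-safe package with generics, and the same shape scales up to the more specialized structures mentioned above:)

    package main

    import (
        "fmt"
        "sync"
    )

    // LockedMap is the "map and a mutex" baseline, packaged once for any
    // comparable key and any value type, with no interface{} assertions.
    type LockedMap[K comparable, V any] struct {
        mu sync.Mutex
        m  map[K]V
    }

    func NewLockedMap[K comparable, V any]() *LockedMap[K, V] {
        return &LockedMap[K, V]{m: make(map[K]V)}
    }

    func (l *LockedMap[K, V]) Store(k K, v V) {
        l.mu.Lock()
        defer l.mu.Unlock()
        l.m[k] = v
    }

    func (l *LockedMap[K, V]) Load(k K) (V, bool) {
        l.mu.Lock()
        defer l.mu.Unlock()
        v, ok := l.m[k]
        return v, ok
    }

    func main() {
        c := NewLockedMap[string, int]()
        c.Store("hits", 1)
        v, ok := c.Load("hits")
        fmt.Println(v, ok) // 1 true
    }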
> If I'm being honest, the magic of Go was lost when generics were introduced. It now feels akin to Java, [...]
Funnily enough, Java didn't use to have generics. I wonder whether it didn't feel like Java back then?
I was so excited when generics were going to release, and, tbh, I've barely used them. It's made some code easier to express correctly in the type system in rare cases.
I don't think I'd agree it's made the language "Java-like". That sounds like more of an indictment of the author of the code you're reviewing ;)
No one is forcing you to use the full scope of language features for every project.
This kind of argument comes up every time a new C# language version rolls out - as if it's a breaking change and now everyone is going to be forced to refactor for it.
The only other way I can read this is in terms of wishing others would use tools in the way you prefer, which is clearly a waste of energy.
How is this better than rewriting containers for specific element types? Go was supposed to be simple and I can't understand any of this rubbish.
It seems fairly clear to me that it is preferable to import `rsc.io/omap` over having to implement a self-balancing binary search tree?
I didn't realize how important order was to type inference.
Are there any real packages out there using these techniques?
> I didn't realize how important order was to type inference.
I was unclear, I'm afraid. You can reorder the type parameters, it just changes which of them you need to specify: https://go.dev/play/p/oDIFl3fZiPl
The point is that you can only leave off elements from the end of the list, to have them automatically inferred.
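(A standalone sketch of the same rule, mine rather than the playground link's:)

    package main

    import (
        "fmt"
        "strconv"
    )

    func Transform[F, T any](xs []F, f func(F) T) []T {
        out := make([]T, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func main() {
        xs := []int{1, 2, 3}
        // Supplying a prefix of the type parameters is fine: F is given,
        // T is inferred from the function literal.
        ys := Transform[int](xs, func(x int) string { return strconv.Itoa(x) })
        fmt.Println(ys) // [1 2 3]
        // But there is no syntax to skip F and supply only T:
        // Transform[, string](xs, ...) does not exist.
    }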
> Are there any real packages out there using these techniques?
I think so far, the usage of generics for containers in Go is still relatively sparse in public code. I think that is partly because the documentation of how to do it is equally sparse. That is part of the motivation for the post: to have a bit of somewhat official documentation for these things, so they become more widely known.
The standard library is just starting to add generic containers: https://github.com/golang/go/issues/69559 And part of that is discussing how we want to do things like this: https://github.com/golang/go/issues/70471
That being said, I have used the pointer receiver thing in my dayjob. One example is protobuf. We have a generic helper to set a protobuf enum from the environment. Because of how the API was designed, that required a pointer receiver constraint.
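(A sketch of that pattern from memory; the names and the Set method are illustrative, not the actual protobuf helper:)

    package main

    import (
        "fmt"
        "os"
    )

    // Setter is the pointer-receiver constraint: PT must be *T and also
    // have a Set method. The shape is hypothetical, loosely modeled on
    // the helper described above - not the actual code.
    type Setter[T any] interface {
        *T
        Set(string) error
    }

    // FromEnv can declare a T itself and still call the pointer method,
    // because PT is known to be exactly *T.
    func FromEnv[T any, PT Setter[T]](key string) (T, error) {
        var v T
        if err := PT(&v).Set(os.Getenv(key)); err != nil {
            return v, err
        }
        return v, nil
    }

    // Level is a toy type whose Set uses a pointer receiver.
    type Level int

    func (l *Level) Set(s string) error {
        if s == "" {
            return fmt.Errorf("empty value")
        }
        *l = Level(len(s)) // toy "parsing"
        return nil
    }

    func main() {
        os.Setenv("LEVEL", "debug")
        // PT is inferred from the constraint; only T is supplied.
        l, err := FromEnv[Level]("LEVEL")
        fmt.Println(l, err) // 5 <nil>
    }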
The automatic part was what I was referring to, yes. I didn't realize you wrote the article, thanks!
The article mentions using the function version to implement all others, but also that the method version would be optimized better.
Would the compiler be able to inline MethodTree's compare even though it's passed in as a function variable to node.insert?
In practice, currently, that depends on inlining decisions. If the function taking the function (say `node.insert`) is inlined, then yes. There are also other optimizations, like escape analysis, that matter here: the compiler can prove that the arguments to `node.insert` only escape into the `cmp` passed in. That decision is kept as metadata on `node.insert` even if it is not inlined. So if you pass a method expression to it, it can actually look at that and decide that the arguments don't escape from it either, and hence that they don't escape overall. Whereas if you pass a `func` field, it can make no assumptions.

My larger point, though, is that with the `func` field the compiler can't optimize things even in principle. A user could always reassign this field (if nothing else, using `*t = *new(FuncTree)`). So the compiler has to treat it as a dynamic call categorically. If the `func` is passed as a function argument, then at least in principle it can prove that the function can't get modified during the call, and so can make optimization decisions based on what is being passed to it. For example, even without inlining, a future compiler might decide to compile two versions of `node.insert`: one with general dynamic calls and one specialized for a specific static function.

My philosophy when it comes to API decisions that impact performance is not to make them too dependent on what the compiler is doing today, but just to take care that there is enough information there that the compiler can do an optimization in principle - which means it either will do it today, or we can make it smarter in the future if it becomes a problem.
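(A reconstructed sketch of the two designs being compared; `FuncTree` and `node.insert` follow the article's names, the bodies are my own minimal fill-in:)

    package main

    import "fmt"

    type node[E any] struct {
        val         E
        left, right *node[E]
    }

    // The func-field design: a user can reassign cmp at any time (if
    // nothing else, via *t = *new(FuncTree[E])), so every call through
    // it must remain a dynamic call.
    type FuncTree[E any] struct {
        cmp  func(a, b E) int
        root *node[E]
    }

    // The parameter design: cmp cannot change for the duration of the
    // call, so a compiler is free, in principle, to specialize insert
    // for a particular static function.
    func (n *node[E]) insert(cmp func(a, b E) int, e E) *node[E] {
        if n == nil {
            return &node[E]{val: e}
        }
        if cmp(e, n.val) < 0 {
            n.left = n.left.insert(cmp, e)
        } else {
            n.right = n.right.insert(cmp, e)
        }
        return n
    }

    func main() {
        var root *node[int]
        for _, v := range []int{3, 1, 2} {
            root = root.insert(func(a, b int) int { return a - b }, v)
        }
        fmt.Println(root.val, root.left.val) // 3 1
    }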
Like many other people, I tried my hand at a generic container library. It worked, but was surprisingly impractical. For example, debugging was hell - there are no custom type renderers in delve.
Preaching to the choir here, but this is why a lot of the Go community was against generics.
Especially in the era of AI assistants, the downside of writing out explicit types and repetition matters very little, while the upside of avoiding all this complexity is unmeasurable.
> the downside of writing out explicit types and repetition matters very little
That’s not the main reason. You can have a library by author X that provides a container type Heap[T] and you can use it with your type T which is unknown by X and requires no coordination. If the proto-generic maps and slices did not exist in Go it would not be a useful language at all.
This pain point was glaring in Sort and Heap in std. The argument was whether the complexity was worth it and whether compile-time speed could remain so fast. Even the improved expressivity isn’t obviously good (famously, the removal of goto was good because it reduced expressivity).
Just stating the arguments, I still haven’t made up my mind whether these limited generics was the right call. Leaning yes, but it’s important to be humble. It takes a lot of time to evaluate second order effects.
> Especially in the era of AI assistants
As an aside, I really don’t appreciate this argument without extremely strong merits, which we can’t possibly have. Not everyone is using AI assistants, nor do people use it in the same way. But most importantly it changes very little since code is not bottlenecked by writing anyway. Code is read more often than written, and still needs to be reviewed, understood and maintained.
As far as I've seen, a heap implementation using generics is not any shorter or simpler than the old `heap.Interface` - what it gained is reusability.
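(A rough sketch of mine of what that looks like; about the same amount of code as a container/heap implementation, but written once for all element types:)

    package main

    import "fmt"

    // One implementation for every element type. Not much shorter than a
    // heap.Interface version - the win is that no per-type Len/Less/Swap
    // boilerplate is needed at each use site.
    type Heap[T any] struct {
        data []T
        less func(a, b T) bool
    }

    func (h *Heap[T]) Push(v T) {
        h.data = append(h.data, v)
        for i := len(h.data) - 1; i > 0; {
            parent := (i - 1) / 2
            if !h.less(h.data[i], h.data[parent]) {
                break
            }
            h.data[i], h.data[parent] = h.data[parent], h.data[i]
            i = parent
        }
    }

    func main() {
        h := &Heap[int]{less: func(a, b int) bool { return a < b }}
        for _, v := range []int{3, 1, 2} {
            h.Push(v)
        }
        fmt.Println(h.data[0]) // 1
    }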
> Code is read more often than written, and still needs to be reviewed, understood and maintained.
Which takes us back to the points above. AI is really good at generating repetitive patterns, like plain types, or code that implements a certain interface. If you reduce the cost of creating the verbose code [at write time] we can all enjoy the benefit of reduced complexity [at read time] without resorting to generics.
Also not saying this as an absolute truth; it is more nuanced than that, for sure. But in the big picture, generics reduce the amount of code you have to write, at the cost of increased layers of abstraction, steering away from the simplicity that made Go popular in the first place. Overall I'm not convinced it was a net positive, yet.
Generics are not about reducing how many keys you press but about abstracting your logic from the type. In Go, they eliminate a lot of code, making it safer and faster. Handling interface{} was just painful.
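For instance, a pre-generics vs. generics sketch (hypothetical code; `cmp.Ordered` requires Go 1.21+):

```go
package main

import (
	"cmp"
	"fmt"
)

// Pre-generics: interface{} plus type assertions, checked only at runtime.
func minOld(a, b interface{}) interface{} {
	switch x := a.(type) {
	case int:
		if y := b.(int); y < x {
			return y
		}
		return x
	case float64:
		if y := b.(float64); y < x {
			return y
		}
		return x
	}
	panic("unsupported type")
}

// With generics: the same logic written once, checked at compile time,
// with no boxing at the call site.
func minGeneric[T cmp.Ordered](a, b T) T {
	if b < a {
		return b
	}
	return a
}

func main() {
	fmt.Println(minOld(3, 2))         // 2, but minOld("a", "b") panics
	fmt.Println(minGeneric(3, 2))     // 2
	fmt.Println(minGeneric("a", "b")) // "a"; minGeneric(1, "b") won't compile
}
```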
This is an extreme example, and I hardly think anyone writing Go code on a daily basis will need anything close to this. I haven't, and I haven't seen any library that does anything remotely similar. To be honest, hardly anything beyond the stdlib needs to handle generics. They aren't widely used but are quite useful when needed, which I think is the sweet spot for generics.
I don't share the same animosity against generics. I like the recent additions to the language and stdlib, and am also waiting for them to add some sugar to reduce the boilerplate in error handling.
> Especially in the era of AI assistants, the downside of writing out explicit types and repetition matters very little
Yeah, let's design languages based on the capabilities of code assistants /s
> Generics are not about reducing how many keys you press but about abstracting your logic from the type. In Go, they eliminate a lot of code, making it safer and faster.
No, this is not true for Go, at least not with the current design of Go generics.
At runtime, Go generic code can't be faster than generated repetitive code, and is often a little slower, because the compiler's dictionary-based implementation sometimes treats values of type parameters like interface values, even when they are not interfaces.
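Whether that overhead matters in a given program is easy to measure; a minimal benchmark shape (hypothetical names, place in a _test.go file; results vary by compiler version):

```go
package sum

import "testing"

// Generic version: with the gc compiler's dictionary-based
// ("GC shape stenciling") implementation, this may not be
// specialized per type the way hand-written copies are.
func sumGeneric[T int | int64](xs []T) T {
	var s T
	for _, x := range xs {
		s += x
	}
	return s
}

// Hand-specialized version for comparison.
func sumInt(xs []int) int {
	s := 0
	for _, x := range xs {
		s += x
	}
	return s
}

var xs = make([]int, 1024)

func BenchmarkGeneric(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = sumGeneric(xs)
	}
}

func BenchmarkSpecialized(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = sumInt(xs)
	}
}
```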
> Handling interface{} was just painful.
Go generics are often helpless for this. Most use cases of interface{} are for reflection purposes and can't be re-implemented with Go generics. Some non-reflection use cases can't be re-implemented either, because type unions are only allowed in constraints, not as ordinary interface types.
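The union limitation in concrete terms (a small sketch):

```go
package main

// Legal: union terms inside a constraint interface.
type Number interface {
	int | float64
}

// Legal: Number used to constrain a type parameter.
func Sum[T Number](xs []T) T {
	var s T
	for _, x := range xs {
		s += x
	}
	return s
}

func main() {
	_ = Sum([]int{1, 2, 3})

	// Not legal: an interface containing union terms cannot be used
	// as an ordinary value type, so this does not compile:
	// var n Number = 3 // error: interface contains type constraints
}
```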
>> Especially in the era of AI assistants, the downside of writing out explicit types and repetition matters very little
> Yeah, let's design languages based on the capabilities of code assistants /s
I mean, that _is_ essentially the Go team's take these days, cf. their previous blog post about error handling: https://go.dev/blog/error-syntax
> Writing repeated error checks can be tedious, but today’s IDEs provide powerful, even LLM-assisted code completion. Writing basic error checks is straightforward for these tools. The verbosity is most obvious when reading code, but tools might help here as well; for instance an IDE with a Go language setting could provide a toggle switch to hide error handling code.
Personally, getting an LLM to write error handling and then having the IDE hide it sounds like a recipe for surprises, but I guess things work out differently if the goal is to have hordes of the cheapest possible juniors kitted out with tools that let them produce the most amount of code per dollar.
It's one thing to say that:
- an LLM can help you write a boilerplate `if err != nil { return fmt.Errorf(...) }` that actually matches the conventions for the code base you're in;
- your IDE can "hide" those additional lines of code to reduce cognitive load while reading code;
- it's actually useful that those "hidden" lines are there when you're debugging and want a place to add a breakpoint, or some additional logging, etc.
This is very different from saying you should have an LLM auto-generate half a dozen identical copies of sync.Map, container.List, my.Set or whatever.Tree based on the types you want to put in your container.
I'm actually fine with an LLM as a more powerful auto complete, that generates half a dozen lines of code at a time (or slightly tweaks code I paste) based on context.
I would have a problem with an LLM generating thousands of lines of code based on a prompt "this, but for ints", producing a fork of the original, with god knows how many subtle details lost, and a duplicated maintenance burden going forward.
> that _is_ essentially the Go team's take these days
It is not "essentially their take". It is one of their points (a weak one, for what my opinion is worth) but far from the main one. Their main point in the text is the same one they always make in these cases:
> Coming up with a new syntax idea for error handling is cheap; hence the proliferation of a multitude of proposals from the community. Coming up with a good solution that holds up to scrutiny: not so much.
> the goal is to have hordes of the cheapest possible juniors kitted out with tools that let them produce the most amount of code per dollar
I share the same concern here. I don't have a solid opinion on how that will turn out but I'm not too optimistic.
Wait till they hear about typeclasses!
If Go generics supported typeclasses, things would be much better. At least custom generics and built-in generics would be unified harmoniously. Right now, the way type arguments are passed to the built-in `new` and `make` functions differs from the way they are passed to custom generic functions. The inconsistency increases the cognitive burden of Go programming.
It is a pity that the Go generics designers never expressed any intention to unify custom generics and built-in generics.
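The inconsistency in concrete terms (a small sketch; `New` is a hypothetical stand-in for any custom generic function):

```go
package main

import "fmt"

// A custom generic function: the type argument goes in brackets
// before the value arguments (or is inferred).
func New[T any]() *T {
	return new(T)
}

func main() {
	// Built-in generics: the "type argument" is an ordinary argument.
	m := make(map[string]int, 8)
	p := new(int)

	// Custom generics: the type argument uses a separate bracket syntax.
	q := New[int]()

	fmt.Println(m, *p, *q)
}
```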
Remember when Go was proposed as a simple language?
What a shitshow. Seems like Go's designers didn't know about interfaces, generics, and iterators when they decided to make a language...
It was also promoted as a language that prioritizes explicitness. But just look at the changes made in Go 1.22 (the 3-clause for-loop semantic change [1]) and 1.23 (iterators [2]). Magic implicitness was introduced in both versions (see the sketch after the references below).
Even worse, it was also promoted as taking backward compatibility seriously. But Go 1.22 broke backward compatibility badly ([3] [1]). Despite this, the Go 1.22 release notes still claim "As always, the release maintains the Go 1 promise of compatibility".
[1]: https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...
[2]: https://go101.org/blog/2025-03-15-some-facts-about-iterators...
[3]: https://go101.org/bugs/go-build-directive-not-work.html
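For concreteness, this is the kind of behavior change [1] describes (a minimal sketch):

```go
package main

import "fmt"

func main() {
	var funcs []func()
	for i := 0; i < 3; i++ {
		funcs = append(funcs, func() { fmt.Println(i) })
	}
	for _, f := range funcs {
		f()
	}
	// Before Go 1.22: the closures share one variable i and print 3 3 3.
	// Since Go 1.22: each iteration gets a fresh i, printing 0 1 2.
	// The source is identical; which meaning applies depends on the
	// go directive in go.mod.
}
```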
And the change makers show no interest in fixing the problems caused by the changes:
* https://github.com/golang/go/issues/66070#issuecomment-19816...
* https://github.com/golang/go/issues/71830
* https://github.com/spq/pkappa2/issues/238
* https://github.com/golang/go/issues/66388
* https://github.com/golang/go/issues/71685
The market for a simple language has opened up again. Someone will fill it. Go failed expectations by turning into C++.
Horrorshow. Long-term Go user and open-source maintainer here... looking at this code makes me want to puke. The whole thing is a crime against semantics. I thought the whole point was to only do generics once they could be done well, right? This is unwell.
When I first learned about Go I thought the idea was to have a simple C-like language with a frozen feature set. A language that would look the same today and ten years from now. And I thought that boringness was a wonderful feature, actually.
If they're going to be adding features to the language, albeit at a slower pace than Java/C#, what's the point really? On a long enough timeline Go is going to be indistinguishable from these more feature-rich languages.
> When I first learned about Go I thought the idea was to have a simple C-like language with a frozen feature set.
C is a C-like language with a mostly frozen feature set. (If you want something less insane than C, there's also Pascal.)
C is not frozen.
C11 added generic selections (`_Generic`), multi-threading, Unicode support, and static assertions. It broke compatibility with earlier versions by removing the `gets` function.
C23 added `nullptr` (a fairly fundamental change), the `typeof` operator, and the `auto` keyword for type inference. It brought lots of breaking changes by introducing new keywords. Another breaking change: empty parentheses `()` in a function declaration now mean a function taking no arguments.
So: lots of new features and breaking changes with every new iteration. Thankfully, compilers support the saner standards, so you can just use `-ansi` and live a happy life, I guess...
“Maybe generics like in Java and C# actually made sense after all.”
Welcome to civilization, golang. Were there ever any language developers with more hubris?
I think of it less as hubris and more as a (failed) experiment. It was deliberately built as a 'stupid' language for fresh undergrads lacking design experience.
It is a technical solution for a people problem. It is better to guide and to mentor people in designing the right abstractions. What we should learn from this experiment is that this is the wrong approach.
> It is a technical solution for a people problem. It is better to guide and to mentor people in designing the right abstractions. What we should learn from this experiment is that this is the wrong approach.
Nah, it was just the wrong solution.
People problems are basically intractable in the grand scheme of things. Whenever you can turn a people problem into a technical problem, that's an opportunity for progress.
Imagine telling everyone to be professional and careful not to break the program when they edit the code. Sounds like a big people problem!
Instead, we give everyone their own copy to muck around with (instead of a shared folder), and we only allow changes to be integrated into the 'master copy' if they pass automated tests.
A good manager and really motivated, professional workers can help cope with people problems. But there's a limit to their ability. So the more we can offload to technological solutions, the more 'professionalism' (for lack of a better word) we can spare for the tasks that can't yet feasibly be solved via technology.
And I agree that not all technical solutions work! You need to experiment, and make judgement calls.
I agree it was the wrong solution. The problem is that Go is quite popular and lots of code has been written in a language that cannot be fixed. And that is also a people problem, because living daily with a programming language feels like a marriage.
People keep fixing the unfixable rather than moving on. I see the same happening with Python.
To me it's very useful: whenever someone tells me Go is their favourite language, I know they can't be trusted.