I’ve been transitioning to Go after years in other ecosystems, and kept running into the same problem: I could write correct Go code, but not idiomatic Go.
Most material focuses on syntax or algorithms. In practice, what caused friction were production mismatches: context cancellation and goroutine leaks, errgroup vs WaitGroup tradeoffs, HTTP client hygiene, error wrapping semantics, allocation control, embed/io/fs for dev–prod parity, etc.
I started collecting small, constraint-driven katas that isolate one such mismatch at a time. Each kata defines explicit pass/fail idiomatic constraints, rather than providing solutions. The goal is deliberate practice, not “best practices” or tutorials.
This repo is curated by someone transitioning to Go, for others doing the same. It’s not meant to be authoritative. If you’re experienced with Go and spot incorrect, unsafe, or misleading constraints, issues and PRs with rationale and references are explicitly encouraged.
I’m especially interested in feedback from people using Go in production on where these constraints are wrong, incomplete, or missing important edge cases.
What is the difference between a "prod grade kata" and a practice project?
I fail to see the value to someone transitioning to Go if there aren't any reference solutions.
How would one know if one's code is idiomatic, without either a reference or someone to ask?
And if you have someone to ask, what's the point of this repo?
The instructions mention "Reflect: Compare your solution with the provided 'Reference Implementation' (if available)", but not a single line of code is present.
Is this an artifact of it all being AI-generated, or is it a work in progress?
If the idea is to have devs implement each kata, wouldn't it be more effective to provide not only automated tests, but also code which should be used as a basis for each challenge?
For example, if supporting a dev tag to serve assets from the filesystem, why not include a simple webserver which embeds the contents?
This would allow aspiring gophers to go straight to the high value modification of a project rather than effectively spend most of the time writing scaffolding and tests.
Could you please add a LICENSE file to the repository? MIT or any Creative Commons license would be a great choice.
I'm not sure why this has been flagged: it's not possible to copy code from the repo, since it defaults to all-rights-reserved (i.e. proprietary).
Unless that's explicitly the intent, in which case that's fair.
I flagged because perennial complaining about the license is boring, adds nothing of value to the conversation, and just leads to the same tired series of arguments.
There's also https://github.com/cdarwin/go-koans for small Go exercises.
You may have issues running it in Docker; when I last touched this I needed to modify docker.sh:
-docker run --rm -ti -v "$PWD":/usr/src/koans -w /usr/src/koans golang:1.6.0-alpine go test
+docker run --rm -ti -e GO111MODULE=off -v "$PWD":/usr/src/koans -w /usr/src/koans golang:1.18-alpine go test
This honestly just looks like a bunch of ChatGPT output. There’s almost no code (I checked maybe half a dozen topics). Not sure how useful this is for anyone besides the author. Why would I look at this instead of asking an LLM?
I’m sure this is useful for me, a staff+ engineer building a team that is transitioning to Go.
Including these katas in AGENTS.md is extremely useful.
Why was this downvoted?
I would guess that most folks would consider a bunch of problem prompts with no reference solutions to be not so useful. How would you check your understanding? How would you know if you were writing Go code that would be frowned upon by industry veterans?
I wish it had solutions.
And automated tests in some cases.