This is actually really cool. I just tried it out using an AI Studio API key and was pretty impressed. One issue I noticed was that the output was a little too "for dummies". Spending paragraphs explaining what an API is through restaurant analogies is a little unnecessary, and then following up with more paragraphs on what GraphQL is. Every chapter seems to suffer from this. The generated documentation seems more suited to a slightly technical PM than to a software engineer. This can probably be mitigated by refining the prompt.
The prompt might also be better if it encouraged variety in diagrams. For some things, a flowchart would fit better than a sequence diagram (e.g., a durable state machine workflow written using AWS Step Functions).
Answers like this are sort of what makes me wonder what most engineers are smoking when they think AI isn’t valuable.
I don’t think the outright dismissal of AI is smart. (And, OP, I don’t mean to imply that you are doing that. I mean this generally.)
I also suspect people who level these criticisms have never really used a frontier LLM.
Feeding in a whole codebase that I’m familiar with, and hearing the LLM give good answers about its purpose and implementation from a completely cold read is very impressive.
Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.
Exactly, it is. It's rather impressive, but at the same time the audience is always going to be engineers, so perhaps it can be curated to still be technical to a degree? I can't imagine a scenario where I have to explain my ETL pipeline to the VP.
From flow.py:

    Ensure the tone is welcoming and easy for a newcomer to understand{tone_note}.
    - Output only the Markdown content for this chapter.
    Now, directly provide a super beginner-friendly Markdown output (DON'T need ```markdown``` tags)
So just a change here might do the trick if you’re interested.
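If it helps, here's a rough sketch of how that line could be parameterized instead of hard-coded. The variable and function names below are illustrative, not the actual ones in flow.py:

    # Hypothetical sketch: swap the hard-coded "beginner-friendly" instruction
    # for an audience parameter. Names are illustrative, not from flow.py.
    AUDIENCE_NOTES = {
        "newcomer": "Ensure the tone is welcoming and easy for a newcomer to understand.",
        "engineer": ("Assume the reader is a software engineer. Skip analogies for basic "
                     "concepts like APIs; focus on architecture, data flow, and trade-offs."),
    }

    def chapter_instructions(audience: str = "engineer") -> str:
        return (
            f"{AUDIENCE_NOTES[audience]}\n"
            "- Output only the Markdown content for this chapter.\n"
            "Now, directly provide the Markdown output (DON'T need ```markdown``` tags)."
        )

Whatever assembles the full prompt could then take the audience as a command-line flag.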
But I wonder how Gemini would manage different levels. In my experience (mostly edtech, and not in English) it's really hard to tune the tone of the answer properly and not just get a black-and-white (5-year-old vs. expert talk) answer. Does anyone have advice on that?
This has given me decent success:
"Write simple, rigorous statements, starting from first principles, and making sure to take things to their logical conclusion. Write in straightforward prose, no bullet points and summaries. Avoid truisms and overly high-level statements. (Optionally) Assume that the reader {now put your original prompt whatever you had e.g 5 yo}"
Sometimes I write a few more lines with the same meaning as above, and sometimes fewer; they all work more or less OK. Randomly I get better results with small tweaks, but nothing to make a pattern out of -- a useless endeavour anyway, since these models change in minute ways every release, and in neural nets the blast radius of a small change is huge.
Thanks I’ll try that!
I love it! I effectively achieve similar results by asking Cursor lots of questions!
Like at least one other person in the comments mentioned, I would like a slightly different tone.
Perhaps a good feature would be a "style template" that can be chosen to match your preferred writing style.
I may submit a PR, though not if it takes a lot of time.
Thanks—would really appreciate your PR!
Woah, this is really neat. My first step for many new libraries is to clone the repo, launch Claude code, and ask it to write good documentation for me. This would save a lot of steps for me!
I built browser-use. Dayum, the results for our lib are really impressive; you didn't touch the outputs at all? One problem we have is keeping the docs in sync with the current codebase (code examples break sometimes). I wonder if I could use parts of Pocket to help with that.
Thank you! And correct, I didn't modify the outputs. For small changes, you can just feed the commit history and ask an LLM to modify the docs. If there are lots of architecture-level changes, it would be easier to just feed the old docs and rewrite - it usually takes <10 minutes.
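For the incremental case, something along these lines might work; call_llm below is just a placeholder for whatever client you use (Gemini, Claude, etc.), not a specific API:

    # Rough sketch, not part of Pocket: feed the recent diff plus the current
    # chapter to an LLM and ask for a minimal update. call_llm is a placeholder.
    import subprocess

    def update_doc(doc_path: str, since: str = "HEAD~20") -> str:
        diff = subprocess.run(
            ["git", "diff", since, "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        with open(doc_path, encoding="utf-8") as f:
            old_doc = f.read()
        prompt = (
            "Here is a tutorial chapter and the recent code changes.\n"
            "Update the chapter so code examples and descriptions match the new code, "
            "changing as little prose as possible.\n\n"
            f"--- CURRENT CHAPTER ---\n{old_doc}\n\n--- GIT DIFF ---\n{diff}"
        )
        return call_llm(prompt)  # placeholder for your LLM client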
As a maintainer of a different library, I think there's something here. A revised version of this tool that also gets fed the docs and is asked to find inaccuracies could be great. Even if false positives and false negatives are, let's say, 20% each, it would still be better than before, since final decisions are made by a human.
A company (mutable ai) was acquired by Google last year for essentially doing this but outputting a wiki instead of a tutorial.
Their site seems to be down. I can't find their results.
Were they acquired? Or did they give up and the CEO found work at Google?
https://news.ycombinator.com/item?id=42542512
The latter is what this thread claims ^
It sounds like it'd be perfect for Google's NotebookLM portfolio -- at least if they wanted to scale it up.
This is really cool and very practical. I'll definitely try it out for some projects soon.
I can see some fine-tuning being required after generation, but assuming you know your own codebase, that's not an issue anyway.
Can this work with Codeium Enterprise?
Did you measure how much it cost to run it against your examples? Trying to gauge how much it would cost to run this against my repos.
Looks like there are 4 prompts and the last one can run up to 10 times for the chapter content.
You might get two or three tutorials built for yourself inside the free 25/day limit, depending on how many chapters it needs.
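Back-of-the-envelope, assuming those numbers (3 fixed prompts plus one per chapter, against a 25-requests/day free tier):

    # Rough estimate only; actual prompt counts depend on the repo.
    def runs_per_day(num_chapters: int, daily_limit: int = 25) -> int:
        calls_per_run = 3 + num_chapters   # 3 fixed prompts + 1 per chapter
        return daily_limit // calls_per_run

    print(runs_per_day(5))    # -> 3 tutorials/day with ~5 chapters each
    print(runs_per_day(10))   # -> 1 tutorial/day at the 10-chapter maximum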
Interesting.. would you like to share some technical details? It doesn't seem like you used RAG here?
Yeah, RAG is not the best option here. Check out the design doc: https://github.com/The-Pocket/Tutorial-Codebase-Knowledge/bl... I also have a YouTube Dev Tutorial. The link is on the repo.
At the top there's some neat high-level stuff, but below that it quickly turns into code-written-in-human-language.
I think it should be possible to extract more useful usage patterns by poking into the related unit tests. How to use it is what matters most to tutorial readers.
The dspy tutorial is amazing. I think dspy is super difficult to understand conceptually, but the tutorial explained it really well.
That's a game changer for a new open-source contributor's onboarding.
Point it at the Postgres or Redis codebase, get a good understanding, and get going contributing.
Isn't that overly optimistic? The postgres source code is really complex, and reading a dummy tutorial isn't going to make you a database engine ninja. If a simple tutorial can, imagine what a book on the topic could do.
Love this kind of stuff on HN
Does it use the docs in the repository or only the code?
By default we use both, filtered by file patterns:
    DEFAULT_INCLUDE_PATTERNS = {
        "*.py", "*.js", "*.jsx", "*.ts", "*.tsx", "*.go", "*.java",
        "*.pyi", "*.pyx", "*.c", "*.cc", "*.cpp", "*.h",
        "*.md", "*.rst", "Dockerfile", "Makefile", "*.yaml", "*.yml",
    }

    DEFAULT_EXCLUDE_PATTERNS = {
        "*test*", "tests/*", "docs/*", "examples/*", "v1/*",
        "dist/*", "build/*", "experimental/*", "deprecated/*", "legacy/*",
        ".git/*", ".github/*", ".next/*", ".vscode/*",
        "obj/*", "bin/*", "node_modules/*", "*.log",
    }
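If anyone wants to reuse the idea, here's a rough sketch of how glob-style patterns like these could be applied while crawling a repo (the actual filtering logic in the repo may differ):

    # Sketch only; assumes paths are relative to the repo root, so directory
    # patterns like "tests/*" only match top-level directories.
    import fnmatch

    def should_include(relpath: str,
                       include=DEFAULT_INCLUDE_PATTERNS,
                       exclude=DEFAULT_EXCLUDE_PATTERNS) -> bool:
        if any(fnmatch.fnmatch(relpath, pat) for pat in exclude):
            return False
        return any(fnmatch.fnmatch(relpath, pat) for pat in include)

    should_include("nodes.py")            # True
    should_include("tests/test_flow.py")  # False ("*test*" matches)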
Really nice work, and thank you for sharing. These are great demonstrations of the value of LLMs, and they help push back against the negative views about the impact on junior engineers. This helps bridge the gap left by most projects lacking up-to-date documentation.
Love this. These are the kind of AI applications we need which aid our learning and discovery.
Do you have plans to expand this to include more advanced topics like architecture-level reasoning, refactoring patterns, or onboarding workflows for large-scale repositories?
Yes! This is an initial prototype. Good to see the interest, and I'm considering digging deeper by creating more tailored tutorials for different types of projects. E.g., if we know it's web dev, we could generate tutorials based more on request flows, API endpoints, database interactions, etc. If we know it's a long-term maintained project, we can focus on identifying refactoring patterns.
Have you ever seen komment.ai? If so, did you have any issues with the limitations of the product?
I haven't used it, but it looks like it's in the same space and I've been curious about it for a while.
I've tried my own homebrew solutions: creating embedding databases by having something like aider or simonw's llm produce an ingest JSON for every function, then using that as a RAG source in Qdrant to produce an architecture document, then using that to do contextual inline function commenting and generate a doxygen site, and then using all of that once again as an MCP with Playwright to hook it up through Roo (a rough sketch of the embedding step is below).
It's a weird pipeline and it's been ok, not great but ok.
I'm looking into Perplexica as part of the chain, mostly as a negation tool.
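Roughly, the embed-every-function-into-Qdrant step of a pipeline like that could look something like this; collection and field names are illustrative, and the real setup obviously has more moving parts:

    # Rough sketch of the "embed every function, stick it in Qdrant, RAG over it"
    # step described above. Names are illustrative.
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, VectorParams, PointStruct
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim embeddings
    client = QdrantClient(":memory:")                 # or a real Qdrant server

    client.recreate_collection(
        collection_name="functions",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    # functions: list of dicts like {"name": ..., "file": ..., "source": ...},
    # e.g. the per-function JSON produced by an aider / simonw-llm pass.
    def ingest(functions):
        vectors = model.encode([f["source"] for f in functions])
        client.upsert(
            collection_name="functions",
            points=[PointStruct(id=i, vector=v.tolist(), payload=f)
                    for i, (v, f) in enumerate(zip(vectors, functions))],
        )

    def retrieve(question, k=5):
        qv = model.encode([question])[0].tolist()
        hits = client.search(collection_name="functions", query_vector=qv, limit=k)
        return [h.payload for h in hits]  # feed these into the architecture-doc prompt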
No, I haven't, but I will check it out!
One thing to note is that the tutorial generation depends largely on Gemini 2.5 Pro. Its code understanding ability is very good, combined with its large 1M context window for a holistic understanding of the code. This leads to very satisfactory tutorial results.
However, Gemini 2.5 Pro was released just late last month. Since Komment.ai launched earlier this year, I don't think models at that time could generate results of that quality.
I've been using Llama 4 Maverick through OpenRouter. Gemini was my go-to, but I switched basically the day it came out to try it out.
I haven't switched back. At least for my use cases it's been meeting my expectations.
I haven't tried Microsoft's new 1.58-bit model, but it may be a great swap-out for the sentence-embedding model, the legendary all-MiniLM-L6-v2.
I've found that if I'm unfamiliar with the knowledge domain I'm mostly using AI, but as I dive in, the ratio of AI to human shifts to the point where AI is at 0 and it's all human.
Basically AI wins on day 1 but isn't any better by day 50. If that can change, then it's the next step.
Yeah, I'd recommend trying Gemini 2.5 Pro. I know early Gemini models weren't great, but the recent one is really impressive in terms of coding ability. This project is largely designed around that recent breakthrough.
I've used it; I used to be a huge booster! Give Llama 4 Maverick a try, really.
The overview diagrams it creates are pretty interesting, but the tone/style of the AI-generated text is insufferable to me - e.g. https://the-pocket.github.io/Tutorial-Codebase-Knowledge/Req...
Haha. The project is fully open-sourced, so you can tune the prompt for the tone/style you prefer: https://github.com/The-Pocket/Tutorial-Codebase-Knowledge/bl...
Mind explaining what exactly was insufferable here?
If you don't feel the same way from reading it, I'm not sure it can be explained.
This is brilliant. I would make great use of this.
I hate this language: "built an AI", did you train a new model to do this? Or are you in fact calling ChatGPT 4o, or Sonnet 3.7 with some specific prompts?
If you trained a model from scratch to do this I would say you "built an AI", but if you're just calling existing models in a loop then you didn't build an AI. You just wrote some prompts and loops and did some RAG. Which isn't building an AI and isn't particularly novel.
> “I built an AI”
> look inside
> it’s a ChatGPT wrapper
Do one for LLVM and I'll definitely look at it.
For anyone writing AI off as pure hype, this is a counterexample showing its usefulness.
Nobody said AI isn’t useful.
The hype is that AI isn’t a tool but the developer.
Nobody?
https://news.ycombinator.com/item?id=41542497
It doesn't claim AI isn't useful, just that it's not as useful as they thought.
For instance, to me AI is useful because I don't have to write boilerplate code, but that's rarely the case. For other things it's still useful for writing code, but I'm not faster, because the time I save writing the code I spend fixing the prompt and auditing and fixing the code.
I've seen a lot of developers that are absolute tools. But I've yet to see such a succinct use of AI. Kudos to the author.
Exactly, kudos to the author, because AI didn't come up with that.
But that's what they sell, that AI could do what the author did with AI.
The question is whether it's worth putting all that money and energy into AI. MS sacrificed its CO2 goals for email summaries and better autocomplete, not to mention all the useless things we do with AI.
> But that’s what they sell, that AI could do what the author did with AI.
Can you give an example of what you meant here? The author did use AI. What does "AI coming up with that" mean?
It’s about the AI hype.
The AI companies sell it like the AI could do it by itself and developers are obsolete, but in reality it's a tool that still needs developers to make something useful.
The GP commenter complains that it's not AI that came up with an idea and implemented it, but a human.
In a few years we will see complaints that it's not AI that built the power station and the datacenter, so it doesn't count either.
Some people have already said it's useless to learn to program because AI will do it; that's the AI hype, not that AI isn't useful as such, as the parent comment suggested.
They push AI into everything like it's the ultimate solution, but it is not; instead it has serious limitations.
You didn't "build an AI". It's more like you wrote a prompt.
I wonder why all the examples are from projects with great docs already, so it doesn't even need to read the actual code.
> You didn't "build an AI".
True
> It's more like you wrote a prompt.
False
> I wonder why all examples are from projects with great docs already so it doesn't even need to read the actual code.
False.
This: https://github.com/browser-use/browser-use/tree/main/browser...
Became this: https://the-pocket.github.io/Tutorial-Codebase-Knowledge/Bro...
The example you made does, in fact, have documentation:
https://docs.browser-use.com/introduction
You don't point this tool at the documentation though. You point it at a repo.
Granted, this example (and others) have plenty of inline documentation. And, public documentation is likely in the training data for LLMs.
But, this is more than just a prompt. The tool generates really nicely structured and readable tutorials that let you understand codebases at a conceptual level more easily than by reading docstrings and code.
Even if it's only useful for public repos with documentation, that's still useful, and flippant dismissals are counterproductive.
I am keen to try this with one of my own (private, badly documented) codebases and see how it fares. I've actually found LLMs quite useful at explaining code, so I have high hopes.
That's good, though there are tons of docstrings. In my experience LLMs can make no sense of undocumented code.
Fair, there are tons of docstrings. I have had the opposite experience with LLMs explaining code, so I am biased towards assuming this works. I'm keen to try it and see.
Impressive work.
With the rise of AI, understanding software will become relatively easy.
I suppose I'm just a little bit bothered by your saying you "built an AI" when all the heavy lifting is done by a pretrained LLM. Saying you made an AI-based program or hell, even saying you made an AI agent, would be more genuine than saying you "built an AI" which is such an all-encompassing thing that I don't even know what it means. At the very least it should imply use of some sort of training via gradient descent though.
It's an application of AI, which is just software, applied to solve a problem or need.
>:( :3
I would find this more interesting if it made tutorials out of the Linux, LLVM, OpenZFS, and FreeBSD codebases.
I would find this comment more interesting if it didn’t dismiss the project just because you didn’t find it valuable.
So what is the problem with voicing an opinion?
The Linux repository has ~50M tokens, which goes beyond the 1M token limit for Gemini 2.5 Pro. I think there are two paths forward: (1) decompose the repository into smaller parts (e.g., kernel, shell, file system, etc.), or (2) wait for larger-context models with a 50M+ input limit.
Some huge percentage of that is just drivers. The kernel is likely what would be of interest to someone in this regard; moreover, much of that is architecture specific. IIRC the x86 kernel is <1M lines, though probably not <1M tokens.
The AMDGPU driver alone is 5 million lines - out of about 37 million lines total. Over 10% of the codebase is a driver for a single vendor, although most of it is auto generated per-product headers.
You can use the AST for some languages to identify modular components that are smaller and can fit into the 1M window.
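For Python repos, a rough sketch of that idea: use the ast module to summarize each file's top-level components, estimate its token count, and pack files into chunks that fit a context budget (the ~4 chars/token figure is just a heuristic):

    # Sketch only: group files into chunks under a token budget, each of which
    # could get its own tutorial pass. Token estimate is a rough chars/4 heuristic.
    import ast
    from pathlib import Path

    def summarize(path: Path) -> dict:
        source = path.read_text(encoding="utf-8", errors="ignore")
        try:
            tree = ast.parse(source)
            names = [n.name for n in tree.body
                     if isinstance(n, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef))]
        except SyntaxError:
            names = []
        return {"file": str(path), "top_level": names, "tokens": len(source) // 4}

    def pack(files: list, budget: int = 900_000) -> list:
        chunks, current, used = [], [], 0
        for info in sorted(files, key=lambda f: f["tokens"], reverse=True):
            if current and used + info["tokens"] > budget:
                chunks.append(current)
                current, used = [], 0
            current.append(info)
            used += info["tokens"]
        if current:
            chunks.append(current)
        return chunks

    files = [summarize(p) for p in Path(".").rglob("*.py")]
    print(len(pack(files)), "chunks")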