When I start a new project with a team, I start off by asking 'how will we work?', and part of that is 'how will we communicate?'. Less is more in that world. Jira, Confluence, GitHub, Slack, email, standups, ad-hoc meetings, bongo drums, etc. The more places you communicate, the harder it is to keep everyone on the same page. I have always been a fan of putting docs next to code for this exact reason and, as far as I can tell, it has been the right decision every time.
With AI code assistants I personally spend 90% of my time/tokens on design and understanding, which means creating docs that represent the feature and the changes needed to implement it, so I can see the value of this approach growing over time. Software engineering is evolving to be less about writing the code and more about designing the system, and this supports that trend.
In the end I don't think AI has fundamentally changed the benefit/detractor equation; it is just emphasizing that docs are part of the code and making it more obvious that putting them in the repo is generally pretty beneficial.
Bit of a plug I suppose, but this was what motivated me to set up AS Notes, my VS Code extension which turns VS Code into a personal knowledge management system, with linking and markdown tooling. I've built an HTML converter so notes can be published to GitHub Pages from the repo. It's here if it's of interest to anyone: https://www.appsoftware.com/blog/as-notes-turn-vs-code-into-... I'm so much more motivated to write docs when a) it's easy to keep them up to date using an agent, and b) someone (agents) will actually read them!
Pure markdown is fine until you need decent tables or structured metadata. Docs-in-repo sounds clean on paper, but the minute you need comments, suggestions, inline edits, permissions, and approvals from people who do not live in git all day, you are recreating half of Notion or Google Docs with plugins and glue code.
Then you ask marketing or support to open a PR. That is usually where the markdown honeymoon ends.
What about a OneDrive folder shared with all developers, mounted in a place the AI can access? Putting docs in git makes it slow to iterate and share. That's my hesitancy with committing them.
Not sure I agree with this. MD files need to be constantly synced to the state of the code, so why not just grep the code files? This is just more unstructured indexing.
Yeah, my teammates seem to enjoy checking in endless walls of MD "documentation" generated by LLMs after they're done adding a feature. Even if that's an extreme and your documentation is more thoughtful, there is still the problem of:
* redundancy with the code: if code samples can be generated from the code, why bother duplicating them? what do they add? can they not be llm-generated later? and possibly kept somewhere out of the way (like, a website) so as not to clutter the codebase with redundancy
* if you do go for this duplication, then you are on the hook for ensuring it's always up-to-date otherwise it becomes worse than duplicate: misleading
So my preference is: when adding something to the repo, think very hard about whether the information is redundant. Handcrafted docs, notes, and comments that add context, like why something was built this way after a ton of deliberation - yes. Anything trivially derived from the code itself - no.
I've been trying to push people to use hitchstory or similar to generate docs from specification tests precisely to avoid that redundancy but most people just look blankly at it and go "why don't you just do that with AI?"
Grepping works when you wrote the code. Not so much when someone else installs your package and has no idea which export is public API. We added a one-page markdown saying "use these, ignore the rest" and the wrong-import issues mostly stopped.
Sounds like they are saying: use a repo like git for your documents to help AI read/"understand" your docs. Is that correct?
I am all for using a source control system for your documents, I usually use RCS. But give AI access to your docs, no thanks. If I upload any of my docs to a public server (very rarely happens), they are compressed and encrypted to make sure only I and a few people can view them.
For me it's a case of, I have to expose my canvas library documentation for the training data bots to find and (hopefully) include in the LLM training data because it's the only way I'll ever get LLMs to:
A) accept that my library exists, and has its uses (it's a tough world out there for canvas-focussed JS libraries that aren't Fabric.js, Konva.js or Pixi.js)
B) learn how to write code using my library in the best way possible (because the vibes ain't going away, so may as well teach the Agents how to do the work correctly)
Plus, writing the documentation[1] for a library I've been developing for over 10 years has turned into a useful brain-dumping activity to help justify all the decisions I've made along the way (such as my approach to the scene graph). I'm not going to be here forever, so might as well document as much as I can remember now.
It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long standing best practices.
'You must write docs. Docs must be in your repo. You must write tests. You must document your architecture. Etc. Etc.'
These were all best practices before LLMs existed and they remain so even now. I have been writing extensive documentation for all my software for something like twenty years now, whether it was for software I wrote for myself, for my tiny open source projects or for businesses. I will obviously continue to do so and it has nothing to do with:
> AI changes the game
The reason is simply that tests and documentation are useful to humans working on the codebase. They help people understand the system and maintain it over time. If these practices also benefit LLMs then that is certainly a bonus, but these practices were valuable long before LLMs existed and they remain valuable even now regardless of how AI may have changed the game.
It is also a bit funny that these considerations did not seem very common when the beneficiaries were fellow human collaborators, but are now being portrayed as very important once LLMs are involved. I'd argue that fellow humans and your future self deserved these considerations even more in the first place. Still, if LLMs are what finally motivate people to write good documentation and good tests, I suppose that is a good outcome since humans will end up benefiting from it too.
> It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long standing best practices.
Maybe it's the speed of LLM iteration that makes the benefit more immediately obvious, vs seeing it unfold with a team of people over a longer time? It's almost like running a study?
I have a similar reaction to strong static types being advocated to help LLMs understanding/debugging code, catching bugs, refactoring... when it's obvious to me this helps humans as well.
Curious how "this practice helps LLMs be more productive" relates to studies that try to show this with human programmers, where running convincing human studies is really difficult. Besides problems with context sizes, are there best practices that help LLMs a lot but not humans?
AI means that you cannot defer software design until you've written half the code; you cannot defer documentation to random notes at the end.
It has the effect of finally forcing people to think about the software they're making, assuming they care about quality. If they didn't, then it's not practically different from an insecure low-code app or something copy-pasted from 15-year-old StackOverflow answers.
> The reason is simply that tests and documentation are useful to other humans working on the codebase.
Including future you
> It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long standing best practices.
About 95% of the work needed to make LLMs happy is just general-purpose better engineering. Unit tests? Integration tests? CI? API documentation? Good examples? All great for humans too!
I consider this largely a good thing. It would be much worse if the changes needed for Happy LLMs were completely different than what you want for Happy Humans! Even worse would be if they were mutually exclusive.
It's a win. I'll take it.
Well, it's timely because there's a docs platform that has surged in popularity, and it really is not a good idea for most of those who need technical docs to be using a SaaS that approximates Squarespace.
Lately I have seen a lot of things coming full circle like this in a way that always seems positive for humans as well.
Many doomers are running around saying the future is grim because everything will be made for AI agents to use rather than humans. But so far everything done to push that agenda has looked more like a big de-enshittification.
Another one is Model Context Protocol, which brings forth the cutting edge (for 1970) idea of using a standard text based interface so that separate programs can interoperate through it.
If the cost of having non-user-hostile software is to let AI bros run around thinking they invented things like stdin and documentation, I'm all for it at this point.
If any AI bros are reading this, here's another idea: web pages that use a mostly static layout and a simple structure would probably be a lot easier for AI to parse. And Google, it would be really beneficial to AI agents if their web searches weren't being interfered with by clickjacking sites such as Pinterest.
> These were all best practices before LLMs existed and they remain so even now
Okay, so what, should I be moving my docs out of the repo or something?
How should I make it as hard as possible for LLMs to make any use of or suggestions about my documentation?
There's a pattern where people create AI-specific infrastructure for coding agents which is essentially instantly obsolete because it's pointless. Stuff like most MCPs (instead of just using a CLI), agent-specific files (CLAUDE.md, AGENTS.md, GitHub Copilot instructions, etc.), and so on.
> You should have a good, concise introduction to the codebase that allows anyone to write and test a simple patch in under 15 minutes.
Yeah, that's the CONTRIBUTING file.
I agree and would go one step further. The way people are now talking to LLMs to write code is the way we need them to plan and discuss in meetings with humans.
Everything regarding AI-assisted development is basically training wheels for the young people coming into the workplace.
LLMs are making it easier to maintain.
We just did this the other week and it's such a great setup using AI. Monorepos in general are better for coding agents since it's a single location to search. But now we have the ability to say "Add xyz optional param to our API" and claude adds the code + updates the documentation. I was also able to quickly ask "look at our API and our docs, find anything out of date".
Our setup is:
packages/
↳ server
↳ app
↳ docs
Using Mintlify for the docs, which just points to the markdown files in the docs folder. And then a line in the claude.md to always check /docs for updates after adding new code.
Yes, it's awesome! I'm creating a lot of CLIs with Claude Code to interact with external services. Yesterday I made a CLI for Google Search Console so I can prompt "get all problems from indexing in Google Search Console and fix them".
Same with Sentry bugs. Same with customer support: "Use the customer support CLI skill to get recent conversations from customers, rank bug reports and feature requests, and suggest things to work on".
Sentry MCP is great, “find out top 10 issues by users affected, check what it would take to fix and if you think it’s a low risk fix, apply it. Open a PR that links to the issues and explain the issue and the fix in three sentences max”.
The one thing I hate about monorepos is nothing ever gets versioned, packaged, and shipped.
Polyrepos are workable; the way to do it is to actually version, ship, and document every subcomponent. When I say ship, I really mean ship, as in a .deb package or Python wheel with a version number, not a commit hash. AI can work with this as well, as long as it has access to the docs (which can also be AI-generated).
The best thing about monorepos is nothing ever gets versioned and packaged.
That means a subcomponent can just make a needed change in the supercomponent as well, then test and ship the subcomponent without excess ceremony and releases.
The monorepo makes it easier to ship the overall product but harder to ship parts of it.
I've used a monorepo for the past 13 years and all my shared packages are at version 0.0.0, and I still haven't figured out a simple way to share just parts of it, like a CLI. Does anyone have a monorepo and publish NPM packages with the source code of only that folder? Sub-gits seem required to pull it into multiple places...
I've got about 15 repos for a project and I just start Claude Code in the parent directory of all of them, so it has clear visibility into everything and can cross-reference whatever it needs... super handy.
That time was like 10 years ago. I think it’s been best practice to have docs in the repo for a long time.
GitHub Pages came out in 2008.
The best time to plant a tree was 30 years ago. The second best time is now.
It's a good saying but the literal meaning is not entirely correct anymore. Climate change has changed the math on tree planting in a few ways. For example tree planting in your area today may backfire vs 30 years ago: https://www.scientificamerican.com/article/forest-preservati...
The best time to move your docs to your repo was 30 years ago. But now that they are written by LLMs, tomorrow's LLM will be able to write an even better doc than today's LLM. Nothing is gained by caching them now.
1984 was a bit more than 10 years ago:
http://literateprogramming.com/
cf.
https://news.ycombinator.com/item?id=47300747
That's true. Take care, though, because the Hacker News guidelines say "Don't be snarky" and "Ask yourself how you could have provided the same useful insight without being snarky": https://news.ycombinator.com/newsguidelines.html
I don't see anything snarky about their comment. That rule is for cases where people are overly sarcastic and argumentative, not for comments like the above.
Python Sphinx as well.
The biggest win for me with docs-in-repo isn't the AI angle, it's that pull requests can't land without updating the relevant docs. When your support pages, privacy policy, and README all live in the same repo, they naturally stay in sync with the code.
GitHub Pages serving directly from a /docs folder makes it even simpler, no separate deploy, no separate CMS, no drift. The less infrastructure between writing and publishing, the more likely docs actually get maintained.
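One way to actually enforce that "PRs can't land without updating the relevant docs" property is a small CI gate. A minimal sketch in Python, assuming a `src/`-and-`docs/` layout (the path conventions are my assumption, not from the comment above):

```python
# docs_gate.py: fail CI when a change set touches code but no docs.
# In a CI step you would feed it the output of:
#   git diff --name-only origin/main...HEAD


def needs_docs_update(changed_paths: list[str]) -> bool:
    """True when the change set touches src/ but nothing under docs/."""
    touches_code = any(p.startswith("src/") for p in changed_paths)
    touches_docs = any(p.startswith("docs/") for p in changed_paths)
    return touches_code and not touches_docs


if __name__ == "__main__":
    import sys

    changed = [line.strip() for line in sys.stdin if line.strip()]
    if needs_docs_update(changed):
        sys.exit("code changed but docs/ untouched; update the docs")
```

A crude heuristic, of course: it can't tell whether the docs change is the *relevant* one, but it converts the convention into a hard constraint.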
There's a lot of things that we mean when we say 'docs'.
The great talk "No Vibes Allowed" pushed me to the far end of the other extreme: persistent long-term state on disk is bad. Always force agents to rebuild; aggressively use sub-agents or tools to compress context. The code should be self-documenting as much as possible and structured in a way that makes it easy to grep through. No inline docs trying to describe the structure of the tree (okay, maybe like 3 at most).
I don't have the time to build as elaborate a testing harness as they do, though. So instead I check in a markdown jungle in ROOT/docs/* and garbage-collect it aggressively. Most of these are not "look for where the code is"; they are plans of varying length, ADRs, bug reports, etc., and they all can and *will* get GC'ed.
I still use persistent docs but they're very spare and often completely contractual. "Yes, I can enumerate the exact 97 cases I need to support, and we are tracking each of these in a markdown doc". That is fine IMO. Not "here let me explain what this code does". Or even ADRs - I love ADRs, but at least for my use case, I've thrown out the project and rewritten from scratch when too many of them got cluttered up... Lol.
I'm also re-implementing an open source project (with the intent of genuinely making it better as a daily user, licensed under the same license, and not just clean rooming it), which makes markdown spam less appealing to me. I kind of wish there was yet another git wrapper like jujutsu which easily layered and kept commits unified on the same branch but had multi-level purposes like this. Persistent History for some things is not needed, but git as a wrapper for everything is so convenient. Maybe I just submodule the notes....
Note: my approach isn't the best; heck, a month ago OpenAI wrote an article on harness engineering where they had many parallel agents working, including some which aggressively garbage-collected. They garbage-collected in the sense that yes, prolific docs point agents to places XYZ, but if something goes out of date, sync the docs. Again, that works if you have a huge compute basin. But for my use cases, this approach is how I combat markdown spam.
One of the better ways to maintain docs I've seen is with tests that let you describe what the inputs and outputs are for an API, and from them the framework generates your docs (this was Spring REST Docs). We included aggressive checks so that every input and output was tested, which meant we had one truth about what fields existed: the code was aligned with the tests, and the tests were also the docs. I really liked this idea; just one record of the truth. Granted, it doesn't capture the intent of the code perfectly, but it solves a lot of the garbage collection.
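The tests-are-the-docs idea isn't Spring-specific. A toy Python sketch of the same pattern (the helper and its names are hypothetical): the docs generator only ever sees data a test actually exercised, so the generated markdown cannot describe fields the API doesn't return.

```python
# doc_snippets.py: toy analogue of the Spring REST Docs pattern.
# A test asserts on the real request/response pair, then hands exactly
# that pair to document(), which renders the markdown snippet.


def document(name: str, request: dict, response: dict) -> str:
    """Render one exercised API exchange as a markdown snippet."""
    lines = [
        f"### {name}",
        "",
        "| direction | field | value |",
        "| --- | --- | --- |",
    ]
    for field, value in request.items():
        lines.append(f"| request | {field} | {value} |")
    for field, value in response.items():
        lines.append(f"| response | {field} | {value} |")
    return "\n".join(lines)


# inside a test, after asserting on the real API response:
snippet = document(
    "create-user",
    request={"email": "a@example.com"},
    response={"id": 1, "email": "a@example.com"},
)
```

The build then writes these snippets into the docs tree, so a field rename that breaks the test also breaks the docs, in the same commit.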
ADR = "Architecture Decision Record" https://github.com/joelparkerhenderson/architecture-decision...
There’s an irresistible, almost demoralizing irony in the fact that developers are discovering docs and accessibility only now due to AI. They needed docs and didn’t know it until they had at their disposal an ersatz user in the form of an LLM that asked for context.
https://passo.uno/skills-are-docs/
Strongly agreed. However, some developers have trouble writing clearly and reading lots of text, and therefore prefer oral and interactive + real-time transmission of the information. Those developers, I suppose and hope, are discovering that they can talk out loud to their agents, explain everything interactively, and then the agent can create whatever longer-term artifact it wants to record the understanding. Multi-modal interfaces FTW?
Out-of-band docs have always been a constant source of frustration and discrepancies. It's really difficult to keep readme.com docs updated with actual code releases because there's no hard constraint preventing one from updating without the other. It just relies on "convention".
> difficult to keep [...] docs updated with actual code
I used my software and R Markdown documents to help address such problems. In the source code, you mark named snippets (e.g., with comment markers).
In the R Markdown you write an R function to parse all snippets, then refer to snippets by name. If the snippet can't be found, building the documentation fails, and noisily breaks a CI/CD pipeline. What's nice is that you can then use this to parse C++ definitions into Markdown tables to render nicely formatted content.
The general idea is that you can have "living" documentation reference source code and break on mismatch. Whether you use knitr/pandoc or python or KeenWrite/R Markdown[1] is an implementation detail.
[1]: https://keenwrite.com/
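A minimal sketch of this "break on mismatch" idea (not KeenWrite or knitr themselves; the marker syntax and function names are assumptions for illustration): source files tag named snippets with comment markers, the doc build extracts them by name, and an unknown name fails the build.

```python
# Living docs that reference source code and break on mismatch: extract
# named snippets from source, substitute them into a doc template, and
# abort noisily (failing CI) if a referenced snippet does not exist.
import re
import sys

SNIPPET_RE = re.compile(r"// snippet:(\w+)\n(.*?)// end-snippet", re.S)

def extract_snippets(source_text):
    """Map snippet name -> code body found in a source file."""
    return {name: body.strip() for name, body in SNIPPET_RE.findall(source_text)}

def render_doc(template, snippets):
    """Replace {{snippet:name}} placeholders; die noisily if one is missing."""
    def sub(match):
        name = match.group(1)
        if name not in snippets:
            sys.exit(f"doc build failed: unknown snippet '{name}'")
        return f"```cpp\n{snippets[name]}\n```"
    return re.sub(r"\{\{snippet:(\w+)\}\}", sub, template)

source = """\
// snippet:add
int add(int a, int b) { return a + b; }
// end-snippet
"""
doc = render_doc("Usage:\n{{snippet:add}}\n", extract_snippets(source))
print(doc)
```

Renaming or deleting the snippet in the source without updating the doc kills the pipeline, which is the hard constraint out-of-band docs lack.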
In the Elixir ecosystem (where documentation is considered a "first-class citizen" in the language), you can run code examples as part of your test suite in a similar fashion ("doctest"): https://elixir-recipes.github.io/testing/doctests/
We have been on this path at work. But I challenge everyone to consider what you lose with MD vs Confluence (et al.). It is NOT easier to author, comment on, label, view the history of, or move without breaking links with markdown docs vs Confluence. If I am the sole author plus my AI and the scope is narrow (a library), I go for MD. But for a big org, process docs, fast iteration… I’m not convinced, until someone builds equally powerful editing UI on top of MD files.
When I start a new project with a team I start off with asking 'how will we work?' and part of that is 'how will we communicate?'. Less is more in that world. Jira, Confluence, GitHub, Slack, email, standup, ad-hoc meetings, bongo drums, etc. The more places you communicate, the harder it is to keep everyone on the same page. I have always been a fan of putting docs next to code for this exact reason and, as far as I can tell, it has been the right decision every time.
With AI code assistants I personally spend 90% of time/tokens on design and understanding and that means creating docs that represent the feature and the changes needed to implement it so I can really see the value growing over time to this approach. Software engineering is evolving to be less about writing the code and more about designing the system and this is supporting that trend.
In the end I don't think AI has fundamentally changed the benefit/detractor equation; it is just emphasizing that docs are part of the code and making it more obvious that putting them in the code is generally pretty beneficial.
Bit of a plug I suppose, but this was what motivated me to set up AS Notes, my VS code extension which makes VS Code a personal knowledge management system, with linking and markdown tooling. I've built an html converter so they can be published to github pages from the repo. It's here if it's of interest to anyone https://www.appsoftware.com/blog/as-notes-turn-vs-code-into-... ... I'm so much more motivated to write docs when a) its easy to keep them up to date using an agent, and b) someone (agents) will actually read them!
More importantly move your docs from anything else to pure markdown. Finally we are free from weird file formats and superfluous syntax for docs.
Pure markdown is fine until you need decent tables or structured metadata. Docs-in-repo sounds clean on paper, but the minute you need comments, suggestions, inline edits, permissions, and approvals from people who do not live in git all day, you are recreating half of Notion or Google Docs with plugins and glue code.
Then you ask marketing or support to open a PR. That is usually where the markdown honeymoon ends.
ReST delivers most of what Markdown can't.
What about a OneDrive folder shared with all developers, mounted in a place the AI can access? Putting docs in git makes it slow to iterate and share. That's my hesitancy with committing them.
Not sure I agree with this. MD files need to be constantly synced to code state; why not just grep the code files? This is just more unstructured indexing.
yeah my teammates seem to enjoy checking in endless walls of MD text of "documentation" generated by LLMs after they're done adding a feature. So even if that's an extreme and your documentation is more thoughtful, there is still a problem of:
* redundancy with the code: if code samples can be generated from the code, why bother duplicating them? What do they add? Can they not be LLM-generated later, and possibly kept somewhere out of the way (like a website) so as not to clutter the codebase with redundancy?
* if you do go for this duplication, then you are on the hook for ensuring it's always up-to-date otherwise it becomes worse than duplicate: misleading
So my preference is, when adding something to the repo, think very hard whether this information is redundant or not. Handcrafted docs, notes, comments that add more context like why was this built that way after a ton of deliberation - yes. Anything that is trivially derived from the code itself - no.
I've been trying to push people to use hitchstory or similar to generate docs from specification tests precisely to avoid that redundancy but most people just look blankly at it and go "why don't you just do that with AI?"
Grepping works when you wrote the code. Not so much when someone else installs your package and has no idea which export is public API. We added a one-page markdown saying "use these, ignore the rest" and the wrong-import issues mostly stopped.
The code doesn't always say "why".
Wait, who didn't have the docs in the repo? Where else would it go?
Is the Git Book part of the git repo?
Is the Linux Documentation Project part of the kernel?
No. For good reasons. The only people who insist all docs must live in the same repo as the code are the ones who do not value documentation.
Note that in both examples above there is documentation in the main repo, but not all documentation lives there.
There are two main options, put your docs in the repo, or throw them all over the floor. Many companies opt for the floor.
Sounds like they are saying: use a repo like git for your documents to help AI read/"understand" your docs. Is that correct?
I am all for using a source control system for your documents, I usually use RCS. But give AI access to your docs, no thanks. If I upload any of my docs to a public server (very rarely happens), they are compressed and encrypted to make sure only I and a few people can view them.
For me it's a case of, I have to expose my canvas library documentation for the training data bots to find and (hopefully) include in the LLM training data because it's the only way I'll ever get LLMs to:
A) accept that my library exists, and has its uses (it's a tough world out there for canvas-focussed JS libraries that aren't Fabric.js, Konva.js or Pixi.js)
B) learn how to write code using my library in the best way possible (because the vibes ain't going away, so may as well teach the Agents how to do the work correctly)
Plus, writing the documentation[1] for a library I've been developing for over 10 years has turned into a useful brain-dumping activity to help justify all the decisions I've made along the way (such as my approach to the scene graph). I'm not going to be here forever, so might as well document as much as I can remember now.
[1] - https://scrawl-v8.rikweb.org.uk/docs/reference/index.html
> Just like code should be primarily written for humans to read, all files in a repository is written primarily for humans to review
The author at least acknowledges the point of files is to be read by humans.
Also the article is talking specifically about public docs meant to be used by others, not ones you’re specifically trying to keep private.
"because of AI" is not a valid reason to change anything about how developer communities & projects are managed.