Reviewing the prompts, it looks like you are using this CAS tool as a global context data manager, supporting primarily a code use case. There are a number of extant MCP-capable code understanding tools (Serena and others), but what I am lacking in my CLI toolchain is non-code memory. You even called this out in another thread, mentioning task management. I find that the type of memory I need is not scoped to a code module but to an agent session, specifically to the orchestration of many agent sessions. What we have today are techniques using a bunch of hacked-together context files for sessions (tasks.md, changes.md), for agents (roles.md), for tech (architecture.md), and so on, hoping that our prompts guide the agent to use them. This is, IMO, a natural place for some abstraction over memory that can provide rigor.
I am observing in my professional (non-Claude Max) life that context is a real limiter, from both the "too much is confusing the agent" and "I'm hitting limits doing basic shit" perspectives (looking at you, Bedrock and GitHub). Having a tool that helps me give an agent only what it needs would be really valuable: I could do more with the tools, spend less time manually intervening, and spend less of my token budget.
While the examples and provided prompt lean toward code (since that's my personal use case), YAMS is fundamentally a generic content-addressed storage system.
I will attempt to run some small agents with custom prompts and report back.
How do you use this in your workflow? Please give some examples because it’s not clear to me what this is for.
I have been using it for task tracking, research, and code search. When using CLI tools, I found that the LLMs were able to find code in fewer tool calls when I stored my codebase in the tool. I had to wrangle the LLMs to use the tool versus native ripgrep or find.
I am also trying to stabilize PDF text extraction to improve knowledge retrieval when I want to revisit a paper I read but cannot remember which one it was. Most of these use cases come from my personal use and updates to the tool but I am trying to make it as general as possible.
This is an interesting approach! Why not offload PDF extraction to other frameworks that apply OCR (PDF -> .md)?
I may explore this when I finish the vector-DB implementation I started.
>block-level deduplication (saves 30-40% on typical codebases)
How is savings of 40% on a typical codebase possible with block-level deduplication? What kind of blocks are you talking about? Blocks as in the filesystem?
I am working to improve the CLI tools to make getting this information easier but I have stored the yam repo in yams with multiple snapshots and metadata tags and I am seeing about 32% storage savings.
Cool. I have no idea what "stored the yam repo in yams" means. What do you mean by "block-level deduplication"? What is a block?
I stored the codebase for yams in the tool. The "blocks" are content-defined blocks/chunks, not filesystem blocks. They're variable-size chunks (typically 4-64KB) created using Rabin fingerprinting to find natural content boundaries. This enables deduplication across files that share similar content.
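The chunking idea described above can be sketched generically. This is a minimal illustration, not yams' actual implementation: it uses a toy shift-and-add rolling hash rather than a true Rabin fingerprint, and the boundary mask and size limits are assumed for the example.

```python
# Content-defined chunking sketch: a rolling hash over recent bytes decides
# chunk boundaries, so shared runs of content tend to produce identical
# chunks even when surrounding bytes shift between file versions.

MASK = (1 << 13) - 1   # cut when the low 13 bits are zero (~8 KB average)
MIN_CHUNK = 4 * 1024   # never cut before 4 KB
MAX_CHUNK = 64 * 1024  # always cut by 64 KB

def chunks(data: bytes):
    start = 0
    h = 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF  # toy rolling hash (not Rabin)
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]  # final partial chunk
```

Because boundaries depend only on local content rather than byte offsets, inserting a few bytes early in a file perturbs only the nearby chunks; the rest re-align and deduplicate, which is where the savings across snapshots come from.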
Thank you for sharing this. Sorry for a possible noob question: how are embeddings generated? Does it use a hosted embedding model? (I was trying to understand how semantic search is implemented.)
It, uh... generates mock embeddings? https://github.com/trvon/yams/blob/c89798d6d2de89caacdbe50d2...
(seems like there's some vague future plans for models like all-MiniLM-L6-v2, all-mpnet-base-v2)
Hmm, I wonder how much that affects the compression benefits of block-level deduplication. The mock embeddings choose vector elements from a normal distribution, so the data is far from uniform.
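The intuition here is easy to demonstrate: random-looking float vectors are close to incompressible, while repetitive source text compresses well. A quick generic illustration (not yams' storage path; the vector length of 384 is assumed, roughly matching all-MiniLM-L6-v2):

```python
import random
import struct
import zlib

random.seed(0)
# 384 floats drawn from a normal distribution, like a mock embedding vector
vec = [random.gauss(0.0, 1.0) for _ in range(384)]
raw = struct.pack("384f", *vec)

# Repetitive source-like text of the same size, for comparison
text = b"def hello():\n    return 'world'\n" * 48

print(len(zlib.compress(raw)) / len(raw))    # near 1.0: random bits barely compress
print(len(zlib.compress(text)) / len(text))  # well under 1.0: redundancy compresses
```

So mock embeddings mostly add dead weight to the store; the chunk-level dedup savings would come from the document content itself, not the vectors.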
I also developed yet another memory system !
https://github.com/jerpint/context-llemur
Although I developed it explicitly without search, catering it to the latest agents, which are all really good at searching and reading files. Instead, you and the LLMs organize your context to be easily searchable (folders and files). It's meant for dev workflows (i.e. a project's context, a user context).
I made a video showing how easy it is to pull in context to whatever IDE/desktop app/CLI tool you use
https://m.youtube.com/watch?v=DgqlUpnC3uw
That sounds like a practical take on LLM memory — especially the block-level deduplication part.
Most “memory” layers I’ve seen for AI are either overly complex or end up ballooning storage costs over time, so a content-addressed approach makes a lot of sense.
Also curious — have you benchmarked retrieval speed compared to more traditional vector DB setups? That could be a big selling point for devs running local research workflows.
I have not, but that is something I plan to do when I have time.
How would you use the built in functionality to enable graph functionality? Metadata or another document used as the link or collection of links?
Graph-like functionality is surfaced through retrieval. I may improve this later, but the idea was to maximize result quality when looking for stored data.
There is no built in graph functionality correct? But one could use existing mechanisms like metadata or storing the link between documents as a document itself?
In my RAG I use qdrant w/ Redis. Very successfully. I don't really see the use of "another memory system for LLM", perhaps I'm missing something.
I like it and I will be perusing your code for what could be used in my 'not yet working' variant.
Not trying to be a hater, but how is 100 MB/s high performance in 2025? That's as performant as a 20-year-old HDD.
The system is honestly tuned for storage efficiency, not speed, but these configurations are tunable and you can use the benchmarks as a reference for tuning. https://github.com/trvon/yams/blob/main/docs/benchmarks/perf...
I'm puzzled - where are the header files?
You mean these? https://github.com/trvon/yams/tree/main/include/yams
Thanks, I learned a lot from this.
How does this compare to Letta?
What about versioning of files?
The tool has built-in versioning. Each file gets a unique SHA-256 hash on storage (automatic versioning), you can update metadata to track version info, and use collections/snapshots to group versions together. I have been using the metadata to track progress and link code snippets.
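The versioning scheme described above can be sketched generically; this is plain content addressing, not yams' actual API, and the names `put`, `store`, and `versions` are made up for the illustration:

```python
import hashlib

store: dict[str, bytes] = {}          # SHA-256 hash -> content (stored once)
versions: dict[str, list[str]] = {}   # path -> hashes, oldest first

def put(path: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    store[digest] = content            # identical content never duplicated
    history = versions.setdefault(path, [])
    if not history or history[-1] != digest:
        history.append(digest)         # new version only when content changed
    return digest

put("notes.md", b"v1")
put("notes.md", b"v2")
put("notes.md", b"v2")                 # no-op: same hash as current version
assert len(versions["notes.md"]) == 2
```

Because the hash is derived from the content, re-storing an unchanged file is free, and any version can be retrieved by its digest alone.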
Wicked cool. Useful for single users. Any plans to build support for multiple users? Would be useful for an LLM project that requires per user sandboxing.
The domain listed on the GitHub repo redirects too many times.
That should be fixed now. It was a misconfiguration of CloudFlare SSL with GitHub Pages.
>MCP server (requires Boost)
I see stuff like this, and I really have to wonder if people just write software with bloat for the sake of using a particular library.
Blame the committee for refusing to include basic functionality like regular expressions, networking, and threads as part of the standard library.
I feel like there are pretty standard C++ server implementations that are less bloated.
There might be, but as of a few years ago they were not mature and may not have captured the mindshare yet. The company I worked for actually used websocketpp because the Boost ASIO implementation had some bug they couldn't work around, but then it was fixed and we dropped websocketpp.
I can say one of the nice things about Boost's network implementation (ASIO) is that it's a fairly mature asynchronous framework supporting a variety of techniques. Also, if you need HTTP or WebSockets, you can use Beast, which is built on top of ASIO.
And if you're already using one thing from Boost, it's easy to just use everything else you need that Boost provides, to minimize dependencies.
? Are you complaining about MCP or boost?
It’s an optional component.
What do you want the OP to do?
MCP may not be strictly necessary but it’s straight in line with the intent of the library.
Are you going to take shots at llama.cpp for having an http server and a template library next?
Come on. This uses conan, it has a decent cmake file. The code is ok.
This is pretty good work. Don't be a dick. (Yeah, I'll eat the downvotes; it deserves to be said.)
The reason for depending on Boost in this repo is just a few search keystrokes away: he needs an HTTP/WebSocket implementation, and Boost.Beast provides it. The actual bloat in this repo is Conan.
My experience with Boost has been template metaprogramming hell.
To its credit, though, it follows the C++ "philosophy" fairly faithfully. If you don't like Boost, you probably don't like C++ either.
That download is a monster, though; I think it's like 1.6 GB even compressed. It's not modular at all: some of the modules depend on others, and it's impossible to separate them out (they've tried in the past).
But last I checked, there is a lot they could have removed, especially support for older compilers like MSVC 200x (!), pre-C++11 / older GNU compilers, etc., without compromising functionality. I'm not sure if they ever got around to doing that.
This feels like a shallow dismissal, which is frowned upon per the HN guidelines
Boost is a nearly 30-year-old open-source library that provides stuff for C++ that most standard libraries for other languages already have out of the box. You seem to think it's hipster bullshit rather than almost a dinosaur itself.