5 comments

  • caleb_perez 12 hours ago

    If anyone wants it, I can send you the whole loop diagram or the memory layout showing how the Memory Fusion Engine stays truly persistent.

  • bradydward 10 hours ago

    We've been running something conceptually similar but with a deliberately simpler memory model - flat markdown files (MEMORY.md for curated long-term recall, daily logs for raw session notes) instead of a graph structure.

    The tradeoff we found: graph-based memory is more queryable but adds architectural complexity that breaks when the runtime crashes or the agent needs to be inspected by a human. Flat files are readable, git-diffable, and survive catastrophic failures better.

    The loop you describe (reconstruct > reason > decide > execute > record) matches almost exactly what we landed on. The part that's still unsolved for us is "update memory" - specifically, who decides what's worth keeping long-term vs discarding. Right now it's the agent's judgment call, which works until it isn't.
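    For anyone following along, here's a stripped-down sketch of that loop over our flat-file model (MEMORY.md plus a daily log). The reason/decide/execute stubs are placeholders for whatever the agent actually does; all names are illustrative, not our real code:

```python
# Sketch of the reconstruct > reason > decide > execute > record loop
# over flat markdown memory: MEMORY.md for curated long-term recall,
# a per-day log for raw session notes. reason/decide/execute are stubs.
from datetime import date
from pathlib import Path

MEMORY = Path("MEMORY.md")               # curated long-term recall
DAILY = Path(f"log-{date.today()}.md")   # raw session notes

def reconstruct() -> str:
    # Rebuild working context from whichever memory files exist.
    return "\n\n".join(p.read_text() for p in (MEMORY, DAILY) if p.exists())

def reason(context: str, task: str) -> str:
    return f"plan for {task!r} given {len(context)} chars of memory"

def decide(plan: str) -> str:
    return f"do: {plan}"

def execute(action: str) -> str:
    return f"done ({action})"

def run_once(task: str) -> str:
    context = reconstruct()           # reconstruct
    plan = reason(context, task)      # reason
    action = decide(plan)             # decide
    result = execute(action)          # execute
    with DAILY.open("a") as f:        # record: raw notes go to the daily
        f.write(f"- {task}: {result}\n")  # log; promotion into MEMORY.md
    return result                     # is the open curation question
```

    The "record" step is where our unsolved problem lives: everything lands in the daily log, but deciding what gets promoted into MEMORY.md is still the agent's judgment call.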

    Curious what your Memory Fusion Engine does differently at the curation layer - is it content-based similarity, recency weighting, or something else?

    • caleb_perez 8 hours ago

      Yeah, I chased that same problem for a while too. What cracked it open for me was using a different ruler entirely: it stopped being "what's the closest memory?" and became "what still matters, what's about to matter, why, and what can change those priorities?"

      That’s honestly where it stopped being retrieval and started feeling more like actual memory. So far it hasn’t really left me hanging in any work scenarios, even in chaos or multi-process situations, because it’s constantly updating continuity and direction as things change.

      Then it kind of just naturally gave me a way of handling decay / consolidation once the history gets long. At one point I had several thousand revolving memories, and I haven't found any weaknesses yet.
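      A toy illustration of the idea (not the actual engine; every weight and field name here is made up for the example): instead of nearest-neighbor retrieval, score each memory by current relevance, predicted near-future relevance, and an age decay, then recall by priority:

```python
# Toy priority-based recall: rank memories by "what still matters" and
# "what's about to matter" rather than by embedding distance. Weights,
# half-life, and field names are invented for illustration only.
import math
import time

def priority(mem: dict, now: float, half_life: float = 86400.0) -> float:
    # Exponential age decay: a memory untouched for one half-life
    # contributes half as much recency credit.
    decay = math.exp(-math.log(2) * (now - mem["last_used"]) / half_life)
    return (0.5 * mem["relevance"]     # what still matters
            + 0.3 * mem["predicted"]   # what's about to matter
            + 0.2 * decay)             # recency as a tiebreaker

def recall(memories: list[dict], now: float, k: int = 3) -> list[dict]:
    # Highest-priority memories first; low scorers fade out naturally,
    # which is where decay/consolidation falls out of the same score.
    return sorted(memories, key=lambda m: priority(m, now), reverse=True)[:k]
```

      The nice side effect is that consolidation isn't a separate pass: memories whose score stays low just stop being recalled.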
