6 comments

  • _pdp_ 12 minutes ago

    Well, the project is promising something without providing any details about how exactly it is achieved, which to me is always a huge red flag.

    Digging deeper, I can see it is pg_vector plus MCP with two functions: "recall" and "remember".

    In other words, it is effectively RAG.

    You can make the argument that perhaps the data structure matters, but all of these "memory" systems effectively do the same thing, and none of them has so far proven that retrieval is improved compared to baseline vector DB search.
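    To make the comparison concrete, here is a minimal sketch of the "remember"/"recall" pattern described above. All names are hypothetical, and the toy bag-of-words embedder stands in for a real embedding model; an actual system would store the vectors in pgvector and expose the two functions as MCP tools.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a stand-in for a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    """In-memory stand-in for a pgvector table with two operations."""

    def __init__(self) -> None:
        self.rows: list[tuple[str, Counter]] = []

    def remember(self, text: str) -> None:
        # "remember": embed the text and store it.
        self.rows.append((text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # "recall": rank stored rows by similarity to the query.
        qv = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(qv, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

m = Memory()
m.remember("the saas project uses postgres and fastapi")
m.remember("grocery list: eggs, milk")
print(m.recall("continue working on the saas project", k=1))
# → ['the saas project uses postgres and fastapi']
```

    Stripped of branding, this is exactly baseline vector search: the only design surface left is what gets written on "remember" and how results are ranked on "recall".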

  • great_psy an hour ago

    LLM memory (in general, any implementation) is good in theory.

    In practice, as it grows it gets just as messy as not having it.

    In the example you have on the front page you say "continue working on my project", but you're rarely working on just one project; you might want to have 5 or 10 in memory, each of which made sense to have at the time.

    So now you still have to say "continue working on the SaaS project". Sure, there's some context around the details, but you pay for it by filling up your LLM context and making extra MCP calls.

    • dennisy 39 minutes ago

      True! But this is a very naive implementation; a proper implementation could overcome these challenges.

    • vasco 9 minutes ago

      And once you're being specific about what it needs to remember, you are zero steps away from having just told the AI to write and read files as the "memory".

  • dennisy an hour ago

    Congratulations on the launch!

    There is a lot of competition in this space; how is your tool different?

  • alash3al 7 hours ago

    Platform memory is locked to one model and one company. Stash brings the same capability to any agent: local, cloud, or custom. MCP server, 28 tools, background consolidation, Apache 2.0.