40 comments

  • wyattjoh 17 hours ago

    It would be fantastic if this supported email and calendar providers that weren't Google. Supporting protocols like IMAP or JMAP alongside CalDAV would be a great step, and support for open source note-taking apps like Hyprnote would be neat as well.

    • danpalmer 11 hours ago

      Does anyone use JMAP? I know it came out of Fastmail, but I've not seen anything beyond a few hobby projects integrate with it. Even Fastmail's own clients are just web wrappers, not JMAP clients.

    • ramnique 17 hours ago

      Agreed 100%, and we'll slot these into our roadmap. We started out with Google because it was the fastest. Will definitely look into Hyprnote integration as well.

    • asciii 15 hours ago

      I second this, as a big Fastmail user

  • cloudking 6 hours ago

    The knowledge graph is well done. I think what's missing from all coworking apps is the UX.

    Prompting is a very specialized skill, average users just don't know what to ask for to get the most out of the LLMs.

    Ideally the UX should organize and surface information to the user that is important automatically, without needing to be prompted.

    • segmenta 5 hours ago

      Thanks, completely agree. UX is probably the hardest part here. Prompting should not be a prerequisite for getting value. We have been thinking about making the system more proactive, for example surfacing relevant notes ahead of meetings or highlighting changes that need attention. Would love to hear how you think this should ideally work.

  • mchusma 18 hours ago

    This is cool! A couple of pieces of feedback, as I am looking for something in this family of things but haven't found the perfect fit:

    1. I have multiple inboxes, and want to have them work on multiple.

    2. I would really like to have skills and MCPs visible and understandable. Craft Agents does a nice job of segmenting by workspace and making skills and MCPs all visible, so I can understand what exactly my agent is set up to do (no black boxes).

    3. I want scheduled runs. I don't need push, I actually kind of prefer the reliability of scheduled, but push would be fine too. In particular, I want to:
       a. After each Granola meeting, save the notes in Obsidian (I did this in Craft Code for example, but I prefer your more built-in approach here, this is nice).
       b. On intervals, check my emails. I want to give it information on who/what is important to me, and have it ping me. E.g. billing on Anthropic failed, ping me.
       c. Email back and forth to schedule approved categories of things on request. Just get it on my calendar (share Calendly, send times, etc).
       d. Archive junk etc.
       e. For important things, update my knowledge graph (ignore spam, etc).

    4. Tying into a to-do list that actually updates based on priorities, and suggests auto-archiving things etc., would be good.

    In practice, I connected Gmail and asked it: "can you archive emails that have an unsubscribe link in them (that are not currently archived)?" and it got stuck on "I'll check what MCP tools are available for email operations first." But I connected Gmail through your interface, and I don't see anything in settings about it also having configured the MCP?

    I also looked at the knowledge graph and it had 20 entities, NONE of which I had any idea what they were. I'm guessing it's just putting people trying to spam me into the contacts? It didn't finish running, but I didn't want to burn endless tokens trying to see if it would find actual people I care about, so I shut it down. One "proxy" for "people I care about" might be "people I send emails to"? I could see how this is a hard problem.

    I also think that regardless, I want things to be more transparent. So for the moment, I'm sticking with Craft Code for this even though it is missing some major things, but at least it's clearer what it is: it's Claude Code with a nice UI.

    Hope this was helpful. I know there are multiple people working on things in this family, and it will probably be "largely solved" by the end of 2026, and then we will want it to do the next thing! Good luck, I will watch for updates, and these are some nice ideas!

    • segmenta 17 hours ago

      Really appreciate the detailed feedback. A bunch of the great features you're pointing out are on our roadmap (we'll add what's missing). The agent can set up tasks on a schedule and help manage them. You can try a prompt like 'Can you schedule a background task xyz to run every morning ...'. The background tasks show up in the UI once they are scheduled by the assistant. However, you might have to connect the necessary MCP tools for your use case.

      On Gmail actions - we currently don’t take write actions on inboxes like archiving or categorizing emails. The Google connection is read-only and is used purely to build the knowledge graph. We’re working on adding write actions, but we’re being careful about how we implement them. That’s probably also why the agent was confused and went looking for an MCP to accomplish the job.

      On noise in the knowledge graph — this is something we’re actively tuning. We currently have different note-strictness levels, auto-inferred based on inbox volume (configurable in ~/.rowboat/config/note-creation.json), that control what qualifies as a new node. Higher strictness prevents most emails from creating new entities and instead only updates existing ones. That said, this needs to be surfaced in the product and better calibrated. Using “people I send emails to” as a proxy for importance is a really good idea.
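
      For illustration, a config of that shape might look something like this (the field names below are made up for the example; the actual schema may differ):

      ```json
      {
        "_comment": "illustrative sketch only, not the shipped schema",
        "strictness": "high",
        "min_messages_for_new_person": 3,
        "create_entities_from": ["sent_mail", "calendar", "transcripts"],
        "ignore_senders_matching": ["*no-reply*", "*newsletter*"]
      }
      ```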

    • rush86999 10 hours ago

      I'm really working towards getting something similar to work. Lots of bug fixing for now. Any help is appreciated if you're interested.

  • iugtmkbdfil834 12 hours ago

    I think this is a good example of what a good landing page can do. I can immediately tell what it does, and the visualization makes me want to try it. And I don't think it is particularly refreshing or anything; it just seems cool.

    • segmenta 9 hours ago

      Really appreciate that. Glad the visualizations made sense.

  • rukuu001 5 hours ago

    This is a product that just makes sense to me - well done on picking a great problem to solve and communicating it so well.

    What are the plans for monetization?

    • segmenta 4 hours ago

      Thanks for the kind words. We plan to offer an account-based option for users that want zero setup, with managed integrations and a choice of LLMs.

  • nkmnz 19 hours ago

    How does this differ from https://github.com/getzep/graphiti ?

    • segmenta 18 hours ago

      Graphiti is primarily focused on extracting and organizing structured facts into a knowledge graph. Rowboat is more focused on day-to-day work. We organize the graph around people, projects, organizations, and topics.

      One design choice we made was to make each node human-readable and editable. For example, a project note contains a clear summary of its current state derived from conversations and tasks across tools like Gmail or Granola. It’s stored as plain Markdown with Obsidian-style backlinks so the user can read, understand, and edit it directly.
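
      For example, a made-up project note might look roughly like this (the project, names, and fields below are purely illustrative, not the exact on-disk format):

      ```markdown
      # Project: Apollo Launch (illustrative example)

      Status: waiting on legal review of the vendor contract, per the
      2024-05-02 email thread with [[Sarah Chen]] at [[Acme Corp]].

      Next steps:
      - [[Sarah Chen]] to send the revised SOW
      - Review pricing before the [[Weekly Sync]] on Friday
      ```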

  • alansaber 19 hours ago

    Big fan of the idea. 1: is the context graph tweakable in any way? 2: how does the user handle/approve background tasks? Otherwise cool, and good job!

    • segmenta 19 hours ago

      Thanks!

      All the knowledge is stored in Markdown files on disk. You can edit them through the Rowboat UI (including the backlinks) or in any editor of your choice. You can use the built-in AI to edit them as well.

      On background tasks - there is an assistant skill that lets it schedule and manage background tasks. For now, background tasks cannot execute shell commands on the system. They can execute built-in file handling tools and MCP tools if connected. We are adding an approval system for background tasks as well.

      There are three types of schedules: (a) cron, (b) schedule within a window (run at most once every morning between 8 and 10am), and (c) run once at a given time. There is also a manual enable/disable (kill switch) in the UI.
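
      For anyone curious how those schedule types could be modeled, here is a minimal sketch (purely illustrative, not the actual implementation):

      ```python
      # Illustrative sketch of the three schedule types: cron, windowed
      # at-most-once, and run-once. Not Rowboat's actual scheduler code.
      from dataclasses import dataclass
      from datetime import datetime, date, time
      from typing import Optional

      @dataclass
      class WindowSchedule:
          start: time                           # e.g. time(8, 0)
          end: time                             # e.g. time(10, 0)
          last_run_day: Optional[date] = None   # enforces at-most-once per day

          def should_run(self, now: datetime) -> bool:
              in_window = self.start <= now.time() <= self.end
              return in_window and self.last_run_day != now.date()

          def mark_ran(self, now: datetime) -> None:
              self.last_run_day = now.date()

      @dataclass
      class RunOnceSchedule:
          run_at: datetime
          done: bool = False

          def should_run(self, now: datetime) -> bool:
              return not self.done and now >= self.run_at

      # A cron schedule would typically delegate to a library such as croniter
      # to compute fire times from an expression like "0 8 * * *".
      ```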

  • haolez 19 hours ago

    Cool idea. I use Logseq with some custom scripts and plugins for that. It works very well with the capabilities of today's models.

    • segmenta 18 hours ago

      Thanks. Obsidian and Logseq were definitely an inspiration while building this. What we’re trying to explore is pushing that a bit further. Instead of manually curating the graph and then querying it, the system continuously updates the graph as work happens and lets the agent operate directly on that structure.

      Would love to know what kind of scripts or plugins you’re using in Logseq, and what you’re primarily using it for.

      • haolez 18 hours ago

        My point was to say that your idea should work because today's models are capable enough.

        If I get some time later today, I'll post my scripts.

        • rukuu001 5 hours ago

          Also interested to hear about your Logseq scripts and the plugins you use.

  • btbuildem 18 hours ago

    How do you manage scope creep (ie, context size), and contradictory information in the context?

    • segmenta 18 hours ago

      Good question. We don’t pass the entire graph into the model. The graph acts as an index over structured notes. The assistant retrieves only the relevant notes by following the graph. That keeps context size bounded and avoids dumping raw history into the model.
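
      To picture what following the graph could look like, here is a rough sketch (illustrative only, with a made-up file layout):

      ```python
      # Illustrative sketch: treat the graph as an index by starting from one
      # note and pulling in only the notes it links to, rather than passing
      # the whole graph to the model. Not the actual retrieval code.
      import re
      from pathlib import Path

      LINK_RE = re.compile(r"\[\[([^\]]+)\]\]")  # Obsidian-style [[backlinks]]

      def retrieve_context(notes_dir: Path, start: str, max_notes: int = 10) -> list[str]:
          seen, queue, context = {start}, [start], []
          while queue and len(context) < max_notes:
              name = queue.pop(0)
              note = (notes_dir / f"{name}.md").read_text()
              context.append(note)
              for linked in LINK_RE.findall(note):  # follow graph edges
                  if linked not in seen:
                      seen.add(linked)
                      queue.append(linked)
          return context  # bounded set of notes handed to the model
      ```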

      For contradictory or stale information, since these are based on emails and conversations, we use the timestamp of the conversation to determine the latest information when updating the corresponding note. The agent operates on that current state.

      That said, handling contradictions more explicitly is something we’re thinking about. For example, flagging conflicting updates for the user to manually review and resolve. Appreciate you raising it.

      • delichon 15 hours ago

        > That said, handling contradictions more explicitly is something we’re thinking about.

        That's a great idea. The inconsistencies in a given graph are just where attention is needed. Like an internal semantic diff. If you aim it at values it becomes a hypocrisy or moral complexity detector.

        • segmenta 14 hours ago

          Interesting framing! We’ve mostly been thinking of inconsistencies as signals that something was missed by the system, but treating them as attention points makes sense and could actually help build trust.

          • iugtmkbdfil834 12 hours ago

            This was something that I was working on for a personal solution ( flagging various contradictory threads ). I suspect it is a common use case.

            • segmenta 8 hours ago

              That’s interesting. Would be curious to know what types of contradictions you were looking at and how you approached flagging them.

              • iugtmkbdfil834 an hour ago

                As a corporate drone, keeping track of various internal contradictions in emails is the name of the game ( one that my former boss mastered, but in a very manual way ). In a very boring way, he was able to say: today you are saying X, on date Y you actually said Z.

                His manual approach won't work if applied directly ( or more specifically, it will, but it would be unnecessarily labor intensive, and on a big enough set prohibitively so ), because it would require constantly filtering and re-evaluating all emails. It can still be done, though.

                As for the exact approach, it's a slightly longer answer, because it is a mix of small things.

                I try to track which LLM excels at which task ( and assign tasks based on those tracking scores ). It may seem irrelevant at first, but small things like a 'can it handle structured JSON' rubric will make a difference.

                Then we get to the personas that process the request, and those may make a difference in a corporate environment. Again, as silly as it sounds, you effectively want a Dwight and a Jim ( yes, it is an Office reference ) looking at those ( more if you have a use case that requires more complex lens crafting ), as they will both be looking for different things. Jim and Dwight each add their comments noting the sender, what they seem to be trying to do, and any issues they noted.

                The notes from Jim and Dwight for a given message are then passed to a third persona, which attempts to reconcile them, noting discrepancies between Jim and Dwight and checking against other similar notes.

                ...and so it goes.
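
                Purely as a sketch of that flow ( the persona prompts and the call_llm stub below are made up, not my actual code ):

                ```python
                # Two personas annotate each message, a third reconciles them and
                # flags contradictions. Illustrative only.
                def call_llm(prompt: str) -> str:
                    raise NotImplementedError  # plug in your model client here

                PERSONAS = {
                    "Jim": "You are informal and skeptical. Note the sender, their "
                           "apparent intent, and anything that seems off.",
                    "Dwight": "You are literal and rule-bound. Note the sender, their "
                              "apparent intent, and any consistency issues.",
                }

                def review_message(message: str) -> str:
                    notes = {name: call_llm(f"{persona}\n\nEmail:\n{message}")
                             for name, persona in PERSONAS.items()}
                    reconcile = (
                        "Reconcile these reviews. Flag contradictions with earlier "
                        "statements by the same sender and disagreements between reviewers.\n\n"
                        + "\n\n".join(f"{n}:\n{note}" for n, note in notes.items())
                    )
                    return call_llm(reconcile)
                ```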

                As for the flagging itself, that is a huge topic just by itself. That said, at least in its current iteration, I am not trying to do anything fancy. Right now it is almost literally: if you see something contradictory ( X said Y then, X says Z now ), show it in a summary. It doesn't solve for multiple email accounts, personas, or anything like that.

                Anyway, hope it helps.

                • segmenta 23 minutes ago

                  This was a really interesting read. Thanks for the detailed breakdown and the office references. The multi-persona approach is interesting, almost like a mixture of experts. The corporate email contradiction use case is not something we had in mind, but I can see how flagging those inconsistencies could be valuable!

  • delichon 16 hours ago

    How do you handle entity clustering/deduplication?

    • segmenta 16 hours ago

      We use a two-layer approach.

      The raw sync layer (Gmail, calendar, transcripts, etc.) is idempotent and file-based. Each thread, event, or transcript is stored as its own Markdown file keyed by the source ID, and we track sync state to avoid re-ingesting the same item. That layer is append-only and not deduplicated.

      Entity consolidation happens in a separate graph-building step. An LLM processes batches of those raw files along with an index of existing entities (people, orgs, projects and their aliases). Instead of relying on string matching, the model decides whether a mention like “Sarah” maps to an existing “Sarah Chen” node or represents a new entity, and then either updates the existing note or creates a new one.
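
      To make the shape of that consolidation step concrete, here is a rough sketch (illustrative only; the function names and prompt wording are made up):

      ```python
      # Sketch of the entity-consolidation step: the model sees an index of
      # existing entities (names + aliases) plus a batch of raw files and
      # decides, per mention, whether to update an existing node or create one.
      import json

      def call_llm(prompt: str) -> str:
          raise NotImplementedError  # stub: swap in an LLM client of your choice

      def consolidate(batch_files: list[str], entity_index: dict[str, list[str]]) -> list[dict]:
          prompt = (
              "Existing entities and their aliases:\n"
              + json.dumps(entity_index, indent=2)
              + "\n\nNew raw items:\n"
              + "\n---\n".join(batch_files)
              + "\n\nFor each person/org/project mentioned, return a JSON list of "
                "objects with fields: mention, action ('update' or 'create'), target_entity."
          )
          # e.g. {"mention": "Sarah", "action": "update", "target_entity": "Sarah Chen"}
          return json.loads(call_llm(prompt))
      ```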

      • delichon 16 hours ago

        > the model decides whether a mention like “Sarah” maps to an existing “Sarah Chen” node or represents a new entity, and then either updates the existing note or creates a new one.

        Thanks! How much context does the model get for the consolidation step? Just the immediate file? Related files? The existing knowledge graph? If the graph, does it need to be multi-pass?

        • segmenta 16 hours ago

          The graph building agent processes the raw files (like emails) in a batch. It gets two things: a lightweight index of the entire knowledge graph, and the raw source files for the current batch being processed.

          Before each batch, we rebuild an index of all existing entities (people, orgs, projects, topics) including aliases and key metadata. That index plus the batch’s raw content goes into the prompt. The agent also has tool access to read full notes or search for entity mentions in existing knowledge if it needs more detail than what’s in the index.

          It’s effectively multi-pass: we process in batches and rebuild the index between batches, so later batches see entities created earlier. That keeps context manageable while still letting the graph converge over time.
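
          In code terms, the loop is roughly this shape (illustrative sketch only; the helper names are made up):

          ```python
          # Multi-pass sketch: process raw files in batches and rebuild the
          # entity index between batches, so later batches see entities
          # created by earlier ones. Not the actual implementation.

          def rebuild_entity_index() -> dict[str, list[str]]:
              """Stub: scan existing notes and return {entity_name: [aliases]}."""
              return {}

          def apply_decisions(decisions: list[dict]) -> None:
              """Stub: update existing Markdown notes or create new ones."""

          def build_graph(raw_files: list[str], batch_size: int = 20) -> None:
              for i in range(0, len(raw_files), batch_size):
                  batch = raw_files[i:i + batch_size]
                  index = rebuild_entity_index()         # includes entities made so far
                  decisions = consolidate(batch, index)  # LLM mapping step (see sketch above)
                  apply_decisions(decisions)
          ```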

  • einpoklum 15 hours ago

    > We’d love to hear your thoughts

    Google Mail should not be used, nor its use encouraged. Nor should you encourage the use of LLMs of large corporations which suck in user data for mining, analysis, and surveillance purposes.

    I would also be worried about energy use, and would not trust an "agent" to have shell access; that sounds rather unsafe.

  • rezmoss 19 hours ago

    This makes a lot of sense. "Work memory" feels like what agents have been missing.

    • segmenta 19 hours ago

      Thanks! Agent capabilities are getting commoditized fast. The differentiator is context. If you had a human assistant, you'd want them sitting in on all your meetings and reading your emails before they could actually be useful. That's what we're trying to build.

  • Curiositiy 16 hours ago

    Fucking hate software dorks turning simple web searches into a polluted, unrelated results list, thanks to their stupid, unimaginative & completely unrelated one-word "product" names.

    • delichon 15 hours ago

      Dear software dorks turning raw text searches into meaningful, relevant linked data: rock on and thank you for your service.