14 comments

  • kanodiaayush an hour ago

    I tried it; I've been exploring a very similar but still different use case. I wonder if you have thoughts on how much of this is our own context management vs context management for the LLM. Ideally, I don't want to do any work for the LLM; it should be able to figure out from the chat which 'branch' of the tree I'm exploring, and then the artifact is purely for one's own use.

  • cootsnuck an hour ago

    Yea, this really needed to happen. Idk if this specific branching type of interface will stand the test of time, but I'm glad to see people finally venturing beyond the basic chat interface (which I think many of us forget was only ever meant to be a demo... yet it remains the default and dominant one).

  • boomskats 4 hours ago

    Ha! This looks really nice, and I'm right there with you on the context development UX being clunky to navigate.

    A couple of weeks ago I built something very very similar, only for Obsidian, using the Obsidian Canvas and OpenRouter as my baseline components. Works really nicely - handles image uploads, autolayout with dagre.js, system prompts, context export to flat files, etc. Think you've inspired me to actually publish the repo :)
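
    For the curious, the dagre.js autolayout piece is roughly this shape (a minimal sketch with made-up node names and sizes, not the plugin's actual code):

    ```ts
    import dagre from "dagre";

    // Hypothetical conversation nodes; width/height would come from the canvas.
    const nodes = [
      { id: "root", width: 260, height: 120 },
      { id: "branch-a", width: 260, height: 120 },
      { id: "branch-b", width: 260, height: 120 },
    ];
    const edges = [
      { from: "root", to: "branch-a" },
      { from: "root", to: "branch-b" },
    ];

    const g = new dagre.graphlib.Graph();
    g.setGraph({ rankdir: "TB", nodesep: 40, ranksep: 80 }); // top-to-bottom tree
    g.setDefaultEdgeLabel(() => ({}));

    for (const n of nodes) g.setNode(n.id, { width: n.width, height: n.height });
    for (const e of edges) g.setEdge(e.from, e.to);

    dagre.layout(g); // mutates g: each node label gains center x/y coordinates

    for (const n of nodes) {
      const { x, y } = g.node(n.id);
      console.log(`${n.id} -> (${x}, ${y})`); // positions to write back to the canvas
    }
    ```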

    • heliostatic 14 minutes ago

      Would love to see that; haven't found a great LLM interface for Obsidian yet.

    • jborland 4 hours ago

      That's great to hear! Best of luck with it, let me know how it goes.

      I definitely think there is a lot of work to do on context management UX. We use React Flow for our graph, and we manage the context and its tree structure ourselves, so it's completely model-agnostic. The same goes for our RAG system, so we can plug and play with any model! Is that similar for you?
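
      To make the model-agnostic bit concrete, the pattern is roughly this (a simplified sketch with hypothetical names, not our actual code):

      ```ts
      // Messages live in one neutral shape; a thin adapter converts them per
      // provider at call time, so the tree never depends on a vendor's API.
      interface Message { role: "user" | "assistant"; content: string; }

      interface ProviderAdapter {
        complete(context: Message[], prompt: string): Promise<string>;
      }

      // An OpenAI-style adapter as one example; Gemini/Claude/Grok adapters
      // would implement the same interface with their own request shapes.
      const openAiAdapter: ProviderAdapter = {
        async complete(context, prompt) {
          const res = await fetch("https://api.openai.com/v1/chat/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
            },
            body: JSON.stringify({
              model: "gpt-4o", // model name for illustration only
              messages: [...context, { role: "user", content: prompt }],
            }),
          });
          const json = await res.json();
          return json.choices[0].message.content;
        },
      };
      ```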

  • confusus 5 hours ago

    Really cool! I’d want something like this for Claude Code or other terminal-based tools. When working on code, I sometimes already interrupt and resume the same session in multiple terminals so I can explore different pathways at the same time without the parallel sessions polluting one another. Currently this is really clunky in Claude Code.

    Anyway, great project! Cheers.

    • jborland 5 hours ago

      Thanks! I totally agree; we want to add CLI agent integration! I often use Gemini CLI (as it's free), and it's so frustrating not being able to easily explore different tangents.

      Would you prefer a terminal, Claude Code-style integration, or would browser-based CLI integration work too?

      • captainkrtek 5 hours ago

        Imo I’d prefer terminal for this as well, i.e. if I could keep context specific to a branch, or even switch contexts within a branch.

        • jborland 5 hours ago

          Thanks for the feedback. We will add CLI integration soon!

          Could you please explain what you mean by "within branch" context switches?

          The way Twigg works is that you choose exactly which prompt/output pairs (we call them nodes) are sent to the model, and you can move nodes from one branch to another. For example, if you fix a bug in one branch, you can add the corrected solution as context to another branch by moving that node, while leaving behind the irrelevant context spent trying to fix the bug.

          This way you can specify exactly what context is in each branch.
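
          In pseudo-TypeScript, the idea looks something like this (illustrative names only, not Twigg's actual internals):

          ```ts
          // Illustrative only: a node is a prompt/output pair, moving it
          // between branches is a re-parent, and the context sent to the model
          // is an explicit selection of ancestors, not the whole history.
          interface Node {
            id: string;
            parentId: string | null; // null for the root
            prompt: string;
            output: string;
          }

          // Re-parent the bug-fix node so it now lives in the other branch.
          function moveNode(nodes: Map<string, Node>, nodeId: string, newParentId: string): void {
            const node = nodes.get(nodeId);
            if (!node || !nodes.has(newParentId)) throw new Error("unknown node");
            node.parentId = newParentId;
          }

          // Only enabled ancestors are included, so the failed attempts can be
          // left out while the corrected solution is kept.
          function branchContext(nodes: Map<string, Node>, leafId: string, enabled: Set<string>): Node[] {
            const path: Node[] = [];
            for (let cur = nodes.get(leafId); cur; cur = cur.parentId ? nodes.get(cur.parentId) : undefined) {
              if (enabled.has(cur.id)) path.unshift(cur);
            }
            return path;
          }
          ```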

  • djgrant 10 hours ago

    This is an interesting idea. Have you considered allowing different models for different chat nodes? My current, very primitive solution is to have AI Studio on one side of my screen and ChatGPT on the other, with me in the middle playing them off against each other.

    • jborland 10 hours ago

      Yes, you can switch models at any time for different chat nodes, so you can, for example, have different LLMs review each other's work. We currently support all the major models: ChatGPT, Gemini, Claude, and Grok. Hope this helps!
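
      As a toy sketch of the cross-review pattern (hypothetical names, not our internals):

      ```ts
      // Toy sketch: each node records which model produced it, so a child node
      // can point a different model at its parent's output for review.
      interface ChatNode {
        id: string;
        parentId: string | null;
        model: string; // e.g. the draft from one model, the review from another
        prompt: string;
        output: string;
      }

      // Build a review node whose prompt is the draft node's output; the
      // reviewer model fills in `output` when it responds.
      function reviewNode(draft: ChatNode, reviewerModel: string): ChatNode {
        return {
          id: crypto.randomUUID(), // available in modern Node/browsers
          parentId: draft.id,
          model: reviewerModel,
          prompt: `Review the following answer for mistakes:\n\n${draft.output}`,
          output: "",
        };
      }
      ```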

  • Edmond 3 hours ago

    We implemented a similar idea some time back, and it has proven quite useful: https://blog.codesolvent.com/2025/01/applying-forkjoin-model...

    In Solvent, the main utility is allowing forked-off use of the same session without context pollution.

    For instance, a coding assistant session can be used to generate a checklist as a fork, followed by the core task of writing code. This lets the human user see the related flows (checklist gen, requirements gen, coding, etc.) in chronological order without context pollution.
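
    A hedged sketch of the fork idea (not Solvent's actual implementation; names are made up):

    ```ts
    // A fork copies the parent's context at fork time and accumulates its own
    // messages; only an explicit joined summary flows back, so the fork's
    // back-and-forth never pollutes the parent session.
    interface Message { role: "user" | "assistant"; content: string; }

    class Session {
      constructor(public label: string, private context: Message[] = []) {}

      say(role: Message["role"], content: string): void {
        this.context.push({ role, content });
      }

      // Fork: the child starts from a snapshot of the parent's context.
      fork(label: string): Session {
        return new Session(label, [...this.context]);
      }

      // Join: fold only the fork's final result back into the parent.
      join(fork: Session, summary: string): void {
        this.context.push({ role: "assistant", content: `[${fork.label}] ${summary}` });
      }
    }

    // Usage mirroring the checklist example:
    const main = new Session("coding");
    const checklist = main.fork("checklist-gen");
    checklist.say("user", "Generate a checklist for this feature.");
    // ...the checklist back-and-forth stays inside the fork...
    main.join(checklist, "Checklist: 1) write tests 2) implement 3) review");
    // `main` now holds one summary line instead of the whole fork conversation.
    ```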

    • jborland 3 hours ago

      Great to hear others are thinking along similar lines!

      Context pollution is a serious problem; I love that you use that term as well.

      Have you had good feedback on your fork-off implementation?

      • Edmond 3 hours ago

        Feel to "borrow" the term "context pollution" :)

        Yes, it has proven quite a useful feature, primarily for the reason stated above: it allows users to get a full log of what's going on in the same session where the core task is taking place.

        We also use it extensively to facilitate back-and-forth conversation with the agents; for instance, a lot of our human-in-the-loop capabilities rely on the forking functionality. The scope of its utility has been frankly surprising :)