My experience building agents is that the "main loop" of the framework is really not the hard part, and too much time gets devoted to framework picking. It reminds me a lot of the early web application days: things feel at the level of PHP and WordPress in their attempt to simplify, when in reality we still need the low-level stuff a lot of the time and the framework gets in the way.
Generally agree. We're targeting teams who need to make agents accessible to both their developers and non-developers in one platform. There's not really a way to do that, as far as I know, in any other framework. That said, I do find with multi-agent systems, having good abstraction layers makes things like observability, tracing, etc. cleaner. When the LLMs are driving the execution, normalizing on how the LLMs interact with each other can simplify the stack.
> true 2-way sync between code and a drag-and-drop visual editor...
EXACTLY what was missing in many tools, and exactly what is required for reducing the time from quick prototype to production quality code.
Congrats on the launch. Looking forward to trying this out. Please add support for AWS Bedrock based models.
Appreciate it and will add to tracked requests. You should be able to use OpenRouter or the Vercel AI Gateway.
I think we are going through the LangChain era for agents. The world will look really different in a couple of months and the stack will be wildly different and more unified.
Yeah I think so. With LangChain, LangFlow took off because it was the "no-code" n8n style version that was layered on top of LangChain. To me it was always frustrating that it wasn't one ecosystem // fully interoperable. We're looking to make sure there's a good solution that works in either modality for agents.
How does it compare to OpenAI agents builder?
Inkeep isn't open source with an Elastic License 2.0, so why not just go with the OpenAI agents SDK (MIT)?
We made a good video about the differences between n8n, OpenAI, and Inkeep here: https://youtu.be/tRgU5FQoe3s. Short overview.
Re: OpenAI agents builder - there is a hard one-time ejection to code. You can export to their TypeScript or Python SDKs (in some limited use cases), but it's a one-way fork. Their visual canvas is meant to stay a visual canvas.
Their SDK is open source -- it's basically for calling the OpenAI APIs downstream. But their visual builder / orchestration layer is not.
Don't forget to cover Google Opal as well.
https://opal.google/landing/
Will add to the queue!
The opening line from the video [1] impressed me:
> We built an agent builder with true two-way sync between code and a drag-and-drop visual editor.
Wow, what a clear pitch. I like it.
At the same time, I think about the design space between visual/DAG editors (here, a directed graph of agent workflows) versus, say, a high-level textual configuration format (a la Dockerfiles).
- I think back ... how many visual tools have I been excited by [2] [3] [4] [5] [6], only to find that I usually prefer the textual editing most of the time? There are certainly cases where the visual editors really catch on. But on the other hand, when it comes to the programming world, it seems like the configuration format approach works more often.
- What do customers want here? (I don't have any particular expertise here) In my footnoted examples, my guess is that visual tools catch on the best when the target audience has a deep physical, even tactile, connection to the domain rather than a preference for textual representations.
Personally, I really like both. I like being able to quickly edit and share text files and also switch to a visualization. But it can be hard to make the visualization capture the necessary details without too much clutter.
All in all, delivering on two-way sync between code and visual editors might be hard. Hard is not necessarily bad. Delighting customers on both fronts could be a competitive advantage, for sure. [7]
--
I know this comment could be better organized, sorry about that. This is a "thinking out loud comment"... I haven't even touched on the "no code" and "low code" angle to it. I'd be happy to hear from others on their experiences.
[1]: https://www.youtube.com/watch?v=4FuEnAEPqwU
[2] Tools like SAS Enterprise Miner (https://www.sas.com/en_us/software/enterprise-miner.html) or Orange Data Mining: Visual Programming: (https://orangedatamining.com/home/visual-programming/)
[3]: Max for Live (integrated with Ableton for sound design)
[4]: LabVIEW (used for electrical engineering)
[5]: Various visual SQL Schema editors
[6]: Graphical views of document linkages: e.g. Obsidian, The Brain (going way back)
[7]: It may be difficult to achieve parity between the different capabilities of each. It seems to me many applications recognize that full parity isn't practical and instead let each "view" do what it does best. Traditionally, the visual approaches help with the top-level view and the code versions get into the details.
Yes! This is what I struggled with previously. A multi-agent system makes a lot of sense to the person who wired it up, less so to other people, even when looking at just the code. We architected the SDK so it feels like a declarative ORM, similar to Drizzle for databases. In a way, it is a DSL, just in TypeScript, so you get the full typesafety and devex of an IDE.
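For readers unfamiliar with the style, here's a rough, hypothetical sketch of what a declarative, ORM-flavored agent definition could look like in TypeScript. The `defineAgent` helper and all the names below are invented for illustration; they are not the Inkeep SDK's actual API, just an example of "the configuration is plain, typed data":

```typescript
// Hypothetical types illustrating a declarative, Drizzle-style agent DSL.
// None of these names come from the Inkeep SDK; they only sketch the idea
// that the whole system is typed data an IDE (or a visual editor) can read.
interface AgentDef {
  name: string;
  prompt: string;
  tools: string[];
  canDelegateTo: string[];
}

function defineAgent(def: AgentDef): AgentDef {
  return def; // identity: the value *is* the configuration
}

const triage = defineAgent({
  name: "triage",
  prompt: "Route incoming support questions to the right specialist.",
  tools: [],
  canDelegateTo: ["billing"],
});

const billing = defineAgent({
  name: "billing",
  prompt: "Answer billing questions using the invoice lookup tool.",
  tools: ["lookupInvoice"],
  canDelegateTo: [],
});

// The "graph" is just data, so a visual canvas could render or rewrite it
// and a code review could diff it like any other TypeScript file.
const graph = { entry: triage.name, agents: [triage, billing] };
```

Because the definition is a plain value rather than imperative wiring, two-way sync with a visual editor reduces to serializing and deserializing this structure.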
Being able to hand off the same system to other engineers or non-engineers in a visual format for them to own and edit makes these agents portable and explainable.
The Git-style workflow is super clever. How do teams typically collaborate around it? For example, can multiple people work on different branches of the same agent (visual and code), or is the sync more linear?
For now -- more linear. n-player simultaneous edits with instant versioning for any edit is something we're working on.
You can always "pull" a project to a staging folder so you can resolve conflicts in code manually if someone made changes in visual while you made changes in code.
Genuine question, what makes this more effective than something like N8n? Right now i'm not seeing what I could achieve within Inkeep that I couldn't on N8n. Less being snide and more being genuinely curious
Good question! Main things: with n8n you can't export to code (or import). It's all visual, so you don't get CI/CD, typesafety, etc. when you want it. That can be fine if visual is all you need.
Architecturally, n8n is good for deterministic workflows and adding some LLM nodes for data transformations and tool calling, but because their system is not truly multi-agent, a) it's not good for conversational experiences like chatbots or copilots and b) agents can't actually go back and forth with each other to solve problems with a shared conversation history, etc.
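To make the "shared conversation history" distinction concrete, here's a toy TypeScript sketch (not n8n or Inkeep code, and with no real LLM calls): two stand-in agents take turns appending to one transcript, so each turn can see everything the other said, unlike a workflow node that only sees its immediate input:

```typescript
// Toy illustration of agents sharing one conversation history, as opposed
// to a pipeline where each node only receives the previous node's output.
type Message = { from: string; text: string };

const history: Message[] = [];

// Stand-in "agent turn": reads the full shared history, then replies.
function agentTurn(name: string, reply: (seen: Message[]) => string): void {
  const text = reply(history); // sees everything said so far
  history.push({ from: name, text });
}

agentTurn("planner", () => "Draft a refund policy answer.");
agentTurn("reviewer", (seen) =>
  `Reviewed ${seen.length} prior message(s); looks good.`);
agentTurn("planner", (seen) =>
  `Finalizing based on feedback from ${seen[1].from}.`);
```

The back-and-forth works because both agents read and write the same `history`; in a strictly node-to-node workflow, the third turn couldn't reference who gave the feedback without explicit plumbing.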
Great, thank you, that makes the benefits stand out a lot to me.
You got it. Feel free to shoot us over any feedback - nick @inkeep.com.
Since this is for people who can't code, I'd make sure the debugging capabilities are beefed up. Otherwise what's going to happen when the code doesn't work? This is the whole problem with no code tools. Does the visualization help here? If not, what do the visuals add; why not stick with a simple text prompt?
Agree. The visual builder has a live tracer that shows visually the state of the execution, which can be helpful (even as an engineer). Working on other debugging utilities.
That said - for devs, you still get the TypeScript representation, so you can always interface with the system that way if you prefer.
Is this primarily for building chat-based agents? What if I want to trigger a workflow via API or webhook and then wait for some sort of human-in-the-loop verification? Do you have an example for something like that?
The visual UI + code is really cool, addresses the weaknesses of both approaches.
Both work - agents can be triggered via API just like any normal process, so they will go do the work async and post the result via e.g. an MCP of your choice to Slack, backend forms, etc. We don't have a built-in human-in-the-loop orchestration layer just yet, but since each execution has a conversation thread, you could orchestrate a way for your human-in-the-loop process to simply submit a new message with whatever was sent. Basically using the messaging as the event queue system.
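A minimal sketch of that pattern, assuming nothing about Inkeep's actual API: model the conversation thread itself as the event queue, so a human approval arrives as just one more message on the thread that the agent waits for:

```typescript
// Toy model: the conversation thread doubles as the event queue. A
// human-in-the-loop step becomes "wait until a message of kind 'approval'
// appears on the thread". All names here are illustrative.
type ThreadMessage = { kind: "agent" | "approval"; text: string };

const thread: ThreadMessage[] = [];

function agentProposes(action: string): void {
  thread.push({ kind: "agent", text: `Proposed: ${action}` });
}

// A human (via Slack, a form, etc.) would post this out of band.
function humanApproves(note: string): void {
  thread.push({ kind: "approval", text: note });
}

function isApproved(): boolean {
  return thread.some((m) => m.kind === "approval");
}

agentProposes("refund order #1234");
// ...agent pauses here; later the human responds on the same thread:
humanApproves("Approved by on-call support lead");
```

The appeal of this design is that no separate approval service is needed: the thread is already durable and ordered, so polling or subscribing to it gives you the human-in-the-loop gate for free.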
That said, yes, we deal with a lot of customer support use cases so conversational experiences were a top priority for us. It's nice to be able to interact with an agent as a workflow or conversationally.
I started to follow documentation here (https://docs.inkeep.com/self-hosting/docker-local) to run things with Docker.
-> connect to the locally installed instance of SigNoz
-> It asks for an email -> when I type it, it says: "This account does not exist. To create a new account, contact your admin to get an invite link"
-> But I am the admin :'), also tried to create an account there https://signoz.io/
-> but they refuse personal Github or Gmail accounts for now.
Conclusion
So it's literally impossible to run your app for a 'normal person' running their own server :(
Or maybe I missed a step :/
Hi! Could you look inside the generated .env file? You should be able to find the credentials there.
Oh alright, I didn't notice that!
Thanks a lot <3
Great!
I see "Get a demo" as the only call-to-action and a bunch of other venture-backed SaaS startups as your customers...so I'm guessing pricing will be...high.
This might sound vaguely pessimistic, but I'm getting the nagging feeling lately that we might be inflating a Ponzi scheme of B2B SaaS selling exclusively to other B2B SaaS companies for niche B2B SaaS use cases.
In earlier generations of SaaS the assumption was you land and expand from there to wider markets. But this seems so specifically tailored to the needs of other VC-funded B2B SaaS companies...
As the set of things that fall under [vibe code-able in 1 hour] expands, at what point does building platforms like this for semi-technical B2B SaaS employees not make sense any more?
Like, if you're smart enough to figure out that an agent connecting the OpenAI API to the Zendesk API can automate something...at what point do you just set it up yourself instead of dealing with the sales vultures at [insert YC startup] and also fighting your own internal procurement process...just so you might be able to have that agent running in 1-6 months?
In enterprise I can totally see the value in having help during setup and an external throat to choke when things go wrong, but at that point, isn't that a consulting agency masquerading as a SaaS platform?
We're working on Inkeep Cloud which will have an accessible generous tier that's based on usage. You can get notified here: https://inkeep.com/cloud-waitlist. Our goal is to make Inkeep accessible to everyone.
(ok you guys, we've taken 'open source' out of the title, let's talk about something more interesting now)
This is not an open source application. Here is the definition of open source should you wish to correct your post - https://opensource.org/osd
Inkeep SDK – Elastic License 2.0 with Supplemental Terms https://github.com/inkeep/agents?tab=License-1-ov-file#readm...
That's an extremely restrictive license, the best you could say about it is it's "source available"
Please do not call this open source.
> The Inkeep Agent Framework is licensed under the Elastic License 2.0 (ELv2) subject to Inkeep's Supplemental Terms (SUPPLEMENTAL_TERMS.md). This is a fair-code, source-available license that allows broad usage while protecting against certain competitive uses.
Fake "Open source" all over again.. why do we repeatedly have to do this? You can own the "Fair source" and call it that.
I was excited by the pitch, and then this completely ruined your image. If you'd been upfront, you could still have retained my interest.
I recommend everyone stay away from this if you care about your autonomy should your side project commercialize. There are plenty of good alternatives. The intent seems dishonest, given how brazenly every comment about the license is being ignored.
We aim to be upfront with the fair-code approach by detailing that in the post, docs, etc. Our goal is to allow for broad usage for folks using it for building assistants, copilots, workflows, etc., while protecting against direct competitive use of the platform. This helps us guarantee we can continue to innovate.
You're "source available"; don't call something open source when it's not.
You aren’t being upfront if you are calling it open source. Fair code is not open source.
This is not "open source" at all.
It looks like OpenAI really has messed around with the definition of "Open" and we are seeing lots of startups run with that to the point where the definition is meaningless.
Just like "AGI" is also meaningless.
The good news is their rugpull is their launch day since they came out of the gate with a fake open source license
For a product that nobody is actually interested in, the 10th agent builder from YC in the last month.
I mean I think this agent builder might be cool, but I'm definitely not using it when OpenAI's is MIT licensed
From what I recall, misuse of "open source" started becoming popular around the time that every dev and their dog was writing a GUI REST client with the following playbook:
1. Code up a halfway decent product.
2. Call yourself the "open source alternative to X" where X is some well-known proprietary enterprise product.
3. Once you have a community and your product has gotten stable on the backs of thousands of unpaid volunteers, pull the rug and rake in millions of those sweet, sweet enterprise dollars.
Was expecting an MIT license but was disappointed. If not, there are plenty of established “OSS” options.
We aim to be transparent with the fair-code approach by detailing that in the post, docs, readme. Our goal is to allow for broad usage for folks using it for building assistants, copilots, workflows, etc. while ensuring it can be economically viable for us long term.