Show HN: Axe, a 12MB binary that replaces your AI framework

(github.com)

61 points | by jrswab 4 hours ago

51 comments

  • jrswab 3 hours ago

    I built Axe because I got tired of every AI tool trying to be a chatbot.

    Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.

    Axe treats LLM agents like Unix programs. Each agent is a TOML config with one focused job: code reviewer, log analyzer, commit message writer. You run them from the CLI, pipe data in, and get results out. You can chain them together with pipes, or trigger them from cron, git hooks, or CI.

    What Axe is:

    - 12MB binary, two dependencies: no framework, no Python, no Docker (unless you want it)

    - Stdin piping: `git diff | axe run reviewer` just works

    - Sub-agent delegation: agents call other agents via tool use, with a depth limit

    - Persistent memory: if you want it, agents can remember across runs without you managing state

    - MCP support: Axe can connect any MCP server to your agents

    - Built-in tools: web_search and url_fetch out of the box

    - Multi-provider: bring what you love to use (Anthropic, OpenAI, Ollama, or anything in models.dev format)

    - Path-sandboxed file ops: agents stay locked to a working directory

    Written in Go. No daemon, no GUI.
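
    For a concrete picture, a chained run looks something like this (the `reviewer` agent is from the example above; the other agent names and the cron schedule are just illustrations):

      # review a diff, then turn the review into a commit message
      git diff | axe run reviewer
      git diff | axe run reviewer | axe run commit-writer

      # or trigger an agent from cron, e.g. a nightly log summary
      0 2 * * * cat /var/log/app.log | axe run log-analyzer >> $HOME/axe-reports.log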

    What would you automate first?

    • bensyverson 3 hours ago

      It's exciting to see so much experimentation when it comes to form factors for agent orchestration!

      The first question that comes to mind is: how do you think about cost control? Putting a ton in a giant context window is expensive, but unintentionally fanning out 10 agents with a slightly smaller context window is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budget when using Axe.

      • jrswab 2 hours ago

        > how you would approach budget when using Axe

        Great question, and it's something I've not dug into yet. But I see no problem adding a way to cap agents by token count or something similar to keep the cost for the user within reason.

    • hamandcheese an hour ago

      > Each agent is a TOML config with a focused job. Such as code reviewer, log analyzer, commit message writer. You can run them from the CLI, pipe data in, get results out.

      I'm a bit skeptical of this approach, at least for building general purpose coding agents. If the agents were humans, it would be absolutely insane to assign such fine-grained responsibilities to multiple people and ask them to collaborate.

    • dumbfounder 2 hours ago

      Now what we need is a chat interface to develop these config files.

    • punkpeye 3 hours ago

      What are some things you've automated using Axe?

      • jrswab 2 hours ago

        I have a few flows I'm using it for, and a growing list of things I want to automate. Basically, if there's a process that normally takes a human (like creating drafts or running scripts with variable data), I make Axe do it.

        1. YouTube summaries: I pass in a YouTube video, the first agent calls an API to get the transcript, the second converts that transcript into a blog-like post, and the third uploads that post to Instapaper.

        2. Blog post drafting: I talk into my phone's notes app, which syncs via Syncthing. The first agent takes that text and searches my note system for related information, then passes my raw text and the notes to the next agent to draft a blog post. A third agent takes out all the em dashes because I'm tired of removing them. Once that's done, I read and edit the draft until it's exactly what I want.
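
        In shell terms, the first flow is basically a pipe like this (agent names are illustrative; the transcript API call happens inside the first agent via tool use):

          echo "https://www.youtube.com/watch?v=VIDEO_ID" \
            | axe run yt-transcript \
            | axe run transcript-to-post \
            | axe run instapaper-upload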

    • let_rec 2 hours ago

      Is there Gemini support?

      • jrswab an hour ago

        Not yet, but it will be easy to add. If you need it, can you create an issue on GitHub? I should be able to get that in today.

    • zrail 2 hours ago

      Looks pretty interesting!

      Tiny note: there's a typo in your repo description.

      • jrswab 2 hours ago

        nooo! lol but thanks, I'll go hunt it down.

    • ufish235 3 hours ago

      Why is this comment an ad?

      • ForceBru 3 hours ago

        This is the OP promoting their project — makes sense to me

      • stronglikedan 2 hours ago

        How can it be an ad if it's not selling anything? Seems like a proud parent touting their child to me.

        • jrswab 2 hours ago

          I am pretty proud of this one :)

      • zrail 2 hours ago

        It's a Show HN. That's the point.

      • lovich 2 hours ago

        Because they had an AI write it. Their other comments seem organic but the one you’re responding to does not

  • reacharavindh an hour ago

    Reminded me of this from my bookmarks.

    https://github.com/chr15m/runprompt

  • btbuildem 2 hours ago

    I really like seeing the movement away from MCP across the various projects. Here the composition of the new with the old (the ol' Unix composability) seems to mesh very nicely.

    OP, what have you used this on in practice, with success?

    • jrswab an hour ago

      I've shared a few of the flows I use a lot in some other comments here.

  • 0xbadcafebee 2 hours ago

    Nice. There's another one also written in Go (https://github.com/tbckr/sgpt), but I'll try this one too. I love that open source creates multiple solutions and you can choose the one that fits you best.

    • jrswab 2 hours ago

      Thanks! Looks like sgpt is a cool tool. Axe is oriented around automation rather than interaction like sgpt: instead of asking something interactively, you define an agent once and hook it into a workflow.

  • armcat 3 hours ago

    Great work! Kind of reminds me of ell (https://github.com/MadcowD/ell), which had this concept of treating prompts as small individual programs and you can pipe them together. Not sure if that particular tool is being maintained anymore, but your Axe tool caters to that audience of small short-lived composable AI agents.

    • jrswab 2 hours ago

      Thanks for checking it out! And yes, the tool is indeed catering to that crowd. It's a need I have, and I thought others could use it as well.

  • mark_l_watson 3 hours ago

    If I have time I want to try this today because it matches my LLM-based work style, especially when I am using local models: I have command line tools that help me generate large one-shot prompts that I just paste into an Ollama REPL, then I check back in a while.

    It looks like Axe works the same way: fire off a request and later look at the results.

    • jrswab 2 hours ago

      Exactly! I also made it so you can chain them together, so each agent only gets what it needs to complete its one specific job.

  • swaminarayan an hour ago

    Axe treats LLM agents like Unix programs—small, composable, version-controllable. Are we finally doing AI the Unix way?

    • jrswab an hour ago

      That's my dream.

  • creehappus 32 minutes ago

    I really like the project, although I would prefer a JSON5 config over TOML, which I find annoying to reason about.

  • TSiege 2 hours ago

    This looks really interesting. I'm curious to learn more about security around this project. There's a small section, but I wonder if there's more to be aware of, like prompt injection.

    • jrswab 2 hours ago

      I'm happy you brought this up. I've been thinking about it and working on a plan to make things as solid as possible. For now, the best approach is to run each agent in a Docker container (there is an example Dockerfile in the repo) so any destructive actions stay contained to the container.

      However, this does not help if a person gives an agent access to something like Google Calendar and a prompt tells the LLM to be destructive against that account.
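
      Roughly, the containment looks something like this (the image tag, mount path, and the assumption that the image's entrypoint is the axe binary are just an example; adjust for how you build the Dockerfile):

        # build the example Dockerfile, then pipe a diff into a contained run
        docker build -t axe .
        git diff | docker run --rm -i -v "$PWD:/work" -w /work axe run reviewer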

  • jedbrooke 3 hours ago

    Looks interesting. I agree that chat is not always the right interface for agents, and an LLM-boosted CLI sometimes feels like the right paradigm (especially for dev-related tasks).

    How would you say this compares to similar tools like Google's Dotprompt? https://google.github.io/dotprompt/getting-started/

    • jrswab 2 hours ago

      I've not heard of that before but after looking into it I think they are solving different problems.

      Dotprompt is a prompt template format that lives inside app code to standardize how we write prompts.

      Axe is an execution runtime you run from the shell. There's no code to write (unless you want the LLM to run a script). You define the agent in TOML, run it with `axe run <agent name>`, and pipe data into it.

  • Orchestrion 2 hours ago

    The Unix-style framing resonates a lot.

    One thing I’ve noticed when experimenting with agent pipelines is that the “single-purpose agent” model tends to make both cost control and reasoning easier. Each agent only gets the context it actually needs, which keeps prompts small and behavior easier to predict.

    Where it gets interesting is when the pipeline starts producing artifacts instead of just text — reports, logs, generated files, etc. At that point the workflow starts looking less like a chat session and more like a series of composable steps producing intermediate outputs.

    That’s where the Unix analogy feels particularly strong: small tools, small contexts, and explicit data flowing between steps.

    Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.

    • jrswab 2 hours ago

      > Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.

      Yes! I run a ghost blog (a blog that does not use my name) and have Axe produce artifacts. The flow: I send the first agent a text file of my brain dump (normally spoken), it searches my note system for related notes and saves them to a file, then passes everything to agent 2, which turns that dump into a blog draft and saves it to a file. Agent 3 then takes the draft, cleans it up to how I like it, and saves it. From that point I read it, make my own edits, and publish.
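
      The agents write those files themselves through the path-sandboxed file tools, but the shape is basically the same as redirecting between steps (file and agent names are illustrative):

        axe run note-collector < brain-dump.txt > research.md
        axe run draft-writer   < research.md    > draft.md
        axe run style-cleaner  < draft.md       > post.md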

      • Orchestrion an hour ago

        That’s a really nice pipeline. The “save to file between steps” pattern seems to appear very naturally once agents start doing multi-stage work.

        One thing I’ve noticed when experimenting with similar workflows is that once artifacts start accumulating (drafts, logs, intermediate reports, etc.), you start running into small infrastructure questions pretty quickly:

        – where intermediate artifacts live
        – how later agents reference them
        – how long they should persist
        – whether they’re part of the workflow state or just temporary outputs

        For small pipelines the filesystem works great, but as the number of steps grows it starts to look more like a little dataflow system than just a sequence of prompts.

        Do you usually just keep everything as local files, or have you experimented with something like object storage or a shared artifact layer between agents?

  • nthypes 3 hours ago

    There is no "session" concept?

    • jrswab 2 hours ago

      Not yet, but it's on the short list to implement. What would you need from a session for single-purpose agents? I'm seeing it more as a way to track what's been done.

  • saberience 2 hours ago

    I’m having trouble understanding when/where I would use this? Is this a replacement for pi or codex?

    • jrswab 2 hours ago

      This is not a replacement for either, in my opinion. Apps like codex and pi are interactive, but Axe is non-interactive: you define an agent once and then trigger it however you please.

  • a1o 3 hours ago

    Is the axe drawing actually a hammer?

  • Lliora 2 hours ago

    12MB for an "AI framework replacement"? That's either brilliant compression or someone's redefining "framework" to mean "toy model that works on my laptop." Show me the benchmarks on actual workloads, not the readme poetry.

    • jrswab 2 hours ago

      This is not an LLM but a binary that runs LLMs as single-purpose agents that can chain together.

      • mrweasel an hour ago

        Yeah I was disappointed by that too.