Aqua: A CLI message tool for AI agents

(github.com)

29 points | by lyricat 4 hours ago

16 comments

  • resfirestar an hour ago

    With the disclaimer that I haven't tried to set up any kind of agent-to-agent messaging, so it may be obvious to those who have: what's the reason I would want something like this rather than just letting agents communicate over some existing messaging protocol that has a CLI (like, I don't know, GPG email)?

    • sitkack 31 minutes ago

      It is a fun problem to play with, but it turns out you can use anything. I use a directory per recipient and throw anything I want in there. Works fine; LLMs are 1000x more flexible than any human mind.
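      That directory-per-recipient scheme fits in a few lines. A minimal sketch (a toy illustration, not the parent's actual setup; all names here are made up):

```python
import json
import time
import uuid
from pathlib import Path

def send(root, recipient, body, sender="agent-a"):
    """Drop one message as a uniquely named JSON file in the recipient's inbox."""
    inbox = root / recipient
    inbox.mkdir(parents=True, exist_ok=True)
    # Nanosecond timestamp prefix keeps filenames sortable; UUID breaks ties.
    path = inbox / f"{time.time_ns()}-{uuid.uuid4().hex}.json"
    path.write_text(json.dumps({"from": sender, "body": body}))
    return path

def receive(root, recipient):
    """Read and remove all pending messages for a recipient, oldest first."""
    inbox = root / recipient
    if not inbox.exists():
        return []
    messages = []
    for path in sorted(inbox.glob("*.json")):
        messages.append(json.loads(path.read_text()))
        path.unlink()
    return messages
```

      Any agent with filesystem access can "send" by writing a file and "poll" by listing its own directory; no daemon or protocol required.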

  • vessenes 2 hours ago

    Ooh cool. I’ve been hacking on something very similar, https://qntm.corpo.llc/. I’d love to compare notes — been thinking a lot about the group messaging side.

    • handfuloflight an hour ago
      • vessenes 22 minutes ago

        yes, still private. the main tools aren't ready to deploy. but it is pushed out to pypi and runnable with `uvx qntm --help`. I'm hoping next day or two. send me an email and I'll let you know when it's ready for public code review. I'll definitely want eyes on it.

  • esafak 2 hours ago

    I'd rename it; aqua is also a CLI version manager. https://aquaproj.github.io/

  • roxolotl 2 hours ago

    I wonder what something like rabbitmq could look like for this. Agents could subscribe to chosen topics: a topic per agent, plus a topic for each relevant subject.

    • UperSpaceGuru 25 minutes ago

      Tried this; since agents are non-deterministic, this is where tools come in handy.

  • handfuloflight 2 hours ago

    So many primitives. All for the taking. Danke.

  • JohnMatthias 2 hours ago

    Everyone and their Uncle Bob have been scrambling to leverage LLM Agents for Process/Task/Message Scheduling and Orchestration with Durable Execution. They have been worshiping Peter Steinberger as their champion and the God of LLM Agents. Meanwhile, Temporal.IO has quietly partnered with Apple to Schedule and Orchestrate all of their services with Durable Execution.

    It's funny how everyone assumes that using Inference for Deterministic Tasks like Mathematics and Compiler Optimization is a good idea. Reality doesn't agree. Wasting Electricity and Precious Minerals on Inference Compute is Reality. Compilers and Schedulers are deterministic; your LLM is not. You cannot infer Mathematics and assume the correct answer; we have Calculators and Compilers for a reason. Scheduling Algorithms have existed since the 1950s, just like Inference Algorithms.

    Let me introduce you to a few of my friends: Make, Task, Dagu, Windmill, Rivet, Inngest, OVH/uTask, OVH/cds, Restate, Woodpecker CI, Erlang BEAM VM, Gradle, Zig Build, Cargo, Linux Package Managers, Bazel... Shall I go on? Keep your AGENTS.MD; we have Temporal.IO at home. Thank you for your Contributions to Open Source, Maxim Fateev.

    Betting the US Economy on LLM Chat Bots was a bad idea, my beautiful friends. Remember Elizabeth Holmes? Mortgage-Backed Securities? Scam Altman must be laughing from his Tower of Evil right now...

    • linkregister 2 hours ago

      Why did you capitalize every noun?

      • JohnMatthias an hour ago

        For emphasis. Something sorely lacking in the AI Fraud Circus is the emphasis on that Fraud.

    • dakolli 2 hours ago

      I approve of this schiz'd response; it's on haqq as far as I'm concerned. It's funny to see everyone constantly arguing about "how can I optimize context and improve reliability, etc. etc."

      What they want is a deterministic process.

      The problem is they, like most humans, are lazy and want a stochastic parrot to create this solution for them. Even if it means atrophying their brain, and paying a billionaire for access to their thinking machine. Humans are lazy; it's the same reason people drive 3 blocks instead of walking, or pay a billionaire for a rent-a-serf service to pick up their food instead of getting off the couch. LLMs are no different here, but the stakes are just much higher if your brain "muscles" atrophy as opposed to your legs'.

      They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.

      • handfuloflight an hour ago

        "Whoever says the people are ruined, he himself is ruined." To paraphrase, but that's actual haqq.

      • resfirestar an hour ago

        >They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.

        I realize this feels good to write and that's why people say it, but I can't help chuckling at seeing it combined with "stochastic parrot" in the same comment since the two descriptions are mutually exclusive...

        • dakolli 4 minutes ago

          You spent too much time using "Think for Me SaaS" and your brain doesn't work anymore...