17 comments

  • qxxx 2 hours ago

    Just tried it out. Works fine. Love it! I tried it with a WordPress site. It is showing hundreds of SQL queries in one request (that's probably why that WordPress site is so slow lol)

    What I would love to see here is:

    - some kind of sorting, e.g. by execution time or order, so I can see the slowest queries.

    - a search/filter feature.

    - faster scrolling with PgUp/PgDn keys.

    - maybe a count of how often the same query was executed, so I could check the code and optimize those queries.

  • buremba 4 hours ago

    This is very neat! IMO inspecting the queries the agents run on the database is a better way to understand what the code actually does than reviewing the code itself.

    I just tried it and it works smoothly. For those who don't want to plug their agents into their database directly, I built a similar tool, https://dbfor.dev, for exactly this purpose: it embeds PGLite and implements the PG wire protocol to spin up quick PG databases with a traffic viewer included.

  • camel_gopher 2 hours ago

    Why do you need a proxy? Pull the queries off the network. You’re adding latency to every query!

    https://github.com/circonus-labs/wirelatency

    • Sentinel-gate 13 minutes ago

      The proxy vs packet capture debate is a bit of a non-debate in practice — the moment TLS is on (and it should always be on), packet capture sees nothing useful. eBPF is interesting for observability but it works at the network/syscall level — doing actual SQL-level inspection or blocking through eBPF would mean reassembling TCP streams and parsing the Postgres wire protocol in kernel space, which is not really practical.

      I've been building a Postgres wire protocol proxy in Go, and the latency concern is the thing people always bring up first, but it's the wrong thing to worry about. A proxy adds microseconds; your queries take milliseconds. Nobody will ever notice.

      The actual hard part, the thing that will eat weeks of your life, is implementing the wire protocol correctly. Everyone starts with simple query messages and thinks they're 80% done. Then you hit the extended query protocol (Parse/Bind/Execute), prepared statements, COPY, notifications, and you realize the simple path was maybe 20% of what Postgres actually does.

      Once you get through that, though, monitoring becomes almost a side effect. You're already parsing every query, so you can filter them, enforce policies, do tenant-level isolation, rotate credentials: things that are fundamentally impossible with any passive approach.
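
      To make that concrete, here's a minimal sketch (illustrative only, not lifted from my proxy) of the framing every message uses after startup: a one-byte type, a four-byte big-endian length that counts itself but not the type byte, then the payload. 'Q' carries a simple query; 'P' is the Parse step of the extended protocol:

        package main

        import (
            "bufio"
            "bytes"
            "encoding/binary"
            "fmt"
            "io"
        )

        // readMessage reads one frontend message after startup: a 1-byte
        // type, a 4-byte big-endian length (counts itself, not the type
        // byte), then the payload.
        func readMessage(r *bufio.Reader) (byte, []byte, error) {
            typ, err := r.ReadByte()
            if err != nil {
                return 0, nil, err
            }
            var length uint32
            if err := binary.Read(r, binary.BigEndian, &length); err != nil {
                return 0, nil, err
            }
            payload := make([]byte, length-4)
            if _, err := io.ReadFull(r, payload); err != nil {
                return 0, nil, err
            }
            return typ, payload, nil
        }

        // cstring returns the NUL-terminated string at the start of b.
        func cstring(b []byte) string {
            if i := bytes.IndexByte(b, 0); i >= 0 {
                return string(b[:i])
            }
            return string(b)
        }

        func main() {
            // Hand-built 'Q' (simple query) message, for demonstration.
            var buf bytes.Buffer
            query := []byte("SELECT 1\x00")
            buf.WriteByte('Q')
            binary.Write(&buf, binary.BigEndian, uint32(4+len(query)))
            buf.Write(query)

            typ, payload, _ := readMessage(bufio.NewReader(&buf))
            switch typ {
            case 'Q': // simple query: payload is one SQL string
                fmt.Println("simple query:", cstring(payload))
            case 'P': // Parse: statement name, then query text, then params
                name := cstring(payload)
                fmt.Printf("parse %q: %s\n", name, cstring(payload[len(name)+1:]))
            }
        }

      Everything else (Bind, Execute, COPY, and the rest) layers on top of that same framing, which is why getting the loop above right is the foundation for all of it.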

    • nimrody 2 hours ago

      Won't work for SSL-encrypted connections (but, yes, the proxy does add some latency)

  • debarshri 3 hours ago

    We do something similar in adaptive [1].

    What you can also do is add a frontend and a backend user to the proxy, so the agents never get the actual DB user and password. You can make the frontend credentials throwaway, or even just-in-time, if you want.
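
    Rough sketch of the idea (names are made up): the agent only ever holds the throwaway frontend login, and the proxy swaps in the real backend credentials when it dials the database:

      package main

      import (
          "errors"
          "fmt"
      )

      // backendCred is the real database login; it never leaves the proxy.
      type backendCred struct {
          user, password string
      }

      // Throwaway frontend user -> real backend user. In a real proxy this
      // would live in a store, and entries could be minted just in time.
      var credMap = map[string]backendCred{
          "agent-7f3a": {user: "app_rw", password: "real-secret"},
      }

      // resolve maps the frontend login the agent presented to the backend
      // credentials the proxy should use when connecting to the database.
      func resolve(frontendUser string) (backendCred, error) {
          be, ok := credMap[frontendUser]
          if !ok {
              return backendCred{}, errors.New("unknown frontend user")
          }
          return be, nil
      }

      func main() {
          be, err := resolve("agent-7f3a")
          if err != nil {
              panic(err)
          }
          fmt.Printf("dialing backend as %s (password never sent to client)\n", be.user)
      }

    Revoking an agent is then just deleting its map entry; the real credentials never need to rotate because of it.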

    Traditionally this was called database activity monitoring, which kind of fell out of fashion, but I think it is going to come back with the advent of agents.

    [1] https://adaptive.live

  • altmanaltman 3 hours ago

    Looks really cool, will try it out soon

  • stephenr 3 hours ago

    Can you explain how this is a better option than just enabling the general log for MySQL as needed?

    • ahoka 2 hours ago

      Or log_statement = 'all' in Postgres.
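
      Both are runtime toggles if you have superuser access, e.g.:

        -- Postgres: log every statement, then reload the config
        ALTER SYSTEM SET log_statement = 'all';
        SELECT pg_reload_conf();

        -- MySQL: enable the general query log
        SET GLOBAL general_log = 'ON';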

    • anonymous344 2 hours ago

      Yes, this was my first question.

      Why would I inspect this data? Maybe to find the cause of a problem, but are there any other reasons?

  • Spixel_ 2 hours ago

    Maybe consider renaming this since pgTAP [0] exists and has nothing to do with this.

    [0]: https://pgtap.org/

  • nwellinghoff 5 hours ago

    Nice. I like that you made it an easy-to-drop-in proxy. Will definitely use this when debugging issues!

  • CodeWriter23 3 hours ago

    Really been wanting something like this. Thanks!

  • ranger_danger an hour ago

    I prefer to use eBPF; no additional software, proxy or configuration needed.

    https://eunomia.dev/tutorials/40-mysql/

  • sneak 2 hours ago

    Was AI used to build this? It looks a lot like the kind of scratch-an-itch projects I have been grinding out with AI lately, in size, timeline, code, and function. If not, you are a very very productive programmer.

    If so, would you mind sharing which model(s) you used and what tooling?

  • jauntywundrkind 4 hours ago

    That's some sick observability, nice.