Interesting timing - I've been working on something complementary to this. TBN Protocol focuses on the identity and trust layer between agents themselves (cryptographic certificates, verified handshakes) rather than the database layer. The two could work well together. What's your approach to agent identity verification?
Interesting times indeed. We are running protocol-aware proxying doing the same, including masking, for 10+ databases, no tool change required [1]
[1] https://adaptive.live/
Currently we are not tackling agent identity verification; however, it can be done by generating a public/private key pair and authenticating based on that.
Faz is a middleware that sits between AI agents and databases and ensures all queries pass through safety pipelines, so that your agents can't nuke your databases or access data that they are not supposed to.
Nice concept! I’m concerned that the LLM might discover `faz.yaml` and directly access the databases. Wouldn’t it be more deterministic and safer to wrap the database itself and use a safety-pipeline-enabled DB instead?
It will definitely do that. That is why we believe you have to make it protocol-aware. For example, if it is MySQL, it should be a proxy that translates as is; then it can't bypass. Also, the proxy should not reveal real credentials.
That’s awesome! When the time comes, definitely do a Show HN post. I’m looking forward to it!
A good MCP. Just curious about the reason you picked MCP instead of a skill; can it be done by a skill as well?
It can be; however, it's not fully deterministic, because the agent can hallucinate or misbehave, which is fairly common. Because MCP is a separate process, we can ensure that the safety pipeline is fully followed.
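An added sketch, not part of the thread and not Faz's or Adaptive's code, of the design discussed above: a gateway that holds the only database connection, hands agents an opaque token instead of real credentials, and enforces a default-deny policy with column masking when each statement is prepared. The in-memory SQLite database, the table, the policy sets and the `Gateway` class are assumptions made for the illustration.

```python
import hmac
import secrets
import sqlite3

# Assumed policy for this sketch (not Faz's or any product's configuration).
READABLE_TABLES = {"orders"}
MASKED_COLUMNS = {("orders", "card_number")}


class Gateway:
    """Owns the real connection; agents only ever hold an opaque token."""

    def __init__(self):
        # In-memory SQLite stands in for the real database and its credentials.
        self._db = sqlite3.connect(":memory:")
        self._db.executescript(
            "CREATE TABLE orders (id INTEGER, item TEXT, card_number TEXT);"
            "INSERT INTO orders VALUES (1, 'book', '0000-0000-0000-0000');"  # constructed row
        )
        # Installed after setup: from here on every statement is vetted at prepare time.
        self._db.set_authorizer(self._authorize)
        self._token = secrets.token_hex(16)

    def issue_token(self):
        return self._token

    @staticmethod
    def _authorize(action, arg1, arg2, dbname, source):
        if action == sqlite3.SQLITE_SELECT:
            return sqlite3.SQLITE_OK
        if action == sqlite3.SQLITE_READ and arg1 in READABLE_TABLES:
            # SQLITE_IGNORE makes SQLite substitute NULL for the column value.
            masked = (arg1, arg2) in MASKED_COLUMNS
            return sqlite3.SQLITE_IGNORE if masked else sqlite3.SQLITE_OK
        return sqlite3.SQLITE_DENY  # default deny: writes, DDL, PRAGMA, ATTACH, ...

    def query(self, token, sql):
        if not hmac.compare_digest(token, self._token):
            raise PermissionError("unknown agent token")
        try:
            return self._db.execute(sql).fetchall()
        except sqlite3.DatabaseError as exc:
            raise PermissionError(f"blocked by policy: {exc}") from None


gw = Gateway()
tok = gw.issue_token()
print(gw.query(tok, "SELECT id, item, card_number FROM orders"))
```

The policy lives in the process that owns the connection, so it applies whatever SQL text the agent produces; it only holds if the agent has no other network path or credentials to the database.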