What's interesting, as someone who is working on such a tool, is that OpenAI is providing the platform for people to build on its models AND competing in the same space at the same time. I think it shows that if you're trying to grab a large TAM in a software space where OpenAI can compete, they will compete against you.
So the strategy, it seems, is to find a more niche application that OpenAI won't be interested in competing in, or to find something that involves much more than just software + AI.
Interestingly, this is designed to be useful only defensively.
It's all too easy to imagine an AI agent security researcher that is the other way round.
Why would you need one? The agent is red-teaming the source code already; the only vulnerability not covered is the humans.