This looks like some kind of marketing. Also, the equivalent of spec work. The NDA/secrecy also means any time spent on this is completely meaningless to the participants unless they win the lottery, because results can't be published.
Surely it is marketing. It’s some “we are danger” narrative, from Anthropic Mythos and now OpenAI too.
They ran a bounty on Kaggle last year but with $500k in payouts and with all results open and publishable.
https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-t...
With only $25k in payouts and everything locked down under NDA, I can't imagine many people will participate. Well, other than those submitting mountains of LLM-generated junk.
This model is much more powerful than gpt-oss-20b, notice how the contest was not even for the 120b model. Also, bio was not a subject.
The model is more powerful, so the bounty is 1/20th the size? More risk, less reward?
"Biorisk" seems to be a concept not only invented by OpenAI but exclusively taken seriously by them. I wonder if this program is less about finding actual risks than it is hopefully casting a wide net for someone to help them prove their model is relevant in this space.
Billions upon billions going to these companies.
25k reward from a selected group of people if you help us determine whether or not someone can use our tool to generate weapons of mass destruction.
It's worse than that, for partial successes they encourage people to submit the attempt but reserve the right to not pay anything (they may, at their discretion, give a partial reward if they feel like it).
That's pretty much how every bounty works... obviously it's going to be at their discretion for an incomplete attempt.
They're probably expecting that it can be done without too much effort so they just want to see all the unique ways people are doing it.
Where are the questions that are supposed to be answered? Would those be shared after an application has been accepted? If yes, why is the application asking for a proposed approach for the jailbreak if we don't know the questions in the first place?
I would assume that if you are invited to join this round you will be sent the questions. I would also assume those fall under the NDA.
Because the questions themselves are dangerous.
Probably along the lines of "how would you create a small biolab for virus research in a kitchen with $20k?" or "how do I take the DNA sequence from https://www.ncbi.nlm.nih.gov/nuccore/NC_001611.1 and assemble it?"
"Access: Application and invites. We will extend invitations to a vetted list of trusted bio red-teamers, and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform"
I don't get it. Isn't the whole point of a BBP to try to get people to find and disclose to you the exploits in question? If you gatekeep like this, then "non-trusted" people who could be your red-teamers are incentivized to still hack, but disclose their exploits to bad people for money.
I get it when there is a risk to your data or infra -- my last company engaged with HackerOne and that was an invite-only list of participants. But that was because we didn't want random people hacking in ways that could cause pain for real customers -- e.g. DDoS, or, in the event of an exploit that could cross tenant boundaries, injecting garbage, deleting data, or gaining access to sensitive info in other tenants.
Here, there's no such danger. So why not allow anyone (anyone they're legally allowed to pay, I suppose? North Koreans probably would be problematic?) to participate?
What does "a clean chat without prompting moderation" mean? What is prompting moderation?
> We will extend invitations to a vetted list of trusted bio red-teamers
Had to chuckle. This sounds like a rather exclusive group?
> $25,000 to the first true universal jailbreak to clear all five questions.
This program is a complete scam. Even if 100 people find "bugs", they will only pay out to one person.
Well, that depends on how you set up the bounty program. What if I find a solution and share it with a friend so that both of us can claim the prize?
bug bounty programs have never paid out independent disclosure for the same bug though; they might split or even pay-out larger coordinated efforts. It's largely a first place award only.
assume there exist 2+ different bugs
after the 1st bug is found, there's no payout for any of the others
that's not even the point. They are attempting to build credibility in two ways: 1. this model is SO advanced that there are huge risks, never before considered; 2. we're doing the super-responsible thing by incentivizing work that addresses this. #1 is unproven and, frankly, unlikely, which makes #2 meaningless. The fact that the "prize" is so low and structured this way suggests that they're not that concerned, but do think it's likely a bunch of people will find things. If they truly thought their model was that good, they'd be confident issues would be both rare and very critical, and would offer huge rewards with no limits, because they'd be much more confident no one would claim them.
Yes, I was about to edit in that I think this is simply a media/PR stunt before I got so many replies so quickly. They get bonus points because the structure is so insulting that it may not attract many serious participants; in that case it may go unbroken, and then they can go to the media and proclaim "look, we offered a reward, but nobody broke it! Our model is objectively the safest in the world!".
How is that a scam? You don't get participation awards for solving half of a puzzle...
I didn't say anything about partial solutions. The puzzle can have multiple full solutions. Or does the software you write only have exactly one bug? If so, that's impressive, in multiple ways, including the fact that you're able to identify that there's exactly one bug but not what the bug is and fix it.
I could probably do this, but why on earth would I want to immediately put myself on a list as a dangerous person? The main problem is that even if they somehow closed every point of failure in GPT-5.5, which they can't, you can distill a new model from GPT-5.5 or any other model and get anything you'd want in probably under 4B parameters. A lot of this is theater so they don't get sued as easily when it inevitably happens.
How can you distill a model from a closed-weights model like this? I've never heard of model reverse engineering.
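You don't need the weights, only query access: ask the teacher lots of questions, record its answers, and train a student on those input/output pairs. A minimal toy sketch of the idea (the "teacher" here is just a hidden decision rule standing in for a closed-weights API; all names and numbers are illustrative, not any real model):

```python
import numpy as np

# Black-box "teacher": we can only query it, never see its internals,
# standing in for a closed-weights model behind an API.
def teacher(x):
    # hidden decision rule the student never sees directly
    return (x[:, 0] + 2 * x[:, 1] > 1.0).astype(float)

rng = np.random.default_rng(0)

# Step 1: query the teacher on inputs we choose (API-style access).
X = rng.uniform(-2, 2, size=(5000, 2))
y = teacher(X)

# Step 2: fit a small "student" (logistic regression via gradient
# descent) purely on the teacher's input/output pairs.
w, b = np.zeros(2), 0.0
lr = 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(X)
    b -= lr * np.mean(p - y)

# Step 3: the student now imitates the teacher on fresh inputs.
X_test = rng.uniform(-2, 2, size=(1000, 2))
student_pred = (1 / (1 + np.exp(-(X_test @ w + b)))) > 0.5
agreement = np.mean(student_pred == (teacher(X_test) > 0.5))
print(f"student/teacher agreement: {agreement:.2%}")
```

Real LLM distillation works the same way in spirit, just with prompts and sampled completions instead of 2-D points, which is why closed weights alone don't prevent it.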
If anybody is wondering what bio-bugs are: I had a heck of a time getting CG to (finally) tell me. It's where the user can get the model to guide them in doing things like constructing things that are hazardous in the domain of biology.
Eg you can get answers about what ricin is but not how to weaponise it. Actionable stuff they shouldn't be able to legally/ethically action.
This is to match what Anthropic said they already did with Mythos in the (200-page) Mythos system card.
are the 5 questions you need to get it to answer under NDA?
Almost certainly.
Codex desktop app is barely usable... The perf issues are left to languish in their backlog
The only controversial thing is that it's not useful enough to be worth posting on this forum.
OpenAI wants to pay for privately disclosed security and wants to call that a bug bounty. That makes sense.
People interested in bug bounty programs aren't eligible. That’s … fine?
* Highly unlikely to win
* Relatively paltry reward
* NDA on findings
This is functionally equivalent to an internship where the reward is the experience, and the resume building, but you can't talk about what you did.
All for a company that is getting tens of billions of dollars in deals from the largest tech companies in the world.
I suppose the hope is that there are job offers somewhere along the line.
Ah, now I understand why all my chats are getting flagged for biosafety issues these days. (I asked it to create an illustration about gene drives for a high school level audience once.)
What a farce, these questions are not even public and most likely will never be. You can't even participate if you're not "trusted" I guess.
So this is just a PR post, not that I even think the "biosafety" makes any sense but still.
"Accepted applicants and collaborators must have existing ChatGPT accounts to apply, and will sign a NDA."
Ah, good old NDA. Always buying silence. That's why I don't participate in any such "bounty" programs. Signing an NDA is like signing with the devil: you restrict what people are allowed to discuss. I've had that happen before -- when you sign an NDA you basically submit yourself to silence. Imagine journalists being stifled by NDAs.
Unironically bad. We need a lone-wolf to successfully execute an attack now while it's still relatively benign so we can scare the hell out of the world while it's still a mid-tier virus. No way is someone going to make a humanity killing virus with GPT 5.5, but it might be possible with GPT 20 circa 2040.
Similar argument for why we HAD to use nukes at the end of WW2. If we hadn't, the nuclear taboo likely wouldn't have existed and we'd likely have had a worse nuclear war in our more recent history.
How did the dupe detector miss https://news.ycombinator.com/item?id=47879102 ?
@dang?
$25K. Really? They make $65 million a day, so they pay you what they earn in about 33 seconds for a critical vulnerability. WTF
Well they lose $100M a day, so...
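The "~33 seconds" figure above checks out as back-of-envelope arithmetic (taking the commenter's $65M/day revenue claim at face value, not verifying it):

```python
# Back-of-envelope check of the "~33 seconds" claim
# (the $65M/day figure is the commenter's, not verified).
daily_revenue = 65_000_000              # claimed revenue, $/day
bounty = 25_000                         # top payout, $
revenue_per_second = daily_revenue / 86_400   # seconds in a day
seconds_to_earn_bounty = bounty / revenue_per_second
print(round(seconds_to_earn_bounty))    # ≈ 33
```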
Check with the dark net markets first before claiming the bounty. Remember, this company has 0.0 fucks to give about the impact of their tech on employment, artists, or use in committing fraud, as long as number-go-up they are happy. Your actions should match theirs.
This is just free / severely-underpaid-on-average labor. Very disgusting.
Ah yes, “free” as in “paid.” Certainly you’re welcome to not participate.
Free as in "free" for >99% of participants, even successful ones, because they will have hundreds or thousands of participants but will only pay out to one of them no matter how many vulnerabilities are found.
Depending on industry, that payout can be less than a security audit. You only get a chance of getting paid. You don't even know if they gave the LLM the answers that you are supposed to recover.