OpenAI – vulnerability responsible disclosure

(requilence.any.org)

206 points | by requilence 12 hours ago

67 comments

  • requilence 12 hours ago

    Reported a flaw to OpenAI that lets users peek at others' chat responses. Got an auto-reply on May 29th, radio silence since. Issue remains unpatched :( Avoided their bug bounty due to permanent NDAs preventing disclosure even after fixes. Following standard 45-day disclosure window—users should avoid sharing sensitive data until this is resolved.

    • jonrouach 11 hours ago

      you're sure it's not their "feature" where calling the api with an empty string returns random hallucinations?

      https://jarbon.medium.com/gpt-prompt-bug-94322a96c574
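
      (For anyone who wants to check: a minimal sketch of that call, assuming the current OpenAI Python SDK and an instruct-style completions model; the model name is illustrative, not a confirmed repro.)

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          # send a completely empty prompt and see what comes back
          resp = client.completions.create(
              model="gpt-3.5-turbo-instruct",  # illustrative model choice
              prompt="",
              max_tokens=64,
          )
          print(resp.choices[0].text)  # often unrelated, rambling text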

      • requilence 11 hours ago

        No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.

        • addandsubtract an hour ago

          New Turing Test unlocked! Differentiate between real and fake hallucinations.

        • jonrouach 11 hours ago

          i had the exact same behavior back in 2023, it seemed like clearly leakage of user conversations - but it was just a bug with api calls in the software i was using.

          https://snipboard.io/FXOkdK.jpg

          • postalcoder 9 hours ago

            There was an issue with conversation leakage, though. It involved some bug with Redis.

            I felt like it was a huge deal at the time but it’s surprisingly hard to quickly google it.

            • Sebguer 9 hours ago

              It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
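
              The shape of that bug class, as a hypothetical sketch (names and logic invented for illustration, not OpenAI's actual code): a shared cache keyed without the user, so one user's cached titles get served to the next.

                  cache = {}

                  def fetch_titles_from_db(user_id):
                      # stand-in for a real database query
                      return [f"{user_id}'s chat titles"]

                  def get_conversation_titles(user_id):
                      key = "/api/conversation/titles"  # BUG: user_id not in the key
                      if key not in cache:
                          cache[key] = fetch_titles_from_db(user_id)
                      return cache[key]  # later users receive the first user's titles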

              • postalcoder 7 hours ago

                ah there it is. thanks for jogging my memory. funny to think how niche chatgpt was considered then compared to now.

        • JyB 11 hours ago

          I don’t see anything here that would prevent an LLM from generating these. Right?

          • requilence 11 hours ago

            In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`

            • Sebguer 10 hours ago

              Do you understand what a hallucination is?

              • jojobas 9 hours ago

                Coming up with accurate financial data that you can't get it to report outright doesn't seem like one.

                • Sebguer 9 hours ago

                  Models do not possess awareness of their training data. Also you are taking at face value that it is "accurate".

                • refulgentis 9 hours ago

                  I don't understand the wording

                  Accurate financial data?

                  How do we know?

                  What does the model not having the data (when web search is disabled) have to do with the claim that private chats containing the data are being leaked?

                  • 01HNNWZ0MV43FF 8 hours ago

                    > I found this company; it is real and numbers in the response are real.

                    ???

                    • refulgentis 7 hours ago

                      Which of my questions does that answer?

                      • queenkjuul 3 hours ago

                        That the financial data is accurate?

    • 999900000999 11 hours ago

      Users should always avoid sharing sensitive data.

      A lot of AI products straight up have plain text logs available for everyone at the company to view.

      • ameliaquining 11 hours ago

        Which ones? Do you just mean tiny startups and side projects and the like or is this a problem that major model providers have?

      • pyman 11 hours ago

        It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive, it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages.

        I really hope they fix this bug and start taking security more seriously. Trust is everything.

        • milkshakes 10 hours ago

          maybe you should stop trusting random people on the internet making extraordinary claims without proof then?

          • baby_souffle 9 hours ago

            Isn't "assume vulnerable" The only prudent thing to do here?

            • milkshakes 9 hours ago

              everything is vulnerable. the question is, has this researcher demonstrated that they have discovered and successfully exploited such a vulnerability. what exactly in this post makes you believe that this is the case?

            • refulgentis 9 hours ago

              No? Yes? Mu?

              After some hemming and hawing, my most cromulent thought is, having good security posture isn't synonymous with accepting every claim you get from the firehose

          • 999900000999 8 hours ago
            • ameliaquining 8 hours ago

              This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view".

    • com2kid 8 hours ago

      I see other users' conversations on my Gemini dashboard, not sure who to even complain to.

      Software quality is... minimal nowadays.

    • fcpguru 12 hours ago

      well done, sounds very reasonable and follows the rules.

      • requilence 12 hours ago

        Appreciate it. Just trying to do the right thing by both OpenAI and users here.

    • poniko 12 hours ago

      The NDA part feels really murky.

      • tptacek 11 hours ago

        It's pretty standard for bounty programs. If you don't like it, which is reasonable, do what this researcher did and just post independently.

        • asadotzler 11 hours ago

          That's an exaggeration. Most industry leaders do not require NDAs, only coordinated disclosure.

          Mozilla's program, which has been around longer than most, doesn't. Google and Microsoft don't. Meta and Apple don't.

          This is water carrying, intentional or not, for a terrible practice that should be shamed, so that it doesn't become standard.

          • tptacek 11 hours ago

            My understanding is that all Bugcrowd bounties do by default.

            You can shame it all you want, but you can also just publish your bugs directly. Nobody has to use the Bugcrowd platform. You don't even have to wait 45 days; I don't buy these "CERT/CC" rules.

        • pyman 11 hours ago

          The bug bounty world is a funny one. I remember one researcher complaining that their bug was dismissed and quietly fixed after they signed an NDA: no payout, nothing. Another got $100 instead of $5,000 because the company downgraded the severity from high to low. So they ended up with little or no money, and no recognition either. Not sure if these were edge cases, but it does make you wonder how fair the process really is.

          • tptacek 11 hours ago

            If you're dealing with large companies, a good rule of thumb is that the bounty program is incentivized to pay you out. Their internal metrics improve the more they pay; the point is to turn up interesting bugs, and the figure of merit for that is "how much did we have to spend". At a large company, a bounty that isn't paying anything out is a failure.

            All bets are off with small random startups that do bug bounties because they think they're supposed to (most companies should not run bounties). But that's not OpenAI. Dave Aitel works at OpenAI. They're not trying to stiff you.

            Simultaneous discovery (either with other researchers or, even more often, with internal assessments) is super common. What's more, you're not going to get any corroboration or context for it (that would set up a crazy bad incentive with bounty seekers, who litigate bounty results endlessly). When you get a weird and unfair-seeming response to a bounty from a big tech company, for the sake of your own sanity (and because you'll probably be right), just assume someone internal found the bug before you did, and that you reported it in the (sometimes long) window during which they were fixing it.

            • pyman 10 hours ago

              Interesting insights, thanks for sharing

    • maxlin 11 hours ago

      Permanent NDAs? Oof. It's like their plan is to just try to force the lid down till they reach ASI or something lol

      • tptacek 11 hours ago

        Again: NDAs are bog standard bounty terms.

  • winstonhowes 6 hours ago

    Hi all, I work on security at OpenAI. We have looked into this report and the model response does not contain outputs from any other users nor does it reflect a security vulnerability, compromise, or exploit.

    The original report was that submitting an audio clip close to (but not quite) 1500 seconds long to the audio transcription API would result in weird, unrelated, off-topic responses that look like they might be replies to someone else’s query. This is not what’s happening. Our API has a bug where if the tokenization of the audio (which is not strictly correlated with the audio length) exceeds a limit, the entire input is truncated, and the model effectively receives a blank query. We’re working with our API team to get this fixed and to produce more useful error messages.
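
    In rough pseudocode, the failure mode described above looks something like this (a hypothetical sketch; the limit and names are invented, not the actual implementation):

        MAX_AUDIO_TOKENS = 32_000  # illustrative limit, not the real one

        def build_model_input(audio_tokens):
            if len(audio_tokens) > MAX_AUDIO_TOKENS:
                return []  # BUG: silently truncates the whole input to nothing
            return audio_tokens  # a fix would raise a clear error instead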

    When the model receives an empty query, it generates a response by selecting one random token, then another (which is influenced by the first token), and another, and so on until it has completed a reply. It might seem odd that the responses are coherent, but this is a feature of how all LLMs work - each token that comes before influences the probability for the next token, and so the model generates a response containing words, phrases, code, etc. in a way that appears humanlike but in fact is solely a creation of the model. It’s just that in this case, the output started in a random (but likely) place and the responses were generated without any input. Our text models display the same behavior if you send an empty query, or you can try it yourself by directly sampling an open source model without any inputs.
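
    You can see this for yourself by sampling an open model with no input at all; a minimal sketch using GPT-2 via Hugging Face transformers (the model choice is arbitrary):

        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")

        # start from the beginning-of-sequence token alone: an "empty" query
        input_ids = torch.tensor([[tokenizer.bos_token_id]])

        # each sampled token conditions the next, so the text comes out
        # coherent even though no prompt was ever given
        out = model.generate(input_ids, do_sample=True, max_new_tokens=60,
                             pad_token_id=tokenizer.eos_token_id)
        print(tokenizer.decode(out[0], skip_special_tokens=True))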

    We took a while to respond to this. Our goal is to provide a reasonable response to reports. If you have found a security vulnerability, we encourage you to report it via our bug bounty program: https://bugcrowd.com/engagements/openai.

    • diggan 3 hours ago

      > If you have found a security vulnerability, we encourage you to report it via our bug bounty program

      It seems like reporting bugs/issues via that program forces you to sign a permanent NDA preventing disclosure even after the reported issue has been fixed. I'm guessing the author of this disclosure isn't the only one who avoided it because of the NDA. Is that something you could reconsider? Otherwise you'll probably continue to see people disclosing these things publicly, and as an OpenAI user that sounds like a troublesome approach.

    • 5 hours ago
      [deleted]
    • 6 hours ago
      [deleted]
  • thorum 11 hours ago

    > The leaked responses show clear signs of being real conversations: they start with contextually appropriate replies, sometimes reference the original user question, appear in various languages, and maintain coherent conversational flow. This pattern is inconsistent with random model hallucinations but matches exactly what you'd expect from misdirected user sessions.

    A model like GPT-4o can hallucinate responses that are indistinguishable from real user interactions. This is easy to confirm for yourself: just ask it to make one up.

    I’m certainly willing to believe OpenAI leaks real user messages, but this is not proof of that claim.

    • requilence 11 hours ago

      In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`

      • Xx_crazy420_xX 6 hours ago

        Did you try asking it to provide the company's data by explicitly invoking hallucination in the model?

        Right now there is no real proof, until you confirm that the data it provided cannot be hallucinated (which may not be feasible).

        Also, given the response from OpenAI staff dismissing it, would you mind sharing a PoC?

      • krainboltgreene 7 hours ago

        I’m struggling to understand why you are so adamant that this is proof.

    • astrange 9 hours ago

      GPT-4o's writing style is so specific that I find it hard to believe it could fake a user query.

      You can spot anyone using AI writing a mile away. It stopped saying "delve" but started saying stuff like "It's not X–it's Y" and "check out the vibes (string of wacky emoji)" constantly.

      • wavemode 9 hours ago

        LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't really rule out hallucination.

        If the story in OP about getting a company's private financial data is true (i.e. the numbers are correct and nonpublic), that could be a smoking gun.

        Either way, it's a bad look for OpenAI not to have responded to this. Even if the resolution turns out to be that these are just hallucinations, it should have been investigated and responded to by now if OpenAI actually cares about security.

    • robertclaus 11 hours ago

      Ya, hard to know how to react without more information.

  • ajdude 11 hours ago

        > I am issuing this limited, non‑technical disclosure:
        > No exploit code, proof‑of‑concept, or reproduction steps are included here.
    
    Then why bother? I feel a bit cynical here, but if the goal is to get this fixed, they're not going to care unless it becomes a zero day and is given to the masses; otherwise it's going to quietly be exploitable by the few unsavory groups who know of it and will never be patched. Isn't the whole point of responsible disclosure to give them a time clock to get this situated before actual publication? Forgive me if I'm wrong, I haven't been in that field in a long time.

    • tptacek 11 hours ago

      This is the security equivalent of getting Google support by getting something to the top of HN. The real audience for this post is OpenAI, not you.

    • lyu07282 8 hours ago

      It adds some pressure: we now know what the bug is about, so we can guess which endpoints to poke at, and then it's only a matter of time before it leaks. It would be unethical for the researcher to just publish it.

  • Eduard 8 hours ago

    > PGP Key: 1234 5678 9ABC DEF0 1234 5678 9ABC DEF0 1234 5678

    For real? At least it doesn't match the one on https://keybase.io/requilence

  • robswc 11 hours ago

    Reminds me of a time I found a serious issue with mailgun. Messaged them, no reply. Had to spam their twitter to get a response. Basically you could have stolen tons of API keys from users without their knowledge, and mailgun never disclosed it.

    I could have actually gone to their office in person if I wanted to be pedantic, but it seemed like a pretty weird office space lol.

    • tptacek 11 hours ago

      I don't think disclosure of reported security issues is really a norm, unless the firm finds evidence the bug was exploited (by someone other than the reporter). It's a good thing to do, but I think the majority of stuff that gets reported everywhere is never disclosed --- with the major and obvious exception of consumer or commercial software that needs to be updated "on prem".

      • robswc 10 hours ago

        Makes sense.

        The problem I have with it is that there's no way they could have determined if an API key was stolen or not, even to this day.

        Basically, their docs (which seemed auto-generated) pointed to a domain they did not own (I verified this). So if you ran any of the API examples, you sent your keys to a third party. I know because I did this. There's no way to tell that the domain in the docs is simply wrong.

        I tried explaining this to the support people, that I needed to talk with a software engineer but they kept stonewalling. I think it was fixed after 24 hours or so.

  • jofzar 11 hours ago

    I'm curious which mailbox they sent this to; finding one is surprisingly hard even with my Google searching.

    • requilence 11 hours ago

      they have a security.txt file on their domain, and it's mentioned in some other places
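
      (For reference, security.txt lives at a well-known path per RFC 9116, so checking for the contact is a one-liner; a minimal sketch:)

          import urllib.request

          # RFC 9116 well-known location for security contact info
          url = "https://openai.com/.well-known/security.txt"
          print(urllib.request.urlopen(url).read().decode())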

  • pyman 11 hours ago

    > A single misconfiguration can leak thousands of sensitive conversations in seconds. Treating privacy as an afterthought is untenable when the blast radius is this large.

    Massive security bug, well spotted. It's like Bank of America showing other people my transactions, or Meta leaking my WhatsApp messages.

    This raises some serious questions about security.

  • blibble 11 hours ago

    good to see more and more hackers refusing to use corporate bug bounty platforms with onerous terms

    I certainly wouldn't sign an indefinite NDA for a chance to win:

    Average payout: $836.36

    openai should be grateful; after all, they want all information to be free

  • JyB 11 hours ago

    I believe it is extremely important to disclose that the ‘response leaks’ you obtained did not originate from the LLM itself, but rather came through other insecure systems, in a more conventional manner.

    Just to avoid yet another case of hallucinated outputs getting misinterpreted.

    • requilence 11 hours ago

      Right, thank you for the suggestion. Just added a paragraph to the original blog post.

      • tabletcorry 10 hours ago

        Your added paragraph appears to suggest the opposite, that this was an LLM response. Was the "leaked data" a response from an LLM directly?

  • 11 hours ago
    [deleted]
  • rglover 11 hours ago

    Thank you for sharing and reporting this.

  • Eduard 8 hours ago

    POC?