12 comments

  • 7222aafdcf68cfe 21 hours ago

    If your threat model requires a high level of privacy, there is no case in which you can use any of these tools and providers. Those goals are mutually exclusive.

  • f30e3dfed1c9 a day ago

    Safest bet is to assume that OpenAI is lying about everything. Don't know why but would guess that (1) they often consider it to be to their advantage and (2) they have no moral compunction against it. It's just the way they are.

    • AznHisoka 15 hours ago

      This. Don’t ever trust anything it says, especially about itself or how it works under the hood.

  • drewbug a day ago

    IP-based geolocation. Really annoying that we can't disable it.
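
    To illustrate what IP-based geolocation looks like server-side, here is a minimal sketch. The address ranges and city names are made up for the example, and the dict stands in for a real GeoIP database (e.g. MaxMind's GeoLite2), which works on the same principle of mapping address ranges to coarse locations:

```python
import ipaddress

# Hypothetical stand-in for a real GeoIP database: maps address
# ranges (CIDR networks) to coarse location strings.
GEOIP_RANGES = {
    ipaddress.ip_network("203.0.113.0/24"): "Berlin, DE",
    ipaddress.ip_network("198.51.100.0/24"): "Austin, US",
}

def locate(ip: str) -> str:
    """Return a coarse location guess for an IP, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for network, city in GEOIP_RANGES.items():
        if addr in network:
            return city
    return "unknown"

print(locate("203.0.113.42"))  # prints "Berlin, DE"
```

    This is why the lookup happens on the provider's side of the connection: the server sees your source IP on every request, so there is nothing for the client to disable.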

    • kypro 19 hours ago

      That was my initial assumption about how they're collecting the location data too. However, I find that if I switch to a different browser it rarely mentions my location, so it could be fingerprinting and pulling in info from a previous session on the same device where I mentioned my location.

      I think the larger issue here is that when you ask how it knows these things, it seems to have been instructed to lie and say it doesn't know and has just guessed, which seems extremely unlikely. That simply isn't acceptable, in my opinion.

    • simianwords 21 hours ago

      just ask it not to?

  • muzani 19 hours ago

    "These are all new chats."

    Bear in mind that it shares memory from previous chats.

    There are at least two types: one saves things you tell it; another queries recent chats.

    There also seems to be a third kind of memory used when it does searches, possibly related to Atlas. I've tried to clear bad entries from it (it gets my name wrong), but they don't show up in the other two.
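
    A toy model of the two documented memory types described above (the names and structure are my assumption, not OpenAI's actual implementation):

```python
from collections import deque

class MemorySketch:
    """Toy model of two memory types: explicitly saved facts,
    plus a bounded window over recent chats. Pure assumption,
    just to illustrate the behavior described above."""

    def __init__(self, recent_limit: int = 5):
        self.saved_facts: list[str] = []  # type 1: facts you told it to remember
        # type 2: rolling window of recent-chat summaries
        self.recent_chats: deque = deque(maxlen=recent_limit)

    def save_fact(self, fact: str) -> None:
        self.saved_facts.append(fact)

    def log_chat(self, summary: str) -> None:
        self.recent_chats.append(summary)

    def recall(self) -> list[str]:
        # A "new chat" can still see both stores, which is why
        # fresh sessions aren't clean slates.
        return self.saved_facts + list(self.recent_chats)
```

    Under this model, even a brand-new chat starts with `recall()` output available to it, unless both stores are actually disabled.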

    • I_am_tiberius 18 hours ago

      No, when memory is disabled this shouldn't be the case.

  • theredknight11 16 hours ago

    For my work, I spoke with OpenAI representatives directly and we talked about privacy. They assured me and my colleague that no one could see our chats, something like "you don't have to worry that your boss can read your chats and think you're dumb."

    My colleague (a data scientist) asked, "What if we wanted to study people's prompts to teach them better prompting methods?" And they did a 180 on the spot: "Oh yeah, we can get you that. No problem."

    Of course, this was for enterprise ChatGPT, but my impression was very much that they're run like a startup: tell the customer whatever you need to in order to make money, since they're burning through so much cash.

  • I_am_tiberius 18 hours ago

    I said it before and I'll say it again: OpenAI is the biggest privacy disaster ever. Sam Altman should be ashamed, because none of this is necessary.

  • accrual a day ago

    Interesting examples, thank you for sharing. My interactions with GPT-5.1 have shown knowledge of past interactions, but I haven't explicitly prohibited that through settings as you have.

  • nacozarina 20 hours ago

    They have de facto immunity and have been openly violating laws for years; why would you expect them not to lie about this?