49 comments

  • another-dave 3 hours ago

    > Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.
    >
    > Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.
    >
    > Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic's chatbot Claude related to the case.
    >
    > No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.

    If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.

    Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts, or a draft email in Gmail, constitute "revealing information from a lawyer to a third party"?

    What if Google has enabled AI features on these? Feels like this area really needs clarity for users rather than waiting for courts to rule on it.

    • jubilanti 2 hours ago

      > If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.

      Absolutely wrong in the U.S. The police can't just break into your home and demand it, but a judge can 100% mandate discovery or a subpoena if there is reason to believe that evidence exists which is relevant to the case.

      The 4th amendment prohibits UNREASONABLE search and seizure, and we let judges make that determination. You never have absolute privacy rights.

      • nostrademons 22 minutes ago

        Note that the judge is bound by precedent and statute as to what "unreasonable" means; they can't just make it up as they go along unless there is no precedent on point. Otherwise the ruling can be reversed on appeal.

        I was on a jury recently where we had to swap out judges in the last couple of days of the trial. The reason was that the judge had been assigned another case in which the defendant had not waived his right to a speedy trial. The judge wanted to finish his existing case first, the defense lawyers said "You can't do that", the judge looked it up and found that they were indeed right, so off he went to start the new case and handed the existing one off to a colleague. In my experience judges really do take the law seriously - that's how they get to be judges.

      • reactordev 2 hours ago

        This. All of your rights are up for debate before a judge. There are only a few you can still exercise if a judge wants something from you, but ultimately if a judge decides it’s relevant to the case, it’s relevant to the case and you must comply. Or be held in contempt. Or praised, with a Senate hearing to boot. I’m confused about how our legal system actually functions now, but that is how it’s supposed to work: if a judge decides to include it, it’s in. Go get it.

    • phire 3 hours ago

      > If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.

      There is some protection of personal private documents for civil cases. But for a criminal case, there is no 4th or 5th amendment protection for stuff you wrote in your diary.

    • jcranmer 2 hours ago

      Reading the ruling in more detail, this is definitely a "not even close" case.

      First off, the Fifth Amendment right to not self-incriminate is rather narrower than you might expect. With regard to document production, it only privileges you from having to produce documents if the act of producing those documents would in effect incriminate you. So if you tell people "I've got a diary where I've been keeping track of all the crimes I've committed..." the government can force you to turn over that diary.

      Second, the default assumption whenever you send something to another person is that it's unprivileged communication. IANAL, but even using cloud storage for things I'd want to remain privileged is something I'd want to ask a lawyer about before relying on. Although that's also as much because the default privacy policy of most services is "fuck you."

      Which is what happened here. Claude's privacy policy says that Anthropic reserves the right to share your chats with third parties for various reasons, which means you have no reasonable expectation of privacy in those communications in the first place and automatically defeats any other confidential privileges. What happened is therefore little different from the defendant texting his attorney's responses to his friends, which is a fairly time-worn way of defeating attorney-client privilege.

      Seems an opportune time to remember that every day is STFU Friday. And, to quote The Wire, is you taking notes on a criminal fucking conspiracy?

      • SoftTalker 32 minutes ago

        You cannot be compelled to provide testimonial evidence that might incriminate you. Physical evidence, documents, computer files, anything not under attorney-client privilege is fair game for a subpoena or warrant.

      • ludicrousdispla an hour ago

        What if I hire a lawyer to use Claude for me instead? Seems like that is space for a disruptive startup.

    • ndr 2 hours ago

      Consider AI prompts no different from Google searches: they can be subpoenaed.

      And consider local LLM logs no different from your txt file or command history on your computer. Could still be requested for discovery.

    • wat10000 12 minutes ago

      I don't think this is any different from other cloud-based software. Cloud providers can be compelled to turn over your data, as long as they're actually capable of doing so. If you don't want your data being snarfed up from a cloud provider and used in court, then only use cloud providers with end-to-end encryption, or better yet don't put your data in cloud providers at all.

      The only reason this ruling is even remotely interesting is because people don't understand computer systems, and chatbots feel different. For the technologically minded, it should be pretty obvious that typing into a chatbot is no different from typing into a Google Doc, and that the data in both can be available to the legal system without the user's involvement or consent. But most people aren't technologically minded and may not have realized that all of their data is being saved and made available like that.

    • rcxdude 3 hours ago

      >If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.

      Is that true? I would expect that any notes I have in any form could be requested during discovery (attorney-client privilege being one of the few exceptions, and narrower than people assume).

  • kstrauser 3 hours ago

    It would never occur to me that they couldn’t. From a legal POV, that sounds a lot like using your search history against you.

  • neya 2 hours ago

    Of all the words to use in the title, they chose "prompts" when talking about AI. I had to read it twice because, if you read "prompts" in its AI sense, the whole title becomes gibberish.

  • segmondy 3 hours ago

    This is why you should have local models. Local models are good enough for private chats; they might not be as good as the cloud models for precise technical work, but for general sensitive chat you should definitely stick to local.

    • drak0n1c 2 hours ago

      Yes, local for anything that can run locally. For higher-end model needs there are privacy platforms like Venice (https://venice.ai/privacy) with ZDR legal contracts and multiple E2EE options for their open-weight models. The OpenAI/Anthropic/Google models are also available through them, but at least your identity is anonymized, though the contents of your prompt could still be stored by the destination company.

      • bossyTeacher an hour ago

        I might be missing something here but how does this change anything?

        'No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote'.

        A local model or Venice are still platforms, just local.

        Nerd smarts seldom survive real world smarts. Reminds me of this: https://xkcd.com/538/

        • tokai an hour ago

          >A local model or Venice are still platforms, just local.

          Sure but you can delete the logs yourself.

          • wat10000 11 minutes ago

            Just make sure you do it as a matter of routine policy, rather than in response to a legal issue, lest you get hit with a destruction of evidence charge.
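            As a sketch of "routine policy" deletion (paths and the retention window here are placeholders, not a recommendation for any particular tool): the key is that pruning runs on a fixed schedule, e.g. from cron, rather than ad hoc after a dispute arises.

```shell
# Sketch only: scheduled pruning of local chat logs. The directory, file
# pattern, and retention window are hypothetical -- adjust to your setup.
prune_logs() {
    # $1 = log directory, $2 = retention window in days
    # Deletes regular files older than the retention window.
    find "$1" -type f -mtime +"$2" -delete
}

# Example cron entry (hypothetical script path), running nightly at 3am:
#   0 3 * * * /usr/local/bin/prune_logs.sh
```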

  • flufluflufluffy 2 hours ago

    This seems so obvious to me. Why would you ever put information regarding a legal case you’re party to into an AI chat?

  • kube-system an hour ago

    This seems plainly obvious -- chatbots are not attorneys. Why would they be privileged as such? You don't get attorney-client privilege when you put your legal questions into Google, or when you send them to anyone or anything other than an attorney...

  • anthonyskipper 3 hours ago

    The obvious business opportunity here is for some lawyer to start running an AI service to do these kinds of things. Anyone who subscribes is a client of the lawyer, who owns the chatbot infrastructure, which would be protected under attorney client privilege.

    • dlcarrier an hour ago

      It would have to be communications, to be protected.

    • lukan 2 hours ago

      The business opportunity is what they are advertising here: communication with lawyers is protected, so keep paying real lawyers for every question and don't try it yourself with AI, which is unfortunately not protected.

    • OutOfHere 2 hours ago

      There is such a thing as anonymous chat, facilitated through local LLMs or through services paid for with cryptocurrency.

    • jcranmer 2 hours ago

      ... that is not how attorney-client privilege works.

      • airstrike 2 hours ago

        just write PRIVILEGED AND CONFIDENTIAL in the system prompt

  • gdulli an hour ago

    An aspect of AI that's really underdiscussed is just the basic switch from doing all your searches logged out to now being forced to be logged in somewhere. That much alone is disqualifying for me.

    • dlcarrier an hour ago

      Just because you're not logged in doesn't mean that your searches aren't being stored and monitored nor that they can't be subpoenaed. It is possible to be pretty anonymous on the internet, but it's not easy.

    • mywittyname an hour ago

      You can use an offline model via ollama. I'm sure better tools will emerge for less technically-inclined individuals.

      Seems like there might be demand for chat clients with end-to-end encryption.
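      For the technically inclined, a minimal sketch of what "offline via ollama" looks like in practice, using ollama's default local HTTP endpoint (the model name and prompt below are placeholders; this assumes ollama is installed, running, and the model has been pulled):

```python
import json
import urllib.request

# ollama serves a local HTTP API on this port by default;
# the prompt never leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) a non-streaming generate request."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def generate(model: str, prompt: str) -> str:
    """Send the request to the local ollama server and return the reply text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running ollama instance):
#   print(generate("llama3", "Explain attorney-client privilege in one sentence."))
```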

  • pvtmert 2 hours ago

    People point out in sibling comments: would a phone call then be outside attorney-client privilege, since it goes through a "3rd party"? Maybe not the call itself, but the voicemail, for example; can it be "extracted" for the same purpose? Another way to make it safer would be sharing the "chat" with the lawyer, so that it becomes the medium of communication.

  • josefritzishere 3 hours ago

    Increasingly, AI seems to be mostly downside. A legal chatbot without attorney-client privilege also implies a medical chatbot may have no HIPAA protection. That renders the service unsafe and therefore unusable, and maybe more importantly... unsalable.

    • Tostino 3 hours ago

      This is a court issue, not a technical one. It has so many side effects that weren't thought through (using Gmail to draft a letter to your attorney when Gmail has AI editing enabled, for example...).

      Seems dumb, and like it will cause quite a few issues until it is overturned.

      • jcranmer 2 hours ago

        Non-lawyer discussing their lawyer's communications with a third party has defeated attorney-client privilege for eons, and that's basically what happened here. Especially when you're sharing those communications with a third party who explicitly told you that they will share those communications with the government if the government asks. There's no reason to overturn this.

        • mywittyname an hour ago

          Well, calling Claude a "third-party communique" here is the stretch.

          Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.

          We call them "chatbots" and anthropomorphize LLMs, but, despite the name of Claude's parent company, Claude is not a person.

          • jcranmer 2 minutes ago

            > Well, calling Claude a "third-party communique" here is the stretch.

            Why? The privacy policy explicitly says that when you're using it, you're sending your data to Anthropic.

            > Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.

            Very possibly, actually. At the very least, I wouldn't assume that it's okay to do that without first consulting with a lawyer. I do know of at least one feature in Office (desktop, not the web version) that prompted lawyers to say "if you don't roll this back, we cannot legally use your product anymore and maintain attorney-client privilege." It depends a lot on the actual contractual agreements in the terms of service and privacy policy, and while I know most people don't read them, those things actually matter!

      • nozzlegear 3 hours ago

        Why would this be overturned? AI is not a lawyer, it can't have attorney-client privilege. In your scenario, you're sending an email to the attorney, not chatting with a chatbot about your case.

        • ScoobleDoodle 2 hours ago

          They’re saying it’s equivalent to writing a letter to their attorney in an online (Google) doc. Does that Google doc fall under attorney client privilege?

          If so, then does a Google doc for your attorney written with Google AI auto enabled have attorney client privilege?

          If so, the AI chats for figuring out what you want to say to your attorney would seem to fall under the same category. And so there is either a contradiction or an unintended widening of scope.

          • nozzlegear an hour ago

            Ah, it sounds like I don't understand how the Google AI works here. I thought it was just some kind of glorified auto-correct or maybe phrase suggestion at best.

          • Tostino 2 hours ago

            Exactly what I was trying to get at. Thanks.

      • rcxdude 3 hours ago

        Intent matters, though. Accidentally divulging information you intended to send to your attorney is one thing, but if you are deliberately sending it somewhere else it's something different entirely.

    • bpodgursky 3 hours ago

      Why are you taking what is clearly a legal problem and making it about the technology? The law could simply grant attorney-client privilege to chatbots. Nobody is arguing the advice was bad or more expensive than a real lawyer.

      • mywittyname an hour ago

        Because it's the law misunderstanding technology.

        Chatbots are not people. They are computer programs. And there's no other realm I can think of where merely interfacing with a computer program breaks attorney-client privilege.

        It is equivalent to saying an email to your lawyer breaks privilege because you communicated with gmail. And it gets turbofucked when you consider that a program may be sending your information to an LLM. Would this same judge rule that having copilot installed in Outlook also breaks privilege because they "chatted with an outside party" while drafting an email (even if they didn't intend to send it to copilot)?

        I can't think of a reason this isn't about the technology.

      • rcxdude 3 hours ago

        The obligations placed on lawyers with regards to misrepresentation are a kind of check on the power of attorney-client privilege which would generally not exist for chatbots, so it's not obvious that this would be a good idea.

  • amelius 3 hours ago

    What if I let my claw bot chat online?

    • erdaniels 2 hours ago

      What about it? You are responsible for the software you run.

  • deadbabe 2 hours ago

    Could there be something like a VPN for AI models? VPP?

    You send a prompt to a neutral third party who then sends it to an AI model and then routes the response back to you?
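    As a sketch of the idea (purely illustrative: the header names are assumptions, and note the objection raised elsewhere in the thread applies here too: the relay operator still sees your prompts, so it is itself a subpoenaable third party), the relay's core job would just be stripping identifying metadata before forwarding:

```python
# Hypothetical "prompt relay": remove user-identifying metadata from a request
# before forwarding it upstream to an AI provider. Illustrative only.

# Headers that commonly tie a request to a particular user or origin.
IDENTIFYING = {"authorization", "cookie", "x-forwarded-for", "x-real-ip", "user-agent"}

def strip_identifying_headers(headers: dict) -> dict:
    """Return a copy of the headers with user-identifying fields removed."""
    return {k: v for k, v in headers.items() if k.lower() not in IDENTIFYING}
```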

  • OutOfHere 2 hours ago

    Of course lawyers want you to give up your power; they don't want you looking up information that they charge $500 an hour to give you.

    Meanwhile, sensible people conduct sensitive defense- and prosecution-related chats anonymously, via local LLMs or services paid for with cryptocurrency.

  • dlcarrier an hour ago

    tl;dr: privileged communications (see: https://law.usnews.com/law-firms/advice/articles/what-are-pr...) are protected only when they are communications between privileged parties. Everything else can be used against you in a court of law.

    • nothinkjustai 22 minutes ago

      Communication requires two separate parties. Where was the second party here? An AI isn’t a person, it’s a computer program.