Building Private Processing for AI Tools on WhatsApp

(engineering.fb.com)

22 points | by 3s 2 days ago

15 comments

  • nl 14 hours ago

    Broadly similar to what Apple is trying with their Private Cloud Compute work.

    It's a great idea but the trust chains are so complex they are hard to reason about.

    In "simple" public-key encryption, reasonably technically literate people can reason about it ("not your keys, not your X"), but with private compute there are many layers, each of which works in a fairly complex way, and AFAIK you always end up having to trust a root source of trust that certifies the trusted device.
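
    To make the layering concrete, here is a rough sketch of the shape of it - every name below is hypothetical, not Apple's or Meta's actual API: the client can only trust the enclave by walking a certificate chain back to a vendor root it has pinned, so that root remains an unavoidable trust anchor.

        # Hypothetical sketch of the trust chain behind "private compute".
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Cert:
            subject: str
            signed_by: Optional["Cert"]  # issuer; None only for the root

        @dataclass
        class Attestation:
            measurement: str  # hash of the code the TEE claims to run
            leaf: Cert        # certificate vouching for this enclave

        PINNED_ROOT = Cert("vendor-root", None)  # baked into the client
        EXPECTED_MEASUREMENT = "sha256:0f3a..."  # hypothetical published build hash

        def signature_valid(cert: Cert) -> bool:
            # Stand-in: a real client verifies an actual cryptographic
            # signature made by the issuer's key here.
            return cert.signed_by is not None

        def trust_enclave(att: Attestation) -> bool:
            # Walk leaf -> root; any broken link breaks the whole chain.
            cert = att.leaf
            while cert is not PINNED_ROOT:
                if not signature_valid(cert):
                    return False
                cert = cert.signed_by
            # Even with a valid chain, you are still trusting that the
            # vendor root only certifies honest devices and that the
            # published measurement matches an audited binary.
            return att.measurement == EXPECTED_MEASUREMENT

        ca = Cert("attestation-ca", PINNED_ROOT)
        print(trust_enclave(Attestation("sha256:0f3a...", Cert("enclave-7", ca))))  # True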

    It's good in the sense that it is trust minimization, but it's hard to explain, and the cynicism (see HN comments along the lines of "you can't trust it because big tech/gov interference, etc.") means I am sadly pessimistic about the uptake.

    I wish it wasn't so though. The cynicism in particular I find disappointing.

    • squigz 12 hours ago

      Why do you find it disappointing? It seems quite appropriate to me.

      • brookst 11 hours ago

        Not GP, but to me it is also disappointing because it’s just the old “if seatbelts don’t prevent car accidents, why wear them?” argument.

        On the one hand you have systems where anyone at any company in the value chain can inspect your data ad hoc, with no auditing or notification.

        On the other hand, you have systems that prevent casual security / privacy violations but could still be subverted by a state actor or the company that has the root of trust.

        Neither is perfect. But it’s cynical and nihilistic to profess to see no difference.

        Risk reduction should be celebrated. Those who see no value in it come across as zealots.

  • grugagag 14 hours ago

    > We’re sharing an early look into Private Processing, an optional capability that enables users to initiate a request to a confidential and secure environment and use AI for processing messages where no one — including Meta and WhatsApp — can access them.

    What is this, and what is it supposed to mean? I have a hard time trusting these companies with any privacy, and while this wording may be technically correct, they’ll likely extract all the meaning from your communication; they’d probably even run some AI-enabled surveillance service.

    • justanotheratom 14 hours ago

      I don't understand the knee-jerk skepticism. This is something they are doing to gain trust and encourage users to use AI on WhatsApp.

      WhatsApp did not use to be end-to-end encrypted; then in 2021 it was - a step in the right direction. Similarly, AI interaction in WhatsApp today is not private, which is something they are trying to improve with this effort - another step in the right direction.

      • mhio 12 hours ago

        What's the motive "to gain trust and encourage users to use AI on WhatsApp"? Meta aren't a charity. You have to question their motives, because their motive is to extract value out of users who don't pay for the service, and I would say that WhatsApp has proven to be a harder place to extract that value than their other ventures.

        BTW, WhatsApp implemented the Signal protocol around 2016.

        • justanotheratom 11 hours ago

          "motive is to extract value out of their users who don't pay for a service" that is called a business.

          If you find something deceitful in the business practice, that should certainly be called out and even prosecuted. But I don't see why an effort to improve privacy has to get skeptical treatment just because "big business bad," blah blah.

          • echelon_musk 2 hours ago

            Privacy was reduced from where it already stood by the introduction of an AI assistant to an E2E messaging app.

            Had they not included it in the first place, they would not then have to 'improve privacy' by reworking the AI.

            I agree with OP and am highly sceptical of Meta's motives.

    • ipsum2 14 hours ago

      Did you read the next paragraphs? They literally describe the details. I would quote the parts that answer your question, but I would be quoting the entire post.

      > This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.
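
      For the mechanics, a minimal sketch of what "process requests in a TEE" means on the wire - hypothetical names, since WhatsApp's actual client code and protocol are not public in this form: verify attestation first, then encrypt the request to a key only the attested enclave holds, so the infrastructure relaying it sees ciphertext only.

          # Toy end-to-enclave flow using NaCl sealed boxes (pip install pynacl).
          from nacl.public import PrivateKey, PublicKey, SealedBox

          # Enclave side: this key never leaves the TEE in a real system.
          enclave_key = PrivateKey.generate()

          def attested_public_key() -> bytes:
              # Stand-in: really this key arrives wrapped in an attestation
              # document that the client verifies before trusting it.
              return bytes(enclave_key.public_key)

          # Client side: seal the request to the attested enclave key.
          def build_request(messages: list[str]) -> bytes:
              pubkey = PublicKey(attested_public_key())
              plaintext = "\n".join(messages).encode()
              # The operator relaying this blob only ever sees ciphertext.
              return SealedBox(pubkey).encrypt(plaintext)

          # Enclave side again: decrypt and process inside the TEE.
          def process(blob: bytes) -> str:
              plaintext = SealedBox(enclave_key).decrypt(blob).decode()
              return f"summary of {len(plaintext.splitlines())} messages"

          print(process(build_request(["hi", "lunch at 1?", "running late"])))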

      • ATechGuy 10 hours ago

        A few startups [1,2] also offer infra for private AI based on confidential computing from Nvidia and Intel/AMD.

        1. https://tinfoil.sh
        2. https://www.privatemode.ai

      • brookst 11 hours ago

        We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail; yes, it conforms to cryptographic norms; yes, real people work on it, and some of us know some of them...

        ...but how can FB prove it isn’t all a smokescreen, and that requests aren’t printed out and faxed to evil people? They can’t, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.

    • asadm 13 hours ago

      I mean, you are not forced to use it?

      If a company is trying to move their business to be more privacy-focused, at least we can be non-dismissive.

  • cutler 4 hours ago

    Love the Accept-only cookie notice. A real trust builder.

  • 2Gkashmiri 12 hours ago

    So this is FB explaining how they move your content from E2EE to the cloud and back? So not even FB knows the content?

    Simple question: what if CSAM is sent to the AI? Would it stop, report to the authorities, or allow processing? Same for other bad stuff.

    • brookst 11 hours ago

      See: how Apple tried to solve this and generated massive outrage.