Sibling comments point out (and I believe them; corrections are welcome) that all that theater is still no protection against Apple themselves, should they want to subvert the system in an organized way. They’re still fully in control. There is, for example, as far as I understand it, still plenty of attack surface for them to run different software than they say they do.
What they are doing here is, of course, making any kind of subversion a hell of a lot harder, and I welcome that. It serves as a strong signal that they want to protect my data. To me this definitely makes them the most trusted AI vendor at the moment, by far.
As soon as you start going down the rabbit hole of state sponsored supply chain alteration, you might as well just stop the conversation. There's literally NOTHING you can do to stop that specific attack vector.
History has shown, at least to date, that Apple has been a good steward. They're as good a vendor to trust as anyone. Given that a huge portion of their brand has been built on "we don't spy on you", the second they do, they lose all credibility, so they have a financial incentive to keep protecting your data.
> There's literally NOTHING you can do to stop that specific attack vector.
E2E. Might not be applicable for remote execution of AI payloads, but it is applicable for most everything else, from messaging to storage.
Even if the client hardware and/or software is also an actor in your threat model, that can be eliminated or at least mitigated with at least one verifiably trusted piece of equipment. Open hardware is an alternative, and some states build their entire hardware stack to eliminate such threats. If you have at least one piece of trusted equipment, mitigations are possible (e.g. an external network filter).
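As a rough sketch of the E2E idea being argued for here: only the endpoints hold the keys, so any relay or storage server in the middle sees only opaque ciphertext. This is a toy, stdlib-only construction for illustration; a real system would use an audited AEAD cipher such as AES-GCM, not a hand-rolled one.

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key || nonce || counter.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # Encrypt-then-MAC: the tag lets the recipient detect tampering in transit.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

# Keys exist only at the endpoints; the server relays `blob` without being able to read it.
enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = encrypt(enc_key, mac_key, b"meet at noon")
assert decrypt(enc_key, mac_key, blob) == b"meet at noon"
```

The point of the sketch: nothing the relay stores or forwards lets it recover the plaintext, which is why E2E works for messaging and storage even when the server is untrusted.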
E2E does not protect metadata, at least not without significant overheads and system redesigns. And metadata is as important as data in messaging and storage.
> And metadata is as important as data in messaging and storage.
Is it? I guess this really depends. For E2E storage (e.g. as offered by Proton with openpgpjs), what metadata would be of concern? File size? File type cannot be inferred, and file names could be encrypted if that's a threat in your model.
The most valuable "metadata" in this context is typically with whom you're communicating/collaborating and when and from where. It's so valuable it should just be called data.
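To make that concrete, here is a hypothetical sketch of what a relay or storage server can still log even when the payload itself is end-to-end encrypted. The field names are illustrative, not any vendor's actual schema.

```python
from datetime import datetime, timezone

def server_view(sender_ip: str, sender_id: str, recipient_id: str,
                ciphertext: bytes) -> dict:
    # Everything below is visible to the server despite E2E encryption.
    return {
        "sender": sender_id,            # who is talking...
        "recipient": recipient_id,      # ...to whom
        "source_ip": sender_ip,         # from roughly where
        "timestamp": datetime.now(timezone.utc).isoformat(),  # and when
        "size_bytes": len(ciphertext),  # payload size can fingerprint files
        "payload": "<opaque ciphertext>",  # the only thing E2E actually hides
    }

record = server_view("203.0.113.7", "alice", "bob", b"\x8f" * 2048)
assert record["recipient"] == "bob" and record["size_bytes"] == 2048
```

A log of such records over time is exactly the social graph and activity pattern being described above, which is why hiding it requires extra machinery (padding, mixing, sealed sender) beyond plain E2E.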
How is this relevant to the private cloud storage?
> History has shown, at least to date, Apple has been a good steward.
*cough* HW backdoor in iPhone *cough*
Apple have name/address/credit-card/IMEI/IMSI tuples stored for every single Apple device. iMessage and FaceTime leak numbers, so they know who you talk to. They have real-time location data. They get constant pings when you do anything on your device. Their applications bypass firewalls and VPNs. If you don't opt out, they have full unencrypted device backups, chat logs, photos and files. They made a big fuss about protecting you from Facebook and Google, then built their own targeted ad network. Opting out of all tracking doesn't really do that. And even if you trust them despite all of this, they've repeatedly failed to protect users even from external threats. The endless parade of iMessage zero-click exploits was ridiculous and preventable, CKV only shipped this year and isn't even on by default, and so on.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
As to the trust loss, we seem to be past that point already. It seems to me they are now in the stage of faking it.
...in certain places: https://support.apple.com/en-us/111754
Just make absolutely sure you trust your government when using an iDevice.
>Just make absolutely sure you trust your government
This sentence stings right now. :-(
> that all that theater is still no protection against Apple themselves
There is such a thing as threat modeling. The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
You're getting taken in by a misdirection.
>for them to run different software than they say they do.
They don't even need to do that. They don't need to do anything different than they say.
They already are saying only that the data is kept private from <insert very limited subset of relevant people here>.
That opens the door wide for them to share the data with anyone outside of that very limited subset. You just have to read what they say, and also read between the lines. They aren't going to say who they share with, apparently, but they are going to carefully craft what they say so that some people get misdirected.
Yep. If you don't trust Apple with your data, don't buy a device that runs Apple's operating system.
That really is not a valid argument, since Apple have grown to be "the phone".
Also, many are unaware of, or unable to determine, who or what will own their data before purchasing a device. One only accepts the privacy policy after tapping sign in... and is it really practical to expect people to work this out by themselves when buying a phone? That's why regulation needs to step in and ensure the right defaults are in place.
But if you don't trust Google with your data, you can buy a device that runs Google's operating system, from Google, and flash somebody else's operating system onto it.
Or, if you prefer, you can just look at Google's code and verify that the operating system you put on your phone is made with the code you looked at.
That is good in theory. In reality, anyone you engage with who uses an Apple device has leaked your content and information to Apple. I have high confidence that Apple could easily build profiles on people who do not use its devices, simply through this indirect exposure of having to communicate with Apple device owners.
The same applies to Google. There is no way to prevent indirect data sharing with Apple or Google.
Yes, if your threat model includes the provider of your operating system, then you cannot win. It's really that simple. You fundamentally need to trust your operating system, because it can simply lie to you.
Depending on your social circle such exposure is not so hard to avoid. Maybe you cannot avoid it entirely but it may be low enough that it doesn't matter. I have older relatives with basically zero online presence.
Define "content / information".
Exactly. You can only trust yourself [1] and should self-host.
[1] https://www.youtube.com/watch?v=g_JyDvBbZ6Q
That is an answer for an incredibly tiny fraction of the population. I'm not so much concerned about myself as about society in general, and self-hosting just is not a viable solution to the problem at hand.
To be fair, it's much easier than one might imagine (try ollama on macOS, for example). In the end, Apple wrote a lot of long-winded text, but the summary is "you have to trust us."
I don't trust Apple - in fact, even the people we trust the most have told us soft lies here and there. Trust is like a limit - you can only ever get to "almost", and "almost" is effectively zero.
So you can only trust yourself. Period.
There are multiple threat models where you can't trust yourself.
Your future self definitely can't trust your past self. And vice versa. If your future self has a stroke tomorrow, did your past self remember to write a living will? And renew it regularly? Will your future self remember that password? What if the kid pukes on the carpet before your past self writes it down?
Your current self is not statistically reliable, either. Andrej Karpathy administered an ImageNet challenge to himself, with his brain as the machine: he got about 95%.
I'm sure there are other classes of self-failure.
The odds that I make a mistake in my security configuration are much higher than the odds that Apple is maliciously backdooring themselves.
The PCC model doesn't guarantee they can't backdoor themselves, but it does make it more difficult for them.
I don't even trust myself, I know that I'm going to mess up at some point or another.
Nobody promised you that real solutions would work for everyone. Performing CPR to save a life is something "an incredibly tiny fraction of the population" is trained on, but it does work when circumstances call for it.
It sucks, but what are you going to do for society? Tell them all to sell their iPhones, punk out the NSA like you're Snowden incarnate? Sometimes saving yourself is the only option, unfortunately.
Can you trust the hardware?
If you make your own silicon, can you trust that the sand hasn't been tampered with to breach your security?
There's a niche industry that works on that problem: looking for evidence of tampering down to the semiconductor level.
Notably https://www.bunniestudios.com/blog/2020/introducing-precurso...
It's not that they couldn't, it's that they couldn't without a watcher knowing. And frankly this tradeoff is neither new, nor unacceptable in anything other than a "Muh Apple" sense.
Indeed, the attestation process, as described by the article, is geared more towards unauthorized exfiltration of information or injection of malicious code. However, "authorized" activities are fully supported, where "authorized" means signed by Apple. So, ultimately, users need to trust that Apple is doing the right thing, just like any other company. And yes, it means they can be forced (by law) not to do the right thing.
If Apple controls the root of trust (the private keys in the CPU or security processor used to check the enclave, similar to how Intel and AMD do it with SEV-SNP and TDX), then technically it's a "trust us" situation, since they presumably use their own ARM silicon for that.
Harder to attack, sure, but no outside validation. Apple's not saying "we can't access your data," just "we're making it way harder for bad guys (and rogue employees) to get at it."
I don't think they do. Your phone cryptographically verifies that the software running on the servers is what it says it is, and you can't pull the keys out of the secure enclave. They also had independent auditors go over the whole thing and publish a report. If the chip is disconnected from the system it will dump its keys and essentially erase all data.
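The verification flow described above can be sketched roughly as follows. This is an illustrative toy, not Apple's actual protocol: HMAC stands in for the real asymmetric signature produced inside the secure element (where the client would hold only a public verification key), and all names are hypothetical.

```python
import hashlib, hmac, os

# Hashes of server builds the vendor has published for independent audit.
PUBLISHED_MEASUREMENTS: set[str] = set()

def measure(image: bytes) -> str:
    # The "measurement" is a digest of the software image actually loaded.
    return hashlib.sha256(image).hexdigest()

def server_attest(sealed_key: bytes, image: bytes, nonce: bytes):
    # The secure element signs (nonce, measurement); the nonce prevents replay.
    m = measure(image)
    sig = hmac.new(sealed_key, nonce + m.encode(), hashlib.sha256).digest()
    return m, sig

def client_accepts(sealed_key: bytes, nonce: bytes, m: str, sig: bytes) -> bool:
    # Accept only a fresh, valid signature over a measurement that matches
    # a published (auditable) build.
    fresh = hmac.compare_digest(
        sig, hmac.new(sealed_key, nonce + m.encode(), hashlib.sha256).digest())
    return fresh and m in PUBLISHED_MEASUREMENTS

image = b"pcc-release-1.0"
PUBLISHED_MEASUREMENTS.add(measure(image))
key, nonce = os.urandom(32), os.urandom(16)

m, sig = server_attest(key, image, nonce)
assert client_accepts(key, nonce, m, sig)        # published build: accepted
m2, sig2 = server_attest(key, b"backdoored-build", nonce)
assert not client_accepts(key, nonce, m2, sig2)  # unpublished build: rejected
```

The sketch shows why the client refuses unlisted builds; what it cannot show is anything about the hardware that holds the signing key, which is where the rest of this thread's argument picks up.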
But since they also control the phone's operating system they can just make it lie to you!
That doesn't make PCC useless, by the way. It clearly establishes that Apple misled customers if there is any intentionality in a breach, or that Apple was negligent if they do not immediately provide remedies on notification of a breach. But that's much more a "raising the cost" kind of thing than a technical exclusion. If Apple, as an organisation, wants to get at your data, and you use an iPhone, they absolutely can.
How do you know the root enclave key isn't retained somewhere before it is written? You're still trusting Apple.
Key extraction is difficult but not impossible.
> Key extraction is difficult but not impossible.
Refer to the never-ending clown show that is Intel's SGX enclave for examples of this.
https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
I don't understand how publishing cryptographic signatures of the software is a guarantee. How do they prove the server isn't keeping a copy of the code to make signatures from, while actually running a malicious binary?
The client will only talk to servers that can prove they're running the same software as the published signatures.
https://security.apple.com/documentation/private-cloud-compu...
And the servers prove that by relying on a key stored in secure hardware. And that secure hardware is designed by Apple, who has a specific interest in convincing users of that attestation/proof. Do you see the conflict of interest now?
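To make the conflict of interest concrete, here is a toy illustration: whoever holds the root signing key can produce a "valid" attestation for any measurement, including one the hardware never actually ran. HMAC stands in for the real signature scheme, and all names are hypothetical.

```python
import hashlib, hmac

# Assumption for illustration: a single root key, held by the silicon vendor.
ROOT_KEY = b"held-by-the-silicon-vendor"

def sign(measurement: str) -> bytes:
    return hmac.new(ROOT_KEY, measurement.encode(), hashlib.sha256).digest()

def verify(measurement: str, sig: bytes) -> bool:
    # The verifier's only trust anchor is the vendor's own key material.
    return hmac.compare_digest(sig, sign(measurement))

honest = hashlib.sha256(b"audited-image").hexdigest()
forged = hashlib.sha256(b"malicious-image").hexdigest()

# Both verify, because the same party controls both the key and the claim.
assert verify(honest, sign(honest))
assert verify(forged, sign(forged))
```

In other words, the cryptography proves consistency with the key holder's claims; it cannot prove anything about a key holder who decides to lie.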
It was always "trust us". They make the silicon, and you have no hope of meaningfully reverse engineering it. Plus, iOS and macOS have silent software update mechanisms, and no update transparency.
Hey, can you help me understand what you mean? There's an entry about "Hardware Root of Trust" in that document, but I don't see how that means Apple is avoiding stating "we can't access your data" - the doc says it's not exportable.
"Explain it like I'm a lowly web dev"
https://x.com/_saagarjha/status/1804130898482466923
https://x.com/frogandtoadbook/status/1734575421792920018
every entity you hand data to other than yourself is a "trust us" situation
+1 on your comment.
I think having a description of Apple's threat model would help.
I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.
Their threat model is described in their white papers.
But essentially it is trying to get to the end result of “if someone commandeers the building with the servers, they still can’t compromise the data chain even with physical access”
They define their threat model in "Anticipating Attacks"
This is probably the best way to do cloud computation offloading, if one chooses to do it at all.
What's desperately missing on the client side is a switch to turn this off. At the moment, it's really opaque which Apple Intelligence requests are processed locally and which are sent to the cloud.
The only sure way to know/prevent it a priori is to... enter flight mode, as far as I can tell?
Retroactively, there's a request log in the privacy section of System Preferences, but it's really convoluted to read (due to all of the cryptographic proofs, which I have absolutely no tools to verify at the moment, and honestly no interest in verifying).
Love this, but as an engineer, I would hate to get a bug report in that prod environment: 100% "doesn't work on my machine" and 0% reproducibility.
Usually quite common when doing contract work, where externals have no access to anything besides a sandbox to play around with their contribution to the whole enterprise software jigsaw.
That's a strange point of view. Clearly one shouldn't use private information for testing in any production environment.
As a person who works on this kind of stuff, I know what they mean. It's very hard to debug things totally blind.
>No privileged runtime access: PCC must not contain privileged interfaces that might enable Apple site reliability staff to bypass PCC privacy guarantees.
What about other staff and partners and other entities? Why do they always insert qualifiers?
Edit: Yeah, we know why. But my point is they should spell it out, not use wording that is on its face misleading or outright deceptive.
For the experts out there, how does this compare with AWS Nitro?
I will just use it. It's Apple; all I need is to see the verifiable privacy thing, and I'll let the researchers tell me about red flags. You go on Copilot, it says your code is private? Good luck.
I've got a fully private LLM that's pretty good at coding built right into my head - I'll stick with that, thanks.
I'm glad that more and more people are starting to see through the thick Apple BS (in these comments). I don't expect them to back down from this, but I hope there is enough pushback that they'll be forced to add a big opt-out for all cloud compute, however "private" they make it out to be.
Please don't fall for the cheap "Apple is pro privacy" veneer.
They cannot be trusted any more. These "Private Compute" schemes are blatant lies, maybe even scams at this point.
Learn more — https://sneak.berlin/20201112/your-computer-isnt-yours/
The core of this article, if I understand it correctly, is that macOS pings Apple to make sure that apps you open are safe before opening them. This check contains some sort of unique string about the app being opened, and then there is a big leap to "this could be used by the government"
Is this the ideal situation? No, probably not. Should Apple do a better job of communicating that this is happening to users? Yes, probably so.
Does Apple already go overboard to explain their privacy settings during setup of a new device (the pages with the blue "handshake" icon)? Yes. Does Apple do a far better job of this than Google or Microsoft (in my opinion)? Yes.
I don't think anyone here is claiming that Apple is the best thing to ever happen to privacy, but when viewed via the lens of "the world we live in today", it's hard to see how Apple's privacy stance is a "scam". It seems to me to be one of the best or most reasonable stances for privacy among all large-cap businesses in the world.
Have you read the linked article?
I really don’t care at all about this, as the interactions I’d have would be via speech to text, which sends all transcripts to Apple without the ability to opt out.