> No trust required.
You also have to trust that SGX isn't compromised.
But even without that, the operator can log what goes into SGX and what comes out of it. That seems pretty important, given that the packets flowing in and out need to be internet-routable and necessarily have IP headers. Their ISP could log the traffic, even if they don't.
> Packet Buffering and Timing Protection: A 10ms flush interval batches packets together for temporal obfuscation
That's something, I guess. I don't think 10ms worth of timing obfuscation gets you very much though.
> This temporal obfuscation prevents timing correlation attacks
This is a false statement. It makes correlation harder but correlation is a statistical relationship. The correlations are still there.
(The quotes above are from their GitHub README: https://github.com/vpdotnet/vpnetd-sgx )
All that said, it is better to use SGX than to not use SGX, and it is better to use timing obfuscation than to not. Just don't let the marketing hype get ahead of the security properties!
I'm a huge fan of the technical basis for this. I want services to attest themselves to me so I can verify that they're running the source code I can inspect. And, well, the combination of founders here? Good fucking lord. I'm really fascinated to see whether we can generate enough trust in the code to be able to overcome the complete lack of trust that these people deserve. I can't imagine a better way to troll me on this point.
>the complete lack of trust that these people deserve
Yeah, I took one look at that and laughed. The CEO of Mt. Gox teaming up with the guy who sold his last VPN to an Israeli spyware company sounds like the start of a joke.
The SGX TCB isn't large enough to protect the really critical part of a private VPN: the source and destination of packets. Nothing stops them from sticking a user on their own enclave and monitoring all the traffic in and out.
Also, the README is full of AI slop buzzwords, which isn’t confidence-inspiring.
Also, it requires me to trust Intel, an American company, not to have a backdoor in SGX. That amounts to exactly no trust at all, so it's a pass from me, and probably from any non-US citizen.
The backdoor is as simple as “Intel has all the signing keys for the hardware root of trust so they can sign anything they want” :)
Okay I don't have much information about this whole attestation flow and one question boggles my mind. If someone can explain this in simple terms, I'd be thankful:
The post says build the repo and get the fingerprint, which is fine. Then it says compare it to the fingerprint that vp.net reports.
My question is: how do I verify the server is reporting the fingerprint of the actual running code, and not just returning the (publicly available) fingerprint that we get as a result of building the code in the first place?
"Ask a VP.NET server for the fingerprint it reports" is a little bit simplistic. The process for actually doing this involves you handing the server a random number, and it sending you back a signed statement including both the fingerprint and the random number you gave it. This prevents it simply reporting a fixed fingerprint statement every time someone asks. The second aspect of this is that the key used to sign the statement has a certificate chain that ties back to Intel, and which can be proven to be associated with an SGX enclave. Assuming you trust Intel, the only way for something to use this key to sign such a statement is for it to be a true representation of what that CPU is running inside SGX at the time.
How do I know I'm connecting to the WireGuard instance being attested and not something else? Could the host run one attestable instance, but then have users connect to a separate, malicious one?
The attestation covers the public key, so you would only connect to an instance which has that public key.
In order for a malicious instance to use the same public key as an attested one, they’d have to share the private key (for decryption to work). If you can verify that the SGX code never leaks the private key that was generated inside the enclave, then you can be reasonably sure that the private key can’t be shared to other servers or WG instances.
> how do I verify the server is reporting the fingerprint of the actual running code
Since this was answered already, I'll just say that I think the bigger problem is that we can't know if the machine that replied with the fingerprint from this code is even related to the one currently serving your requests.
Intel SGX/remote attestation for verifying that servers are running the code they say they are running is very interesting; I believe Signal talked about doing something similar for contact discovery. But at a base level it requires a lot of trust. How do I verify that the attestation I receive back is the one of the machine I am contacting? Can I know for sure that this isn't a compromised SGX configuration, since the system has been broken in the past? Furthermore, can I really be sure that I can trust SGX attestations if I can't actually verify SGX itself? Even if the code running under SGX is verifiable, as an ordinary user it's basically impossible to tell if there are bugs that would make it possible to compromise.
Personally I like the direction Mullvad went instead. I get that it means we really can't verify Mullvad's claims, but even in the event they're lying, at least we got some cool Coreboot ports out of it.
If you're really paranoid, neither this service nor Mullvad offers that much assurance. I like the idea of verifiability, but I believe the type of people who want it are looking to satisfy deeper paranoia than can be answered with just trusting Intel... Still, more VPN options that try to take privacy claims seriously is nothing to complain about.
Intel will not attest insecure configurations. Our client will automatically verify the attestation it receives to make sure the certificate isn't expired and has a proper signature under Intel's CA trust.
A lot of people have been attempting to attack SGX, and while there have been some successful attacks these have been addressed by Intel and resolved. Intel will not attest any insecure configuration as do other TEE vendors (AMD SEV, ARM Trustzone, etc).
> has a proper signature under Intel's CA trust.
That's a pretty big trust already. Intel has much to lose and would have no problem covering up bugs in SGX for the government, or certifying government malware.
And Intel has had a LOT of successful attacks against its products, and even with their CPUs they are known to prefer speed over security.
I really am interested in how this works. How can the client software verify that the SGX attestation actually is from the same machine as the VPN connection? I guess there's probably an answer here, but I don't know enough about SGX.
The way this works is by generating a private key inside the enclave and having the CPU attest its public key.
This allows generating a self-signed TLS certificate that embeds the attestation (under OID 1.3.6.1.4.1.311.105.1). A connecting client verifies the TLS certificate not via the standard chain of trust, but by reading the attestation, checking that the attestation itself is valid (properly signed, matching measured values, etc.), and checking that the enclosing TLS certificate is indeed signed with the attested key.
Intel includes a number of details inside the attestation, the most important being Intel's own signature over the attestation and the chain of trust to their CA.
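For the curious, the client-side check is mechanical enough to sketch in Python with the pyca/cryptography package. Here `parse_and_verify_quote` stands in for a real SGX quote verifier (e.g. Intel's DCAP quote verification library), and the report-data binding (SHA-256 of the certificate's public key, zero-padded to 64 bytes) is one common RA-TLS convention, not necessarily the exact one used here:

```python
import hashlib
from cryptography import x509
from cryptography.x509.oid import ObjectIdentifier
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

ATTESTATION_OID = ObjectIdentifier("1.3.6.1.4.1.311.105.1")

def verify_ra_tls_cert(pem: bytes, expected_mrenclave: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem)

    # Pull the raw attestation out of the custom certificate extension.
    ext = cert.extensions.get_extension_for_oid(ATTESTATION_OID)
    quote = parse_and_verify_quote(ext.value.value)  # placeholder: checks Intel's
    if quote is None:                                # signature and cert chain
        return False

    # The measured enclave must be the code we audited and built ourselves.
    if quote.mrenclave != expected_mrenclave:
        return False

    # Bind the attestation to *this* certificate: the quote's report data
    # must commit to the certificate's own public key.
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return quote.report_data == hashlib.sha256(spki).digest().ljust(64, b"\x00")
```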
Hmm. That really does seem pretty clever, and if it works the way it sounds like it does, obviously does resolve most of the concerns around how this would work and avoid obvious pitfalls. I still stand by the more obvious concern (paranoid people probably don't trust that Intel SGX isn't possible for powerful actors to compromise) but I do understand why one would pursue this avenue and find it valuable.
> Can I know for sure that this isn't a compromised SGX configuration, since the system has been broken in the past?
As far as I'm aware, no. Any network protocol can be spoofed, with varying degrees of difficulty.
I would love to be wrong.
Intel audits the configuration at system launch and verifies it runs something they know to be safe. That involves the CPU, CPU microcode, BIOS version and a few other things (SGX may not work if you don't have the right RAM, for example).
The final signature comes in the form of an x509 certificate signed with ECDSA.
What's more important to me is that SGX still has a lot of security researchers attempting (and currently failing) to break it further.
Depends on your threat model. You cannot, under any circumstance, prove (mathematically) that a peer is the only controller of a private key.
Again, I would love to know if I'm wrong.
The fact that no publicly disclosed threat actor has been identified says nothing.
Proving a negative, that information has not been shared, has been a challenge for as long as information has existed.
Are you suggesting a solution for this situation?
> Any network protocol can be spoofed, with varying degrees of difficulty.
Because of the cryptographic verifications, the communication cannot be spoofed.
Pray tell, how can a black-box peer validate it's not had its private keys cloned?
Because the code doesn't contain any instructions to clone private keys.
The trust chain ends with you trusting Intel to only make CPUs that do what they say they do, so that if the code doesn't say to clone a private key, it won't.
(You also have to trust the owners to not correlate your traffic from outside the enclave, which is the same as every VPN, so this adds nothing)
One of the many reasons I love Mullvad (been using it for 4 years now) is their simple pricing—$5/month whether you subscribe monthly, yearly, or even 10 years out.
I wanted to give your product a try, but the gap between the 1-month and 2-year plans is so big that a single month feels like a rip-off, while I’m not ready to commit to 2 years either.
On payments: for a privacy-focused product, Monero isn’t just a luxury, it’s a must (at least for me). A VPN that doesn’t accept Monero forces users into surveillance finance, since card and bank payments are legally preserved forever by processors. That means even if the VPN “keeps no logs,” the payment trail still ties your real identity to the service.
The chief privacy officer of the company is the moron that destroyed Freenode. Of course, Libera lives on, but it is a transition we could’ve done without.
Someone had a comment here that just disappeared, mentioning it's by Mark Karpelès (yes, the same guy from MtGox) and Andrew Lee. Why did that remark get deleted?
And that's the PIA Andrew Lee, not the Firebase Andrew Lee.
Also known as the freenode Andrew Lee/rasengan.
I'm assuming OP is Mark Karpelès; MagicalTux is a well-known username of his.
These VPNs-for-privacy are so bad. You give your credit card (verified identity), default gateway and payload to foreign soil and feel safe. On top of that, your packets' cleartext metadata identifies you with cryptographic accuracy.
On today's internet you just cannot have an exit IP which is not tied to your identity, payment information or physical location. And don't even mention Tor, pls.
You're welcome to use cryptocurrencies (we have a page for that), and our system only links your identity at connection time to ensure you have a valid subscription. Your traffic isn't tied to your identity, and you can look at the code to verify that.
Cryptocurrencies? Aka the least private form of transactions, where not only the sender and receiver know, but the whole block chain immutably stores for everyone else to view?
> Cryptocurrencies? Aka the least private form of transactions, where not only the sender and receiver know, but the whole block chain immutably stores for everyone else to view?
There are cryptocurrencies like ZCash, Monero, Zano, Freedom Dollar, etc. that are sufficiently private.
cryptocurrency != bitcoin. monero has solved this issue for almost a decade.
they just got exploited a few days ago
So what do you suggest instead?
Semi-serious: redeemable codes you can buy at a national retail chain, ostensibly using cash. It has the unfortunate side effect of training people to fall for scams, however. Bonus points if you can somehow make the codes worthless on the black market, I guess.
Some VPNs kind of offer that. I know at least one that sells physical redeemable cards you can buy - maybe physically in some countries, but in mine it's only available on Amazon. Even that option should be safe for keeping your identifying data from the VPN provider, even in the situation where they betray their promises on not holding onto your data. This is because Amazon can't know which exact code was sent out to you, and the provider in turn doesn't have any additional info to associate with that code, besides knowing if it's valid or not. The biggest downside is that now Amazon knows you paid for this service, even if they don't know the specifics.
There's also an option to just mail them cash, but some countries may seize all mailed cash if discovered.
You cannot offer a service for money with user anonymity. Your legal team knows that.
> And don't even mention TOR, pls.
What's your issue with tor?
Mullvad VPN allows you to pay with cash in an envelope, with no name etc.
Yes. But your cash payment gets attributed to your origin IP address, which you pay for with your identity attached.
They claim to allow anonymous sign-up and payments, but require an email, an account, a zip code and a name for crypto payments; fake info could be used, I guess. I tried ordering via crypto, but it constantly gives me this error: "Unable to load order information. Try again".
Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel. Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
> Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds and I wouldn't trust Intel.
Relying on "trust" in a security/privacy architecture isn't the right way to do things - which is what this solves. It removes the need to trust in a person or person(s) in most VPN company cases since they have many employees, and moves it to trusting in code.
> Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
The system is designed so that any change server-side will be immediately noticed by clients/users. As a result, these issues are sufficiently mitigated, and it instead allows people to take advantage of strong consumer and personal protection laws in the US.
This VPN requires you to trust in Intel - a failing US megacorp desperate for money - as well as the guy who destroyed Mt Gox and the guy who destroyed Freenode. Personally, I'd rather trust in Mullvad.
Being outside the US doesn’t shield you from it.
And worse, it is harder for the American government to eavesdrop on US soil than it is outside America.
Of course, if a national spying apparatus is after you, regardless of the nation, pretty good chance jurisdiction doesn’t matter.
> And worse, it is harder for the American government to eavesdrop on US soil
The GP mentioned Snowden and yet you say this. What material and significant changes have happened since 2013 to make this claim?
This is cool, and I'm glad to see someone doing this, but I also feel obligated to mention that you can also just quickly deploy your own VPN server that only you have access to with AlgoVPN: https://github.com/trailofbits/algo
I’ve recently become interested in hosting my own VPN, due to the amount of websites that require me to disable my VPN when visiting their site.
I imagine those websites block IP ranges of popular VPN providers.
Am I right in thinking that hosting my own VPN would resolve this issue?
Yep! It’s very easy: rent any cloud server, stick a WireGuard/OpenVPN/ShadowSocks container on it, download the config, and you’re done. Since you’re not interested in compute, you can probably use the tiniest cloud server available to save costs.
I pay approximately 50¢/month for such a setup, and you can probably do it for free forever if you decide to be slightly abusive about it. However, be aware that you don’t really gain any real privacy since you’re effectively just changing your IP address; a real VPN provides privacy by mixing your traffic with that of a bunch of other clients.
Some services will also block cloud ranges to prevent e.g. geoblock evasion, although you’ll see a lot less blocking compared to a real VPN service or Tor.
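If it helps anyone sizing up the "stick WireGuard on it" step above: the whole setup really is about this small. A rough sketch in Python driving the stock `wg` tool; the addresses and port are arbitrary examples, and this is the plain wg-quick config format, not a hardened deployment:

```python
import subprocess

def wg_keypair():
    """Generate a WireGuard key pair using the stock `wg` tool."""
    priv = subprocess.run(["wg", "genkey"], capture_output=True, text=True, check=True).stdout.strip()
    pub = subprocess.run(["wg", "pubkey"], input=priv, capture_output=True, text=True, check=True).stdout.strip()
    return priv, pub

server_priv, server_pub = wg_keypair()
client_priv, client_pub = wg_keypair()

# Minimal server config (wg-quick format), e.g. /etc/wireguard/wg0.conf:
print(f"""[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = {server_priv}

[Peer]
PublicKey = {client_pub}
AllowedIPs = 10.0.0.2/32""")
```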
> built in the usa. backed by the constitution.
Old copy? Might need an update.
the only next level VPN is the one where it shows each line of code being executed from its github repo while you connect to the server. There aren't many ways you can beat that level of verification
Cute idea. Bit worried about the owners here; rasengan doesn't have a stellar reputation after what happened with Freenode.
The idea itself is sound: if there are no SGX bypasses (hardware keys dumped, enclaves violated, CPU bugs exploited, etc.), and the SGX code is sound (doesn't leak the private keys by writing them to any non-confidential storage, isn't vulnerable to timing-based attacks, etc.), and you get a valid, up-to-date attestation containing the public key that you're encrypting your traffic with plus a hash of a trustworthy version of the SGX code, then you can trust that your traffic is indeed being decrypted inside an SGX enclave which has exclusive access to the private key.
Obviously, that's a lot of conditions. Happily, you can largely verify those conditions given what's provided here; you can check that the attestation points to a CPU and configuration new enough to not have any (known) SGX breaks; you can check that the SGX code is sound and builds to the provided hash (exercise left to the reader); and you can check the attestation itself as it is signed with hardware keys that chain up to an Intel root-of-trust.
However! An SGX enclave cannot interface with the system beyond simple shared memory input/output. In particular, an SGX enclave is not (and cannot be) responsible for socket communication; that must be handled by an OS that lies outside the SGX TCB (Trusted Computing Base). For typical SGX use-cases, this is OK; the data is what is secret, and the socket destinations are not.
For a VPN, this is not true! The OS can happily log anything it wants! There's nothing stopping it from logging all the data going into and out of the SGX enclave and performing traffic correlation. Even with traffic mixing, there's nothing stopping the operators from sticking a single user onto their own, dedicated SGX enclave which is closely monitored; traffic mixing means nothing if it's just a single user's traffic being mixed.
So, while the use of SGX here is a nice nod to privacy, at the end of the day, you still have to decide whether to trust the operators, and you still cannot verify in an end-to-end way whether the service is truly private.
One of the main use cases of a VPN is against governments, but a government making Intel compromise SGX is plausible given what we know from Snowden.
The US government might be able to pressure Intel into doing something with SGX, but there are way too many eyes on this for it to go unnoticed in my opinion, especially considering SGX has been around for so long and messed with by so many security researchers.
The US government also likely learned a lesson from early attempts at backdoors (RSA, etc.): this kind of thing does not stay hidden and does not reflect well.
We've thought about this long and hard and are planning to mitigate it as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.
What does the verifiable program do though? With a VPN, what I'm concerned about is my traffic not being sniffed and analyzed. This code seems to have something to do with keys, but it's not clear how that helps...?
This is the server-side part of things. It receives encrypted traffic from your device (and those of other customers), and routes it to the Internet.
This guarantees that your traffic isn't being linked to you, and is mixed up with others in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extension, etc)
> This guarantees that your traffic isn't being linked to you, and is mixed up with others in a way that makes it difficult for someone to attribute it to you
What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?
(First off, duskwuff's attack is pretty epic. I do feel like there might be a way to ensure there is only exactly one giant server--not that that would scale well--but, it also sounds like you didn't deal with it ;P. The rest of my comment is going to assume that you only have a single instance.)
A packet goes into your server and a packet goes out of your server: the code managing the enclave can just track this (and someone not even on the same server can figure it out almost perfectly just by timing analysis). What are you, thereby, actually mixing up in the middle?
You can add some kind of probably-small (as otherwise TCP will start to collapse) delay, but that doesn't really help as people are sending a lot of packets from their one source to the same destination, so the delay you add is going to be over some distribution that I can statistics out.
You can add a ton of cover traffic to the server, but each interesting output packet is still going to be able to be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see lots of statistics showing you actually obfuscated something real.
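To put a toy number on that, here's a crude simulation of ten users behind a mix that batches packets on a 10ms flush interval (as in the README quote upthread). An observer who sees only timestamps still matches every ingress flow to its own egress flow; all parameters are made up for illustration:

```python
import random

def flow(n=200, mean_gap=0.020):
    """Ingress packet timestamps for one user: irregular gaps, ~20ms apart."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.expovariate(1 / mean_gap)
        times.append(t)
    return times

def through_mix(times, flush=0.010):
    """Each packet leaves at the next 10ms flush boundary (the README's batching)."""
    return [(int(t / flush) + 1) * flush for t in times]

def score(ingress, egress, flush=0.010):
    """Count ingress packets whose next flush boundary carries an egress packet."""
    out = set(egress)
    return sum((int(t / flush) + 1) * flush in out for t in ingress)

random.seed(1)
flows = [flow() for _ in range(10)]        # ten users' ingress timings
egress = [through_mix(f) for f in flows]   # what an observer sees leaving

for i, f in enumerate(flows):
    best = max(range(10), key=lambda j: score(f, egress[j]))
    print(f"ingress flow {i} best matches egress flow {best}")  # i == best every time
```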
The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which is also something that one could think might be of value even if you did have a separate copy of the server running for every user that connected, as it hides something from you)...
...but, SGX is, frankly, a dumb way to do that, as we have ways to do that that are actually cryptographically secure -- aka, blinded tokens (the mechanism used in Privacy Pass for IP reputation and Brave for its ad rewards) -- instead of relying on SGX (which not only is, at best, something we have to trust Intel on, but something which is routinely broken).
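For reference, the blinded-token primitive is small enough to show. Below is the classic textbook-RSA blind signature, the simplest member of the family these systems descend from (production Privacy Pass actually uses an elliptic-curve VOPRF, and real RSA deployments use proper hashing and padding; tiny parameters here, illustration only):

```python
import secrets
from math import gcd

# Server's RSA keypair (toy size; real keys are thousands of bits).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# 1. Client picks a random token and blinds it with a random factor r.
token = secrets.randbelow(n)
r = secrets.randbelow(n)
while gcd(r, n) != 1:
    r = secrets.randbelow(n)
blinded = (token * pow(r, e, n)) % n

# 2. Server signs the blinded value at purchase time; it never sees `token`.
blind_sig = pow(blinded, d, n)

# 3. Client unblinds, obtaining a valid signature on the raw token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. At connection time the server verifies a token it provably issued,
#    but cannot link it to the purchase where it signed the blinded value.
assert pow(sig, e, n) == token
```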
How does this attestation work? How can I be sure that this isn't just returning the fingerprint I expect without actually running in an enclave at all? Does Intel sign those messages?
Similar to TLS, the attestation includes a signature and an x509 certificate with a chain of trust to Intel's CA. The whole attestation is certified by Intel to be valid, and details such as the enclave fingerprint (MRENCLAVE) are generated by the CPU to be part of the attestation.
This whole process is already widely used in the financial and automotive sectors to ensure servers are indeed running what they claim to be running, and it is well documented.
Remember that this only works if the CPU can be trusted! The hardware still has to be secure.
That's very informative, thanks!
Slightly tangential question - what gui framework did you use to build the apps?
I have no relationship with OVPN but after watching their server deployment on YouTube I have to say I do like their approach to security / privacy.
Servers that don't log and can't without hard drives, ports physically glued shut.
https://www.ovpn.com/en/security
Ah, but how can you tell that you’re connecting to a server that was actually configured that way?
Answer: no you can’t, you still have to trust them. At the end of the day, you always just have to trust the provider, somewhere.
OVPN successfully evaded Hollywood (through pressure on Swedish institutions) 5 years ago when they were up ThePirateBay's ass again.
You still have to trust them, you're not wrong, but at some point I'll fall back to the common question security people (not me) ask paranoid doubters: what's your threat model?
If you're running a global child-abuse ring through Mullvad or OVPN (which offers static IPv4 for inbound traffic) I don't know what they'd do, but they've proved themselves over and over to be organisations you can trust.
OVPN turns over about $1.2M with $0.8M profit (0); Mullvad turns over significantly more money but with a smaller profit margin (1) (probably funneling profits to a tax haven). So the risk of someone buying out OVPN is there, but "you" are probably not worth it if the ones targeting TPB didn't figure out how to get through.
You can still run Tor over their VPNs as another layer if you're uncertain their reputation is trustworthy enough for your use case but don't want Tor traffic originating from your IP.
https://claude.ai/share/a47c19f7-8782-4a9f-ae26-2d2adb52eaed
0: https://www.allabolag.se/foretag/ovpn-integritet-ab/-/konsul... 1: https://www.allabolag.se/foretag/mullvad-vpn-ab/g%C3%B6tebor...
You can look up any Swedish company through sites like allabolag or merinfo if you're curious... until they grow into tax-evading evil megacorps :)
One year later: VP.NET SGX code collision attack using lultzmann xyz math theory that allows the attacker to run different code with same sgx verifier!
In all seriousness, I don't even trust Intel to start with.
I don't buy this.
They could run one secure enclave running the legit version of the code, and one insecure box running insecure software.
Then they put a load balancer in front of both.
When people ask for the attestation the LB sends traffic to the secure enclave, so you get the attestation back and all seems good.
When people send VPN traffic, the load balancer sends them to the insecure hardware with the insecure software.
So SGX proves nothing...
That's not what they're trying to prove. Only one server is given the certificate to authenticate with you; you connect to that server, and every message with that server is authenticated with that certificate.
They are proving that they are the ones hosting the VPN server, not some server that stole their software and is running a honeypot, and that the hosting company has not tampered with it.
So in the end you still have to trust the company that they are not sharing the certificates with 3rd parties.
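To sketch why the load-balancer swap fails (hypothetical Python; assumes you already hold the SHA-256 of the attested SubjectPublicKeyInfo from a verified attestation): the LB can route the handshake wherever it likes, but only the box holding the enclave-resident private key can complete a handshake that matches the pin.

```python
import hashlib, socket, ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def connect_pinned(host: str, attested_spki_sha256: bytes) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it unless the peer's public key
    matches the one the attestation vouched for."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # identity comes from the attested key,
    ctx.verify_mode = ssl.CERT_NONE  # not from the web PKI
    sock = ctx.wrap_socket(socket.create_connection((host, 443)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    spki = x509.load_der_x509_certificate(der).public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    if hashlib.sha256(spki).digest() != attested_spki_sha256:
        sock.close()
        raise ConnectionError("peer key does not match the attested enclave key")
    return sock
```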
How does Intel SGX compare with e.g. ARM TrustZone? Seems like it's similar in that information access is restricted from certain privilege levels?
Seems fairly similar; it's basically ARM's answer to the TEE question. We started with SGX because it is battle tested and has a lot of people still trying to find issues, meaning any issue is likely solved quickly; however, we are planning to evaluate and support other solutions as well. In both cases information is restricted and cannot leave the enclave unless the code running in there allows it to.
> battle tested
lol
SGX has been broken time and again
SGX has 0-day exploits live in the wild as we speak
so... valiant attempt in terms of your product... but utterly unsuitable foundation
As far as I know, SGX has no 0-day exploits live today. sgx.fail was the largest collection of attacks, and they have all been resolved.
What this tells me, however, is that there are still a lot of people trying to attack SGX today, and Intel has improved their response a lot.
The main issue with SGX was that its initially intended use, client-side DRM, was flawed: you can't expect normal people to update their BIOS (downloading the update, putting it on a storage device, rebooting, going into the BIOS, updating, etc.) each time an update is pushed, and adoption wasn't good enough for that. It is, however, seeing a lot of server-side use in finance, the auto industry, and elsewhere.
We are also planning to support other TEEs in the future; SGX is the most well known and battle tested today, with a lot of support from software like Open Enclave, making it a good initial target.
If you do know of any 0-day exploit currently live against SGX, please give me more details, and if it's something not yet published, please contact us directly at security@vp.net
And once a CPU is hit with a voltage-glitching attack, the compromise is so complete that the secret seeds burned into the hardware are leaked.
Once they are leaked, there is no going back for that secret seed - i.e. for that physical CPU. And this attack is entirely offline, so Intel doesn't know which CPUs have had their seeds leaked.
In other words, every time there is a vulnerability like this, no affected CPU can ever be trusted again for attestation purposes. That is rather impractical - so even if you trust Intel (unlikely if a government that can coerce Intel is part of your threat model), I'd consider SGX a rather weak guarantee against well-resourced adversaries such as the US government.
"But you know the old Russian proverb. 'Trust, but verify.' And the Americans think Ronald Reagan thought that up. Can you imagine?"
> Build the code we published, get the fingerprint it produces, ask a VP.NET server for the fingerprint it reports, and compare the two. If they match, the server is running the exact code you inspected. No trust required.
Okay, maybe I'm being thick, but... when I get a response from your server, how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
Intel SGX comes with an attestation process aimed at exactly that. The attestation contains a number of details, such as the hardware configuration (CPU microcode version, BIOS, etc.) and the hash of the enclave code. At system startup the CPU gets a certificate from Intel confirming the configuration is known safe, which the CPU in turn uses to certify that the enclave is indeed running code with a given fingerprint.
When the connection is established we verify the whole certificate chain up to Intel, and we verify that the TLS connection itself is part of the attestation (the public key is attested).
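The "public key is attested" part is the crux. A hedged sketch of that single check - the 32-byte user-data layout is my assumption about the quote format, not vp.net's actual code:

```python
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def tls_key_is_attested(tls_cert_der: bytes, report_data: bytes) -> bool:
    """True if the attestation's user-data field commits to the TLS public
    key, i.e. the key terminating your connection was made by the enclave."""
    cert = x509.load_der_x509_certificate(tls_cert_der)
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return report_data[:32] == hashlib.sha256(spki).digest()
```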
> how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
It's signed by Intel, and is thus guaranteed to come from the enclave!
Even assuming SGX is to be trusted and the government somehow could not peek inside (implausible), this "tech" will not solve the legal problems... see "Tornado Cash" for an example. "No logs" is either impossible, illegal, or a honeypot.
No crypto payments or ability to mail cash?
Yes, crypto payments - a bit difficult to find since you need to look at the bottom of the page, but we have some plans to improve that in the coming days.
Hard disagree... not only is SGX deprecated, and IIRC removed from recent consumer processors due to security issues, it still can't prove that your requests are actually being served by the code they say they're running. The machine/keys you get back from their server could be from anywhere and might be completely unrelated.
> A pivot by Intel in 2021 resulted in the deprecation of SGX from the 11th and 12th generation Intel Core processors, but development continues on Intel Xeon for cloud and enterprise use.
https://en.wikipedia.org/wiki/Software_Guard_Extensions
SGX's original goal of client-side DVD DRM was abandoned because it turns out people don't keep their BIOS up to date and didn't all buy Intel's latest CPUs, which made SGX useless as a client-side feature. It turns out it's a lot more useful server-side, and while it had its share of issues (see sgx.fail), Intel addressed them (also see sgx.fail). It can prove your requests are actually being served by the code, due to the way the TLS attestation works.
Wikipedia says it's still on Xeons, so not sure what that says. But I agree with the rest of your point, it seems like an oversight to me.
Not an oversight - one of SGX's features is the MRENCLAVE measurement, a hash of the code running inside the enclave that can be compared with the value obtained at build time.
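For the build-time side, the measurement can be read out of the signing tool's dump of the built enclave. A rough sketch - the `enclave_hash.m` label follows the Intel SDK's `sgx_sign dump` output, but check your own tooling:

```python
import re

def mrenclave_from_dump(dump_text: str) -> bytes:
    """Extract the enclave hash from an 'sgx_sign dump'-style text dump,
    where it appears as rows of 0x-prefixed bytes after the label."""
    match = re.search(r"enclave_hash\.m:((?:\s*0x[0-9a-fA-F]{2})+)", dump_text)
    if match is None:
        raise ValueError("no enclave_hash found in dump")
    return bytes(int(b, 16) for b in match.group(1).split())
```

Compare the result against the MRENCLAVE in the attestation; if they match, the server is running the enclave you built.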
Looks neat, but how can you tell the fingerprint the server returns was actually generated by the enclave server, and isn't just a hardcoded response to match the expected signature of the published code (or that you're not talking to compromised box that's simply proxying over a signature from a legit, uncompromised SGX container)?
The enclave fingerprint is generated as part of the attestation.
The way this works: on launch, the enclave generates an ECDSA key (which only exists inside the enclave and is never stored or transmitted outside). It then passes it to SGX for attestation. SGX generates a payload (the attestation) which contains the enclave's measured hash (MRENCLAVE) and other details about the CPU (microcode, BIOS, etc.). The whole thing carries a signature and a certificate issued by Intel to the CPU (the CPU and Intel have an exchange at system startup, and from time to time thereafter, where Intel verifies the CPU's security, ensures everything is up to date, and gives the CPU a certificate).
Upon connection we extract the attestation from the TLS certificate and verify it (does MRENCLAVE match, is it signed by Intel, is the certificate expired, etc.), and of course we also verify that the TLS certificate itself matches the attested public key.
Unless TLS itself is broken, or someone manages to exfiltrate the private key from the enclave (which should be impossible unless SGX is broken, and then Intel is supposed to stop certifying the CPU), the connection is guaranteed to be with a host running the software in question.
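For the curious, here's a hedged sketch of what the server half of that flow might look like (Python with the `cryptography` package; `get_quote` and the extension OID are placeholders for whatever the real enclave runtime provides - this is not vp.net's actual implementation):

```python
import datetime, hashlib
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

QUOTE_EXT_OID = x509.ObjectIdentifier("1.2.840.113741.1337.6")  # illustrative

def make_ra_tls_cert(get_quote):
    """Generate a key inside the enclave, attest a hash of its public half,
    and staple the resulting quote into a self-signed TLS certificate."""
    key = ec.generate_private_key(ec.SECP256R1())  # never leaves the enclave
    spki = key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    quote = get_quote(hashlib.sha256(spki).digest())  # binds key to attestation
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "enclave")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1))
            # recent `cryptography` versions allow stapling raw bytes this way
            .add_extension(x509.UnrecognizedExtension(QUOTE_EXT_OID, quote),
                           critical=False)
            .sign(key, hashes.SHA256()))
    return key, cert
```

The client then does the checks from the comment above in reverse: pull the quote out of the extension, verify it up to Intel, and confirm the quote commits to the certificate's own public key.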
> the connection is guaranteed to be with a host running the software in question
a host... not necessarily the one actually serving your request at the moment, and it doesn't prove that it's the only machine touching that data. And AFAIK this only proves the data in the enclave matches a key; it has nothing to do with "connections".
Let me clarify: it guarantees your connection is being served by the enclave itself. The TLS encryption keys are kept inside the enclave, so whatever data is exchanged with the host can only be read from within the secure encrypted enclave.
> it guarantees your connection is being served by the enclave itself
Served by an enclave, but there's no guarantee it's the one actually handling your VPN requests at that moment, right?
And even if it was, my understanding is this still wouldn't prevent other network-level devices from monitoring/logging traffic before/after it hits the VPN server.
Saying "we don't log" doesn't mean someone else isn't logging at the network level.
I think SGX also wouldn't protect against kernel-level request logging such as via eBPF or nftables.
The attestation guarantees it's the one serving the request, and the encryption/decryption and NAT occur inside the enclave, so it's definitely private.
This is honestly less trustworthy than NordVPN from my POV. The problem with confidential compute is that, given enough technical expertise (i.e. exploits), all these systems are possible to compromise, which is perfect for honeypots. Kinda sounds like one of those Interpol plots with phones designed for criminals.
I always found confidential compute to be only good to isolate the hosting company from risk - not the customer!