I was going to go on a little rant about public audit reports that say stuff like "this company is very secure and is doing things great and this audit confirms that" --- not at all an X41-specific complaint; virtually all assessment firms are guilty of it, some much more than X41.
But: they found a triggerable heap corruption vulnerability in a Rust program, which is a nice catch.
I do think giving the vulnerability that follows that one a sev:hi, despite it being both theoretical (I don't think they have a PoC) and not corrupting memory, is grade inflation though.
Direct link to the PDF report:
https://x41-dsec.de/static/reports/X41-Mullvad-Audit-Public-...
Titles of issues they found:
4.1.1 MLLVD-CR-24-01: Signal Handler’s Alternate Stack Too Small
4.1.2 MLLVD-CR-24-02: Signal Handler Uses Non-Async-Safe Functions
4.1.3 MLLVD-CR-24-03: Virtual IP Address of Tunnel Device Leaks to Network Adjacent Participant
4.1.4 MLLVD-CR-24-04: Deanonymization Through NAT
4.1.5 MLLVD-CR-24-05: Deanonymization Through MTU
4.1.6 MLLVD-CR-24-06: Sideloading Into Setup Process
All pretty straightforward IMO. They lean on "DAITA" (Defense against AI-guided Traffic Analysis) pretty heavily, which I don't fully understand yet, but it is probably worth some further reading.
https://mullvad.net/en/vpn/daita
Safe signal handling has so many footguns that it seems worth re-considering the entire API.
Even OpenSSH has had issues with it [1].
It seems very difficult to build good abstractions for it in any programming language, without introducing some function colouring mechanism explicitly for this. Maybe a pure language like Haskell could do it.
[1]: https://blog.qualys.com/vulnerabilities-threat-research/2024...
Haskell's runtime is so complex that I don't think you can write signal handling functions in Haskell. The best you can do is to set a sig_atomic_t-style boolean inside the real signal handler and arrange for the runtime to check that boolean outside the signal handler.
Yup: see https://hackage.haskell.org/package/ghc-internal-9.1001.0/do... where it is clear that setting a handler simply writes to an array inside an MVar. And when the signal handler is run, the runtime starts a green thread to run it, which means user Haskell code does not need to worry about signal handler safe functions at all, since from the OS perspective the signal handler has returned. The user handler function simply runs as a new green thread independent of other threads.
But I like the fact that you brought up this idea. Haskell can't do it, but in a parallel universe, if there were another language with no runtime but with monads, we could actually solve this.
In fish-shell we have to forgo the niceties of the Rust standard library and make very carefully measured calls to libc POSIX functions directly, with extra care taken to make sure any memory used (e.g. for formatting errors or strings) was allocated beforehand.
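The resulting pattern ends up looking roughly like this (a minimal sketch using the libc crate, not fish's actual code): the handler body is only an atomic store into a pre-allocated flag, and everything that isn't async-signal-safe happens out in the main loop.

    use std::sync::atomic::{AtomicBool, Ordering};

    // Pre-allocated flag: the handler does nothing but an atomic store,
    // which is async-signal-safe (no allocation, no locks, no formatting).
    static GOT_SIGTERM: AtomicBool = AtomicBool::new(false);

    extern "C" fn on_sigterm(_sig: libc::c_int) {
        GOT_SIGTERM.store(true, Ordering::Relaxed);
    }

    fn main() {
        // Install the handler via sigaction (requires the `libc` crate).
        unsafe {
            let mut sa: libc::sigaction = std::mem::zeroed();
            sa.sa_sigaction = on_sigterm as libc::sighandler_t;
            libc::sigemptyset(&mut sa.sa_mask);
            libc::sigaction(libc::SIGTERM, &sa, std::ptr::null_mut());
        }
        loop {
            // All the real work (formatting, allocation, cleanup) happens
            // out here, in normal context, never inside the handler.
            if GOT_SIGTERM.swap(false, Ordering::Relaxed) {
                eprintln!("got SIGTERM, shutting down");
                break;
            }
            std::thread::sleep(std::time::Duration::from_millis(100));
        }
    }

The annoying part is that nothing in the language stops you from calling a non-async-safe function inside on_sigterm; the compiler gives you no "coloring" for it, which is exactly the point made upthread.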
Or it's nearly impossible for a pure functional language if the result of the async signal means you need to mutate some state elsewhere in the program to deal with the issue.
I think that’s slightly orthogonal. It would still be safe, because you’d design around this restriction from the start, rather than accidentally call or mutate something you were not supposed to.
The problem with safe signal handling is that you need to verify that your entire signal handler call stack is async-safe. Purity is a stronger property, so assuming it, signal handling is a safe API without any more work.
The inflexibility due to the purity might cause other issues but that’s more a language level concern. If the signal handling API is safe and inflexible, it still seems better for a lot of use cases than an unsafe by default one.
I think the paper is easier to follow
https://dl.acm.org/doi/pdf/10.1145/3603216.3624953
This is a nice audit report. The dedicated threat model section is something that a lot of auditing outfits skip over in their reports. While I'm positive Cure53, Assured, and Atredis (the previous auditors) established an appropriate threat model with Mullvad prior to engagement, it's not explicitly written out for the reader, which opens up room for misinterpretation of the findings.
> established an appropriate threat model with Mullvad prior to engagement
Doesn't this make it kinda pointless? If the target has a say in how they should perform their audit/attack, how does that not produce results biased in the target's favor? Wouldn't the most unbiased way to do such a thing be for the target to have zero idea what the auditor would be doing?
> which opens up room for misinterpretation of the findings
If Mullvad dictated how to do things or imposed limits on the reach of the testing, the results are worthless anyway
Because the client often has actual knowledge of their design and the places where they want force to be applied to find weaknesses, because they're trying to evaluate the results with regard to specific outcomes, not every possible open-ended question you can think up. On top of that there is a reasonable limit in terms of time/money/staff/resources that can be spent on these kinds of audits, etc.

For example, if you're using a cloud provider it's not like you're going to pay them infinity money to compromise GCP over the course of 9 months through a background operator compromise or some nation-state attack. You're not going to pay them to spend all day finding 0days in OpenSSL when your business runs a Django app. You're going to set baseline rules like "You need to compromise our account under some approaches, like social engineering of our own employees, or elevating privileges by attacking the software and pivoting."
It's mostly just a matter of having a defined scope. They could of course pick a scope like "you can only attack this one exact thing" that makes them look good, but this is true of many things.
Defining the threat model is standard in the infosec auditing/pentest world, FWIW.
> If Mullvad dictated how to do things or imposed limits on the reach of the testing, the results are worthless anyway
That's only true if your threat model is "literally every possible thing that could ever happen", which is so broad as to be meaningless and impossible to test anyway.
Computer programmers also do not typically design their programs under the assumption that someone stuffed newspaper between their CPU and heatsink and it caught on fire. They work on the assumption the computer is not on fire.
>Doesn't this make it kinda pointless?
To do an audit you have to audit against some sort of pre-established criteria. That is how audits work. In security, that will typically be a standard (or set of standards) alongside a threat model. In finance, you audit against what is legal in the areas where you operate.
>[...] zero idea what the auditor would be doing?
That's a practical impossibility. From the client side you want to be able to evaluate quotes, stay within a budget, etc. You don't want to pay good money (audits are really expensive!) for areas that you know are works-in-progress, or for non-applicable threat models (e.g. lots of security software explicitly does not protect against nation-state actors, so they don't do audits from the perspective of a nation-state actor).
From the auditor side, you want to know what staff to assign (according to their expertise), how to schedule your staff, etc.
>If Mullvad dictated how to do things or imposed limits on the reach of the testing, the results are worthless anyway
Not at all. The company says "This is the set of standards we are auditing against and our threat model; this is how we performed against them". The results are useful for everything covered by those standards and threat model. By explicitly stating the threat model, you as a consumer can compare your threat model to the one that was audited and make an informed decision.
Say I manufacture door locks, and I ask you to audit the security of my system. Wouldn't it make sense to agree with you that stuff like lockpicking is fine, but going around the building, breaking a window and entering the room doesn't count as "breaking the lock security"?
That's the whole point of a threat model: Mullvad has a threat model, and they build a product resistant to that. When someone audits the product, they should audit it against the threat model.
No, the results would be worthless only if your threat model were significantly different than the one that Mullvad was operating under. In which case, having the threat model detailed explicitly is already valuable to you.
For example, X41's threat model here only supposes that an attacker could execute code on the system as a different, unprivileged user. They don't consider the situation where an attacker might have an administrative account on the system.
For my personal devices today, this matches my threat model. If an attacker has an administrative account on my machine, I assume that my VPN isn't going to be able to protect my traffic from them. There's no need to worry about laying out all the ways this could impact Mullvad's client.
Link to Mullvad's blog post: https://mullvad.net/en/blog/the-report-for-the-2024-security...
The Mullvad VPN app. Not the service.
There was an audit of the VPN servers earlier this year:
https://mullvad.net/en/blog/fourth-infrastructure-audit-comp...
This is relevant to folks evaluating VPN providers as the app is most users' entrypoint to the service.
Of course, but that doesn't make the title less misleading.
Thanks for helping me not waste my time
dang, "X41 audited the Mullvad VPN app" might be a clearer title.
This seems to be mostly a test of the VPN client application, not the VPN service. However, "Deanonymization Through NAT" is about the VPN service.
I use Mullvad VPN with WireGuard on OpenBSD (man wg). Works great. You can buy months with Bitcoin for anonymity.
Became a fan of Mullvad when I visited China. It was the most reliable VPN app I tested and you can have up to 5 devices per account.
Even if you buy it with BTC surely you're still connecting with your real IP?
not if he is using his neighbor's
maybe he is using tor on top of it
who knows
I’ve never understood the neighbor approach. What’s the logic for that? Instead of your skin, it’s a person one door down from you who was generous enough to share their connection with you? That’s not anonymity, that’s just outsourcing the identity to someone who probably extended trust to you. And if other things like Tor remove that connection, then what was the point of using a neighbor in the first place?
Is there any serious website that reviews (ranks) these VPNs? I ask because it is always difficult to find information on the internet that is not sponsored. In fact, I've always heard that Mullvad is one of the best, even supporting P2P.
These rankings are going to be meaningless and littered with blog spam. VPNs as a category are mostly snake oil. The only real-world use for VPNs is circumventing censorship if you live in a place that censors. The only privacy you're gaining is privacy from your ISP.
The go-to used to be the website of "that one privacy guy". Now, as to who this guy is, and whether this is really his site, I have no idea.
https://thatoneprivacysite.xyz/#detailed-vpn-comparison
> (Data last updated on 20/07/19)
Port forwarding was removed a year ago, which handicapped P2P.
https://mullvad.net/en/blog/2023/5/29/removing-the-support-f...
You heard wrong. Mullvad is the best ;)
I'm convinced signal handlers are nearly impossible to write without introducing terribly gnarly race conditions.
If you’re lucky enough to structure your entire app in advance to keep in mind how sync signals are delivered, you can usually get away with only setting an atomic boolean, incrementing an atomic int, or setting a binary semaphore.
The presence of signals in UNIX made me reach the following conclusion: an event loop should be mandatory (or at least opt-out), something set up in the CRT before main(). Of course, we're not living in such a well-made C world.
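The usual way to bolt signals onto an event loop is the self-pipe trick: the handler does nothing but a single write(2), which is async-signal-safe, and the loop wakes up via poll. A rough sketch in Rust (using the libc crate; error checks elided for brevity):

    use std::sync::atomic::{AtomicI32, Ordering};

    // Write end of the self-pipe; atomic so the handler can read it safely.
    static WAKE_FD: AtomicI32 = AtomicI32::new(-1);

    extern "C" fn on_signal(_sig: libc::c_int) {
        // write(2) is on the async-signal-safe list; one byte wakes the loop.
        let fd = WAKE_FD.load(Ordering::Relaxed);
        if fd >= 0 {
            unsafe { libc::write(fd, b"x".as_ptr().cast(), 1) };
        }
    }

    fn main() {
        unsafe {
            // Create the pipe and publish the write end for the handler.
            let mut fds = [0 as libc::c_int; 2];
            libc::pipe(fds.as_mut_ptr());
            WAKE_FD.store(fds[1], Ordering::Relaxed);

            // Install the handler (requires the `libc` crate).
            let mut sa: libc::sigaction = std::mem::zeroed();
            sa.sa_sigaction = on_signal as libc::sighandler_t;
            libc::sigemptyset(&mut sa.sa_mask);
            libc::sigaction(libc::SIGINT, &sa, std::ptr::null_mut());

            // The "mandatory event loop": poll the read end alongside any
            // other fds the program cares about.
            let mut pfd = libc::pollfd { fd: fds[0], events: libc::POLLIN, revents: 0 };
            while libc::poll(&mut pfd, 1, -1) >= 0 {
                if (pfd.revents & libc::POLLIN) != 0 {
                    let mut buf = [0u8; 16];
                    libc::read(fds[0], buf.as_mut_ptr().cast(), buf.len());
                    println!("signal arrived; handle it here, in normal context");
                    break;
                }
            }
        }
    }

On Linux you'd probably reach for signalfd instead, but the self-pipe version is portable and shows why an always-present event loop makes the problem tractable.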
> Virtual IP Address of Tunnel Device Leaks to Network Adjacent Participant

> X41 recommends to mitigate the issue by setting the kernel parameter arp_ignore to 1 on Linux.

> It is also recommended to randomize the virtual IP address for each user on each connection if possible.
... wouldn't randomizing the virtual IP address make the situation worse? Sounds like the best solution would be to just give every user the same boring static IP address, like 169.254.199.1/30.
For each session. Keys are rotated frequently, so a lot of noise could be produced. One single address is a good anti-fingerprinting strategy, though, but it is not easy to achieve with WG tunnels and pure L3 routing.
Personally I don't really get their multihop, where you connect on a predefined port on an ingress server to get redirected to an egress in a different region. Easily guessable for a powerful observer.
Anyway any VPN is only an encryption tool, not an anonymizer.
Worse how?
Where does Mullvad get all this money? I've seen physical ads in different places in the world, audits, etc.
I'm not suggesting a conspiracy, but is the VPN business that good? Are they funded by a privacy group?
Since they're a Swedish company, their yearly report is public: [1]. 25% profit margin (Vinstmarginal) does sound quite nice.
[1]: https://www.bolagsfakta.se/5592384001-Mullvad_VPN_AB
They provide a white label for Mozilla, Tailscale, and maybe some others I am not aware of. Plus they really sell a lot of subscriptions.
And they've been accepting Bitcoin since 2010. I assume they've done very well from that (I'm afraid to calculate what the present value of my Mullvad subscription would be).
Nit: they have a partnership with Tailscale to offer the VPN as a part of a tailnet that subscribes to the service.
But it's not white label. White label implies it would be Tailscale VPN (or similar) with no reference to Mullvad in their docs or marketing. But that's not what is happening with their offering.
>is the VPN business that good?
One of my use cases for a VPN is to watch free, legal anime on YouTube from Muse-Asia. I use a VPN to connect to Indonesia, which allows me to watch anime like Dandadan. A US IP won't show anything on their YouTube page. I'm using Mullvad VPN.
Dandadan is on Netflix... and Crunchyroll.
Have you bothered to check the tiers they offer? Hint: not that many, and no free ones.
And knowing that Mullvad doesn't come close to the mainstream marketing of other VPN providers (well, in essence, one), your comment comes off as malicious.
I don't think it's helpful to say that the comment you responded to was in any way malicious. It was a reasonable question.
> Where does Mullvad get all this money?
From their customers.