As a security person it is tiring to see so many people here claim, or at least imply, that this is somehow much less scary because the _published_ exploit does not bypass ASLR. The writeup claims there is a way to reliably bypass ASLR with this attack, and even without seeing evidence, that is the safe default assumption.
ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.
It is straight up wrong to say "if you have ASLR enabled, you're not at any risk from this", and saying this is extremely harmful for anyone who trusts claims like that.
This wrong belief that you shouldn't care about security vulnerabilities because mitigations may make exploitation more difficult has already caused so much harm in the past. Be glad that modern mitigations exist, but patch your stuff asap. If you are a vendor, do not treat vulnerability reports as invalid because the researcher has not provided an ASLR bypass. Fix the root cause and hope mitigations buy you enough time to patch before you get owned.
No remotely reachable vuln should be taken lightly.
At the moment though, the preconditions look odd. I've been using nginx in various constellations for 10 years and never once combined rewrite and set.
There can be situations where you set some variables on top level and then override those in the location block with rewrite. These variables could be then used e.g. in log lines or in other "global" contexts.
Not extremely common, but it does happen.
Yeah, when I read these RCE reports about public-facing software I run, I usually upgrade within minutes of reading the report. That's why I read them: you really have to take them seriously, because otherwise your machine gets compromised sooner rather than later. It seems like lately there's been no advance notice on a lot of these publicly released RCE exploits. Come on guys, at least give us a few minutes to upgrade our software before releasing the exploit. It feels like the late 1980s / early 1990s, when there were no guardrails on disclosure, i.e. all the remotely exploitable sendmail bugs. People who fail to read these reports, or read them too late, wind up with millions of compromised machines. nginx currently has about a 39%-43% share of the public-facing web server market, so it's pretty serious.
> and saying this is extremely harmful for anyone that trusts claims like that.
Kind of feels like the burden is on the one who is reading it, though. Good luck stopping people from spreading misinformation on the internet; most of them don't even know they're wrong.
What's extremely harmful is trusting random internet comments stating stuff confidently. Get good at seeing through that, and it'll serve you well in security and beyond.
> ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.
I disagree with this take, or I would at least phrase it differently. ASLR is like an extra password you need to guess: it has a certain amount of entropy and it is usually stable. Unless the vulnerability also leaks information, ASLR completely mitigates it, or you need a second vulnerability, and that is a different conversation. ASLR can completely mitigate an individual vulnerability, just not necessarily an exploit chain.
I would still use the possibility of a second, information-leaking vulnerability as an argument for patching quickly anyway. But exploit chains are a risk for all kinds of vulns.
This one's pretty bad but there are some preconditions.
Requires a "rewrite" directive with a question mark in the replacement string, followed by a "set" directive that references a regex capture group (e.g. set $var $1).
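For illustration, a sketch of what such a config looks like (the directive names are real; the paths and the variable name are made up):

```nginx
# Illustrative only: an unnamed capture ($1), a '?' in the replacement
# string, followed by a 'set' that references the capture. This is the
# shape of configuration the report describes as affected.
location /legacy {
    rewrite ^/legacy/(.*)$ /app/$1?from=legacy last;
    set $requested $1;
}
```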
Also the POC assumes ASLR is disabled.
Example: https://github.com/DepthFirstDisclosures/Nginx-Rift/blob/mai...
I think "rewrite" is rarely used nowadays? Isn't it something from the old days of PHP and Apache?
Does any distro disable ASLR by default?
If you were to do it by hand, nginx doesn't come to mind as a likely candidate.
Not the person you asked, but I am not aware of any distro that disables ASLR by default. On Linux the knob is /proc/sys/kernel/randomize_va_space: 0 disables ASLR, 1 randomizes stacks and mmap regions, and 2 (the mainline default) additionally randomizes the heap. Note that the executable image itself is only randomized if it was compiled as a position-independent executable (PIE). Rather than trusting assumptions, I prefer to run checksec [1] on every OS I touch. It's an old script, but it works just as well today as it did long ago. One may find that some applications are missing basic compile-time hardening options; the script is not an exhaustive test of all modern hardening options. Example of ASLR being fully enabled:

    $ cat /proc/sys/kernel/randomize_va_space
    2

A typical invocation, checksec.sh --proc-all, will list the status of RELRO, Stack Canary, NX/PaX, and PIE for all running daemons; my CachyOS installation, for example, is missing stack canaries for all daemons. FORTIFY_SOURCE can be checked per process:

    checksec.sh --fortify-proc 732
    * Process name (PID) : sshd (732)
    * FORTIFY_SOURCE support available (libc) : Yes
    * Binary compiled with FORTIFY_SOURCE support: N

Some additional compile-time hardening options [2] and discussion [3]. Even Rust apparently has some compile-time security related options.

[1] - https://www.trapkit.de/tools/checksec/ # some Linux repositories already contain "checksec".
[2] - https://best.openssf.org/Compiler-Hardening-Guides/Compiler-...
[3] - https://news.ycombinator.com/item?id=43533516
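For a quick spot check that ASLR is actually doing something, one can also compare a shared library's load address across freshly spawned processes. A minimal sketch, assuming CPython on a Unix-like OS where ctypes can load libc:

```python
# Minimal ASLR spot check (illustrative): each freshly spawned process
# should see libc's printf at a different address when ASLR is enabled.
import subprocess
import sys

SNIPPET = (
    "import ctypes, ctypes.util;"
    "libc = ctypes.CDLL(ctypes.util.find_library('c'));"
    "print(ctypes.cast(libc.printf, ctypes.c_void_p).value)"
)

addrs = [
    int(subprocess.run([sys.executable, "-c", SNIPPET],
                       capture_output=True, text=True).stdout)
    for _ in range(4)
]
print(addrs)  # with ASLR on, these typically all differ between runs
```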
The official F5 page is here: https://my.f5.com/manage/s/article/K000161019
As noted elsewhere, ASLR protects you. While you are waiting for your affected platform to get the fix, they note the mitigation:
"use named captures instead of unnamed captures in rewrite definition"
"To mitigate this vulnerability for this example, replace $1 and $2 with the appropriate named captures, $user_id and $section"
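Concretely, the suggested workaround looks like this (the URL pattern is a made-up example; the named-capture syntax is standard PCRE as nginx uses it, and the variable names follow F5's example):

```nginx
# Before: unnamed captures ($1/$2), the affected pattern
# rewrite ^/users/(\d+)/(\w+)$ /profile.php?uid=$1&tab=$2 last;

# After: named captures, per the advisory's workaround
rewrite ^/users/(?<user_id>\d+)/(?<section>\w+)$
        /profile.php?uid=$user_id&tab=$section last;
```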
F5 patched 1.31.0 and 1.30.1.
OpenResty has a patch for 1.27 and 1.29: https://github.com/openresty/openresty/commit/ee60fb9cf645c9...
You can track the progress of OpenResty (a Lua application server based on nginx) here: https://github.com/openresty/openresty/issues/1119
The POC disables aslr: https://github.com/DepthFirstDisclosures/Nginx-Rift/blob/mai...
Worker processes are forked from the master, which means they all receive the same memory layout, and a crashed worker is simply respawned, so you get unlimited crash attempts against the same layout. There's probably a way to exploit that to build a read oracle. At the very least this is a reliable denial of service.
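The fork point can be sketched in a few lines (illustrative; assumes a Unix-like OS, and `id()` returning an object's address is a CPython detail):

```python
# A forked child inherits the parent's virtual address space, so a heap
# object sits at the identical address in both processes. nginx workers
# (and their respawned replacements) are forked from one master, so every
# worker shares the same layout: crashes don't re-roll the dice.
import os

def child_sees_same_address() -> bool:
    obj = bytearray(64)       # heap allocation made before the fork
    parent_addr = id(obj)     # CPython: id() is the object's address
    pid = os.fork()
    if pid == 0:              # child: same mappings via copy-on-write
        os._exit(0 if id(obj) == parent_addr else 1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0

print(child_sees_same_address())  # True
```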
Depth First's full writeup: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
Sure, but I think the GitHub README ought to make it clearer that the PoC as-is doesn't work against nginx on any current Linux distro.
So you're not vulnerable to script kiddies running the published PoC. You're still probably vulnerable to a sufficiently motivated attacker.
I doubt it: ASLR is not as easy to break on modern Linux as everyone in this thread wants to pretend it is. And anybody who cares so much about security that a compromised web frontend is the end of the world should be doing other things that would additionally mitigate this...
I know they claimed they can bypass it: if that's true, they should publish it. The forking nature of nginx is uniquely bizarre and vulnerable, and I strongly suspect that's the only way they're pulling it off. I feel like that's the interesting thing here, not the buffer overrun.
Is there a good alternative to Apache and Nginx that's written in a memory-safe language and not full of security holes? I briefly looked at Jetty (written in Java) and Caddy (written in Go) but they seem to have a history of vulnerabilities of other types (e.g. shell injection in Jetty) so I'm not sure they would be any better.
Memory safety is good, but it does not protect from every threat. In this day and age infrastructure operators should familiarize themselves with proactive defenses: mandatory access control (MAC) via SELinux and AppArmor. These used to involve a lot of friction, but today there are more tools to ease their use.
https://presentations.nordisch.org/apparmor/
https://github.com/nobody43/apparmor-profiles/blob/master/ng...
https://github.com/nobody43/apparmor-suggest
Disclaimer: I'm the author of both repos.
Any software used at the scale of Apache and nginx will have a history of vulnerabilities. The fact that they have both survived with their market share for so long is a good sign.
Right, that's essentially what I'm thinking.
On the one hand Apache and Nginx are mature and proven but, being written in C, they will always suffer from memory-safety issues like this one and the recent Apache vulnerabilities.
On the other hand, the alternatives are perhaps not as mature and perhaps not implemented as securely as they could be, given that e.g. Caddy had multiple vulnerabilities in its request parsing this year and Jetty's shell injection vulnerability seems easily foreseeable and avoidable. Using a memory-safe language doesn't help much if you then (to take an unrelated but well-known example) implement arbitrary code execution as a feature in the logging library.
Caddy has been a breeze to use. It's a bit of a sucky model with "we have thousands of binaries depending on what combination of plugins you want" instead of a proper plugin system, but if you're building it from source, it's pretty nifty and simple anyway.
Recompiling with the features you want is a great model for a free software project. So much simpler to write and maintain compared to a plugin system that it really makes more sense in a lot of cases.
Can often also be noticeably more performant.
I've switched to using traefik from caddy. For simple use cases it's a little more verbose in the configuration, but for more involved things like multiple load balancing backends, rewriting paths and headers and so on I've found it really good.
Go doesn't really support runtime linking, which is why there are "no plugins" (the Go docs claim the `plugin` package supports it, but it's so restrictive in practice that it may as well not).
nginx had this defect for a long time too!
Apache and I think Nginx have a huge list of features and stuff. Most alternate http servers limit the scope a lot, so you'd need to specify what features you're interested in.
But I haven't seen a whole lot of discussion of http servers in memory safe languages. The big three C-based servers: Apache, Nginx, and lighttpd are all pretty solid... I don't think there's a lot of people interested in giving that up for a new project just because of the language.
I'll also add that when you pick up most memory safe languages, you're also picking up their sometimes extensive runtime / virtual machine and all the accoutrements. A Java webserver probably uses log4j because any random Java project probably does, etc.
Does Debian 12 have this patched? But I guess I'm not affected if I don't use `rewrite` or `set` anywhere?
https://security-tracker.debian.org/tracker/CVE-2026-42945
Ubuntu has patched as of this morning. Debian doesn't look like they've patched trixie yet.
Just as a PSA: I found that "nginx -v" did not print a detailed enough version to check, but "apt list nginx" gave the full, checkable version number, and indeed this morning's 24.04 version (1.24.0-2ubuntu7.8) is patched.
I find it very unlikely that anyone using nginx does NOT use `set` at least.
Most nginx use cases are to terminate TLS and then pass the request to node/php/go/etc. So I bet you have at least one directive with attacker-controlled data, on a line like 'proxy_set_header X-Host $host;'
edit: nvm, apparently named captures are not affected. Unless you have a $1 somewhere, you should be fine.
The default NGINX PHP integration uses this:
Good to know, thanks. Wondering how long to the next.
Better links:
https://depthfirst.com/research/nginx-rift-achieving-nginx-r... (https://news.ycombinator.com/item?id=48126029)
https://depthfirst.com/nginx-rift (https://news.ycombinator.com/item?id=48123365)
Someone tell LowLevel
tl;dr If you don't use ngx_http_rewrite_module, you're fine
Honestly it's such a weird feature. If you're doing complicated redirects like this in nginx, where PCRE is necessary, you should do it in your application code. And if you need speed, use ngx_http_lua_module.
Your opinion is that if, for a godforsaken reason, someone needs to rewrite URLs in their web server, they should avoid PCRE (something designed for string manipulation) because it's overkill, and they should use Lua (a full programming language) instead?
Am I understanding you correctly?
Yes.
We do this for 3 sub-domains of ardour.org; there's no application code involved, because we're rewriting historical URLs to their current form, and the "application" doesn't do that or need to do that or need to know about that.
Why not 302 instead?
Just saw this pop up — full public PoC for CVE-2026-42945 ("NGINX Rift"), a heap buffer overflow in NGINX's ngx_http_rewrite_module that's been there since 0.6.27 (2008).
It triggers on a very common pattern: a `rewrite` directive (with an unnamed capture like $1/$2 and a `?` in the replacement string) followed by `set`, `if`, or another `rewrite`. The root cause is a classic two-pass script engine bug (length calculation vs. actual copy pass with ngx_escape_uri).
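For readers unfamiliar with the bug class: this is not nginx's actual code, just a minimal sketch of how a length-calculation pass that disagrees with the copy pass produces a heap overflow:

```python
# Two-pass script-engine bug class (illustrative, NOT nginx source):
# pass 1 sizes the buffer, pass 2 writes into it with escaping. If pass 1
# forgets that '?' expands to '%3F', pass 2 writes past the allocation.

def estimate_len(s: bytes) -> int:
    """Pass 1 (buggy): assumes the copy never expands the input."""
    return len(s)

def escape_copy(s: bytes) -> bytes:
    """Pass 2: escapes '?' as the three bytes '%3F'."""
    out = bytearray()
    for b in s:
        out += b"%3F" if b == ord("?") else bytes([b])
    return bytes(out)

capture = b"path?a=b"                 # hypothetical capture containing '?'
reserved = estimate_len(capture)      # pass 1 reserves 8 bytes
written = len(escape_copy(capture))   # pass 2 emits 10 bytes
print(written - reserved)             # 2 bytes past the end of the buffer
```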
The PoC turns it into unauthenticated RCE using cross-request heap feng shui + pool cleanup pointer corruption. Tested with a simple Docker setup.
- Repo + Python exploit: https://github.com/DepthFirstDisclosures/Nginx-Rift
- Full technical write-up: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
- F5 advisory + patches (1.31.0 / 1.30.1 for OSS, plus NGINX Plus updates): https://my.f5.com/manage/s/article/K000160932 (or the latest K000161019)
Affects basically any NGINX doing URL rewriting in front of apps/PHP/etc. Workaround mentioned is switching to named captures.
The discovery angle is also interesting — it was found autonomously by depthfirst's security analysis tool after one-click onboarding of the NGINX source.
Anyone running NGINX in production using rewrite rules? How are you checking your configs? Thoughts on the exploit chain or the AI-assisted finding process?
Crap
Given it relies on ASLR being disabled, it's extremely unlikely you're at any risk from this.
The exploit they chose assumes ASLR is disabled for simplicity's sake, but if you read the full writeup they say they could've used the vulnerability to map memory layout. It's nice to have ASLR but some types of vulnerabilities can be used to bypass it.
That's wishful thinking
I read that in my own voice, so relatable hahahaha
Looks into the CVE: ah, a heap memory corruption, business as usual.
Wow, coming from the webdev world, it is so funny seeing NGINX, one of the most widely used web servers in the world, on version 1.x while React is on version 19. It really shows how differently new vs. old software is designed and built, and not necessarily in a good way.
https://world.hey.com/dhh/finished-software-8ee43637 https://josem.co/the-beauty-of-finished-software/
That's because nginx doesn't break things for end users every release, so there is no reason to bump the major version.
I bet nginx doesn't even follow semantic versioning, which you seem to be talking about.
Don't have to bet: nginx doesn't follow it. It has its own Linux-kernel-inspired (odd vs. even) convention.
That doesn't change the fact that the only "breaking" changes in the 1.x.x line are changes to defaults.
anyone can choose any version string convention they want for their project. Comparing two different pieces of software by their version string doesn't make sense.
I guess someone needs to update https://0ver.org/ then.
I chalk that up more to different versioning schemes rather than how much work is being done. If nginx changed whole numbers like react did, I bet it would be even higher.
lighttpd is still around too, on 1.4.82; not too much has changed there.
They've been working on version 2.0 for many years now as well, I wonder when they think a release might happen.
> not necessarily in a good way
How do you think versioning works? You know that it's completely arbitrary and up to the author, right? Very ironic comment.