The key takeaway is hidden in the middle:
> In extreme cases, on purely CPU bound benchmarks, we’re seeing a jump from < 1Gbit/s to 4 Gbit/s. Looking at CPU flamegraphs, the majority of CPU time is now spent in I/O system calls and cryptography code.
A roughly 4x increase in throughput, which should translate to a proportionate reduction in CPU utilization for UDP network activity. That's pretty cool, especially for better power efficiency on portable clients (mobile and notebook).
I found this presentation refreshing. Too often, claims about transitions to "modern" stacks are treated as inherently good and don't come with the data to back them up.
Any guesses on whether they have other cases where they get more than 4 Gbps without being CPU bound, or was this the fastest they got?
_Author here_.
4 Gbit/s is on our rather dated benchmark machines. If you run the below command on a modern laptop, you likely reach higher throughput. (Consider disabling PMTUD to use a realistic Internet-like MTU. We do the same on our benchmark machines.)
https://github.com/mozilla/neqo
cargo bench --features bench --bench main -- "Download"
i wonder if we'll ever see hardware accelerated cross-context message passing for user and system programs.
Shared ring buffers for IO already exist in Linux. I don't think we'll ever see them extend to DMA for the NIC, due to the security rearchitecture that would require. However, if the NIC is smart enough and the rules simple, maybe.
There are systems that move the NIC control to user space entirely. For example Snabb has an Intel 10g Ethernet controller driver that appears to use a ring buffer on DMA memory.
https://github.com/snabbco/snabb/blob/master/src/apps/intel/...
"(You could think of it as a [busybox](https://en.wikipedia.org/wiki/BusyBox#Single_binary) for networking.)"
They suggest thinking of busybox
But if using busybox, their Makefile will fail
Using toybox instead will work
There is AMD's onload https://github.com/Xilinx-CNS/onload. It works with Solarflare and Xilinx NICs, but also has generic NIC support via AF_XDP.
The price of doing that is losing OS controls over emitted packets. For servers fine. Browsers not so much.
RDMA offers that. The NIC can directly access user space buffers. It does require that the buffers are “registered” first but applications usually aim to do that once up front.
sure, but what about some kind of generalized cross-context ipc primitive towards a zero copy messaging mechanism for high performance multiprocessing microkernels?
While their improvements are real and necessary for actual high speed (100 Gb/s and up), 4 Gb/s is not fast. That is only 500 MB/s. Something somewhere, likely not in their code, is terribly slow. I will explain.
As the author cited, a kernel context switch is only on the order of 1 us (which seems too high for a system call anyway). You can reach 500 MB/s even if you still call sendmsg() on literally every packet, as long as you average ~500 bytes/packet, which is ~1/3 of the standard 1500-byte MTU. So if you average MTU-sized packets, you get 2 us of processing in addition to a full system call to reach 4 Gb/s.
The old number of 1 Gb/s could be reached with an average of ~125 bytes/packet (~1/12 of the MTU), or with ~11 us of processing per MTU-sized packet.
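To make that arithmetic concrete, here's a trivial back-of-the-envelope calculation (a sketch; it just assumes the ~1 us syscall figure cited above and a 1500-byte MTU):

    // Back-of-the-envelope per-packet budget for the figures discussed above.
    fn budget(gbit_per_s: f64, bytes_per_packet: f64, syscall_us: f64) -> (f64, f64) {
        let bytes_per_s = gbit_per_s * 1e9 / 8.0;
        let packets_per_s = bytes_per_s / bytes_per_packet;
        let total_us = 1e6 / packets_per_s;   // wall time available per packet
        (total_us, total_us - syscall_us)     // (total, left after the syscall)
    }

    fn main() {
        // 4 Gb/s with MTU-sized packets: ~3 us per packet, ~2 us after the syscall.
        let (t, left) = budget(4.0, 1500.0, 1.0);
        println!("4 Gb/s @ 1500 B: {:.1} us/packet, {:.1} us left", t, left);

        // 1 Gb/s with MTU-sized packets: ~12 us per packet, ~11 us after the syscall.
        let (t, left) = budget(1.0, 1500.0, 1.0);
        println!("1 Gb/s @ 1500 B: {:.1} us/packet, {:.1} us left", t, left);
    }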
“But there are also memory copies in the network stack.” A trivial three-instruction memory copy will go ~10-20 GB/s, i.e. 80-160 Gb/s. In 2 us you can drive 20-40 KB of copies. You are arguing the network stack does 40-80(!) copies to get a UDP datagram, a thin veneer over a literal packet, onto the wire. I have written commercial network drivers. Even without zero-copy, with direct access you can shovel UDP packets into the NIC buffers at basically memory-copy speeds.
“But encryption is slow.” Not that slow. Here is some AES-128-GCM performance data from what looks like over 5 years ago. [1] The Intel i5-6500, a midrange processor from 8 years ago, averages 1729 MB/s. It can do the encryption for a 500-byte packet in ~300 ns, 1/6 of the remaining 2 us budget. Modern processors seem to be closer to 3-5 GB/s per core, or about 25-40 Gb/s, 6-10x the stated UDP throughput.
[1] https://calomel.org/aesni_ssl_performance.html
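For a rough sense of how those numbers look today, here's a minimal single-core AES-128-GCM sketch (assuming the Rust ring crate; the linked benchmark used OpenSSL, so absolute numbers will differ, but the order of magnitude is the point). Running it over a few payload sizes also makes the small-payload effect discussed in the replies visible:

    // Rough single-core AES-128-GCM throughput check (sketch, `ring` crate).
    use ring::aead::{Aad, LessSafeKey, Nonce, UnboundKey, AES_128_GCM};
    use std::time::Instant;

    fn main() {
        let key = LessSafeKey::new(UnboundKey::new(&AES_128_GCM, &[0u8; 16]).unwrap());
        for &size in &[500usize, 1500, 16384] {
            let mut buf = vec![0u8; size];
            let iters = 200_000u32;
            let start = Instant::now();
            for _ in 0..iters {
                // Nonce reuse is fine for a benchmark, never for real traffic.
                key.seal_in_place_append_tag(
                    Nonce::assume_unique_for_key([0u8; 12]),
                    Aad::empty(),
                    &mut buf,
                )
                .unwrap();
                buf.truncate(size); // drop the appended 16-byte tag again
            }
            let secs = start.elapsed().as_secs_f64();
            println!(
                "{:>5} B payload: {:>6.0} ns/op, {:>6.0} MB/s",
                size,
                secs * 1e9 / iters as f64,
                (size as f64 * iters as f64) / secs / 1e6
            );
        }
    }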
> which seems too high for a system call anyways
spectre & meltdown.
> you get 2 us of processing in addition to a full system call to reach 4 Gb/s
TCP has route binding, UDP does not (connect(2) helps one side, but not both sides).
> “But encryption is slow.” Not that slow.
Encryption _is slow_ for small PDUs, at least the common constructions we're currently using. Everyone's essentially been optimizing for and benchmarking TCP with large frames.
If you hot-loop the state as the micro-benchmarks do, you can do better, but you still see a very visible cost of state setup that only starts to amortize decently well above 1024-byte payloads. Eradicate a bunch of cache efficiency by removing the tightness of the loop, and this amortization boundary shifts quite far to the right, up into tens of kilobytes.
---
All of the above, plus the additional framing overheads, come into play. Hell, even the OOB data blocks are quite expensive to actually validate; it's not a good API for fixing this problem, it's just the API we have, shoved over BSD sockets.
And we haven't even gotten to buffer constraints and contention yet, but the default UDP buffer memory available on most systems is woefully inadequate for these use cases today. TCP buffers were scaled up over time, but UDP buffers basically never were; they're still conservative values from the late 90s/00s, really.
The API we really need for this kind of UDP setup is one where you can do something like fork the fd, connect(2) it with a full route bind, and then fix the RSS/XPS challenges that come from this splitting. After that we need a submission queue API rather than another BSD-sockets ioctl-style mess (uring, RIO, etc). Sadly none of this is portable.
On the crypto side there are KDF approaches which can remove a lot of the state cost involved. It's not popular, but some vendors are very taken with PSP for this reason - though PSP becoming more well known or used was largely suppressed by its various rejections in the IETF and in Linux. Vendors doing scale tests with it have clear numbers, though: under high concurrency you can scale this much better than the common TLS or TLS-like constructions.
> spectre & meltdown.
I just measured. On my Ryzen 7 9700X, with Linux 6.12, it's about 50ns to call syscall(__NR_gettimeofday). Even post-spectre, entering the kernel isn't so expensive.
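A minimal way to reproduce that kind of measurement (a sketch assuming Linux, x86-64, and the Rust libc crate; the call goes through syscall() so the vDSO fast path can't serve it):

    // Time a raw gettimeofday syscall, bypassing the vDSO.
    use std::time::Instant;

    fn main() {
        let mut tv = libc::timeval { tv_sec: 0, tv_usec: 0 };
        let iters: u64 = 10_000_000;

        let start = Instant::now();
        for _ in 0..iters {
            // syscall() always enters the kernel; the libc/vDSO fast path is skipped.
            unsafe {
                libc::syscall(
                    libc::SYS_gettimeofday,
                    &mut tv as *mut libc::timeval,
                    std::ptr::null_mut::<libc::c_void>(),
                );
            }
        }
        let ns = start.elapsed().as_nanos() as u64;
        println!("{} ns per syscall", ns / iters);
    }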
Are you sure that system call actually enters kernel mode? It might be one of the special ones where the kernel serves it from user space; I forget their name.
Those are only served from userspace if you call the libc wrappers. The syscall() function bypasses the wrappers.
VDSO
If it isn't a vDSO call, I think the 50ns figure shouldn't be possible.
No need to guess, it's 10 lines of code. And you can use bpftrace to watch the test program enter the kernel.
Using the libc wrapper will use the vdso. Using syscall() will enter the kernel.
I haven't measured, but calling the vdso should be closer to 5ns.
Someone else did more detailed measurements here:
https://arkanis.de/weblog/2017-01-05-measurements-of-system-...
50ns on a 3GHz CPU core is ~150 cycles. Pushing registers to the L1 cache and popping them back is 5-10 cycles each. With 16 general-purpose registers to handle on x86-64, this is already close to, or even more than, 150 cycles, no?
When you measure, what numbers do you get?
Also: register renaming is a thing, as is write combining and pipelining. You're not flushing to L1 synchronously for every register, or ordinary userspace function calls would regularly take hundreds of cycles for handling saved registers. They don't.
I'm on my mobile. Store-to-L1 width is typically 32B, and you're probably right that the CPU will take advantage of it and pack as many registers as it can. This still means 4x store and 4x load for 16 registers, which is ~40 cycles. So 100 cycles for the rest? Still feels minimal.
A modern x86 processor has about 200 physical registers that get mapped to the 16 architectural registers, with similar for floating point registers. It's unlikely that anything is getting written to cache. Additionally, any writes, absent explicit synchronization or dependencies, will be pipelined.
It's easy to measure how long it takes to push and pop all registers, as well as writing a moderate number of entries to the stack. It's very cheap.
As far as switching into the kernel -- the syscall instruction is more or less just setting a few permission bits and acting as a speculation barrier; there's no reason for that to be expensive. I don't have information on the cost in isolation, but it's entirely unsurprising to me that the majority of the cost is in shuffling around registers. (The post-spectre TLB flush has a cost, but ASIDs mitigate the cost, and measuring the time spent entering and exiting the kernel wouldn't show it even if ASIDs weren't in use)
Where is the state/registers written to then if not L1? I'm confused.
What do you say about the measurements from https://gms.tf/on-the-costs-of-syscalls.html? The table suggests that the cost is an order of magnitude larger, depending on the CPU, from 250 to 620ns.
The architectural registers can be renamed to physical registers. https://en.wikipedia.org/wiki/Register_renaming
As far as that article, it's interesting that the numbers vary between 76 and 560 ns; the benchmark itself has an order of magnitude variation. It also doesn't say what syscall is being done -- __NR_clock_gettime is very cheap, but, for example, __NR_sched_yield will be relatively expensive.
That makes me suspect something else is up in that benchmark.
For what it's worth, here's some more evidence that touching the stack with easily pipelined/parallelized MOV is very cheap. 100 million calls to this assembly costs 200ms, or about 2ns/call:
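(The assembly itself isn't reproduced here; the following is a rough Rust stand-in for the same idea: write 16 values to the stack and read them back per call, timed over 100 million calls. A std-only sketch, not the original code.)

    use std::hint::black_box;
    use std::time::Instant;

    #[inline(never)]
    fn touch_stack(seed: u64) -> u64 {
        // 16 stack slots, roughly the amount of state a register save/restore moves.
        let mut slots = [0u64; 16];
        for (i, s) in slots.iter_mut().enumerate() {
            *s = seed.wrapping_add(i as u64);
        }
        black_box(&slots); // force the stores to actually happen
        slots.iter().copied().sum()
    }

    fn main() {
        let iters: u64 = 100_000_000;
        let mut acc = 0u64;
        let start = Instant::now();
        for i in 0..iters {
            acc = acc.wrapping_add(touch_stack(black_box(i)));
        }
        let elapsed = start.elapsed();
        println!(
            "{:.1} ns/call (checksum {})",
            elapsed.as_nanos() as f64 / iters as f64,
            acc
        );
    }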
The benchmark is simple, but I find it worthwhile because (1) it is run across 15 different platforms (different CPUs, libc's) and the results are pretty much reproducible, and (2) it is run through gbenchmark, which has a mechanism to make the measurements statistically significant.
An interesting thing that reinforces their hypothesis, and the measurements, is that, for example, getpid and clock_gettime_mono_raw run much faster on some platforms (vDSO) than on the rest.
Also, the variance between different CPUs is IMO what reinforces their results, not the other way around - I don't expect the same call to have the same cost on different CPU models. Different CPUs, different cores, different clock frequencies, different tradeoffs in design, etc.
The code is here: https://github.com/gsauthof/osjitter/blob/master/bench_sysca...
The syscall() row invokes a simple syscall(423), and it seems to be expensive. Other calls such as close(999), getpid(), getuid(), clock_gettime(CLOCK_MONOTONIC_RAW, &ts), and sched_yield() produce similar results. All of them are basically an order of magnitude larger than 50ns.
As for register renaming, I know what it is, but I still don't get what it has to do with making the storage of the state (registers) a cheaper operation.
This is from the Intel manual:
> Instructions following a SYSCALL may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSCALL have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible).
So, I wrongly assumed that the core has to wait before the data is completely written, but it seems it acts more like a memory barrier with relaxed properties - instructions are serialized, but the data written doesn't have to become globally visible.
I think the most important aspect of it is "until all instructions prior to the SYSCALL have completed". This means that the whole pipeline has to be drained. With a 20+ deep instruction pipeline, and whatnot instructions in it, I can imagine that this can likely become the most expensive part of the syscall.
I think you are just agreeing with me?
You are basically saying: “It is slow because of all these system/protocol decisions that mismatch what you need to get high performance out of the primitives.”
Which is my point. They are leaving, by my estimation, 10-20x performance on the floor due to external factors. They might be “fast given that they are bottlenecked by low performance systems”, which is good as their piece is not the bottleneck, but they are not objectively “fast” as the primitives can be configured to solve a substantially similar problem dramatically faster if integrated correctly.
> I think you are just agreeing with me?
sure, i mean i have no goal of alignment or misalignment, i'm just trying to provide more insights into what's going on based on my observations of this from having also worked on this udp path.
> Which is my point. They are leaving, by my estimation, 10-20x performance on the floor due to external factors. They might be “fast given that they are bottlenecked by low performance systems”, which is good as their piece is not the bottleneck, but they are not objectively “fast” as the primitives can be configured to solve a substantially similar problem dramatically faster if integrated correctly.
yes, though this basically means we're talking about throwing out chunks of the os, the crypto design, the protocol, and a whole lot of tuning at each layer.
the only vendor in a good position to do this is apple (being the only vendor that owns every involved layer in a single product chain), and they're failing to do so as well.
the alternative is a long old road, where folks make articles like this from time to time, we share our experiences and hope that someone is inspired enough reading it to be sniped into making incremental progress. it'd be truly fantastic if we sniped a group with the vigor and drive that the mptcp folks seem to have, as they've managed to do an unusually broad and deep push across a similar set of layered challenges (though still in progress).
There is no indication of what class of CPU they're benchmarking on. Additionally, this presumably includes the overhead of managing the QUIC protocol as well, given they mention encryption, which isn't relevant for raw UDP. And QUIC is known not to have a good story for NIC encryption offload at the moment, the way you can do kTLS offload for TCP streams.
Encryption is unlikely to be relevant. As I pointed out, doing it on any modern desktop CPU with no offload gets you 25-40 Gb/s, 6-10x faster than the benchmarked throughput. It is not the bottleneck unless it is being done horribly wrong or they do not have access to AES instructions.
“It is slow because it is being layered over QUIC.” Then why did you layer over a bottleneck that slows you down by 25x? Second, they did not use to do that, and they still only got 1 Gb/s previously, which is abysmal.
Third of all, you can achieve QUIC feature parity (minus encryption which will be your per-core bottleneck) at 50-100 Gb/s per core, so even that is just a function of using a slow protocol.
Finally, the CPU class used in benchmarking is largely irrelevant because I am discussing 20x per-core performance bottlenecks. You would need to be benchmarking on a desktop CPU from 25 years ago to get that degree of single-core performance difference. We are talking iPhone 6, a decade-old phone, territory for an efficient implementation to bottleneck on the processor at just 4 Gb/s.
But again, it is probably not a problem with their code. It is likely something else stupid happening on the network stack or protocol side of which they are merely a client.
It’s crazy that sendmmsg/recvmmsg are considered “modern”… i mean, they’ve been around for quite a while.
I was expecting to see io_uring mentioned somewhere in the linux section of the article.
io_uring doesn't really have an equivalent[1]; it can't batch multiple UDP datagrams into a single operation, the best it can do is batch multiple sendmsgs and recvmsgs. GSO/GRO is the way to go. sendmmsg/recvmmsg are indeed very old, and some kernel devs wish they could sunset them :)
1: https://github.com/axboe/liburing/discussions/1346
Will ZCRX help here? I’m not sure it supports UDP. It should provide great speed-ups but it requires hardware support which is very scarce for now.
I really liked this. All Mozilla content should be like this. Technical content written by literate engineers. No alegria.
> After many hours of back and forth with the reporter, luckily a Mozilla employee as well, I ended up buying the exact same laptop, same color, in a desperate attempt to reproduce the issue.
Glad to know that networking still produces insanity trying to reproduce issues à la https://xkcd.com/2259/.
For that matter, the "The map download struggle, part 2 (Technical)" section at https://www.factorio.com/blog/post/fff-176 (end of the document) is a fun read.
Factorio's dev blog is a great deal of fun. It's on pause at the moment after the release of 2.0, but if you go through the archives there's great stuff in there. A lot of it is about optimizations which only matter once you're building 10,000+ SPM gigafactories, which casual players will never even come close to, but since crazy excess is practically what defines hardcore Factorio players it's cool to see the devs putting in the work to make the experience shine for their most devoted fans.
This is how I find out there's a 2.0 Factorio? What am I doing with my life??
Not only that, there's also a DLC with 4 new planets.
Well there goes the rest of the year...
Be careful, some of these new planets can spoil the fun.
Oh? Tell me more.
Each planet has its own gimmick which throws a spanner into standard builds in its own unique way. One planet is essentially a farm where your factory grows and processes fruits, which rot and spoil if they aren't processed immediately, so you need to design a factory which processes small packets at high speed without any buffering.
That's what I asked after downloading it.
Could be related to UDP checksum offload.
0x0000 is a special value for some NICs meaning "please calculate this for me".
One NIC years ago would set 0xFFFF for a bad checksum. At first we thought this was horrifyingly broken. But really you can just fall back to software verification for the handful of packets, legitimate or bad, that arrive with that checksum.
It is funnier if you've ever dealt with mystery packet runts, as most network appliances still do not handle them very cleanly.
UDP/QUIC can DoS any system not backed by a cloud deployment large enough to soak up the peak traffic. It is silly, but it pushes out any hosting operation that can't reach a disproportionate bandwidth asymmetry with the client traffic, i.e. fine for FAANG, but a death knell for most other small/medium organizations.
This is why many LANs still drop most UDP traffic, and rate-limit the parts needed for normal traffic. Have a nice day =3
Why are they supporting Android 5? It’s over 10 years old, and the devices still running it are even older. Mobile devices from that era must have a real tough time browsing the modern bloated web. It shouldn’t even be possible to publish to the Play Store when targeting such an old API level. Who is the user base? Hackers who refurbished their old OnePlus, run it with the charger always plugged in, didn’t upgrade to a newer LineageOS, and installed an alternative app store, just for the sake of it? While novel, it’s a steep price to pay; as we see here, it slows down development for the rest of us.
Note that I (author) made a mistake. We (Mozilla) recently raised the minimum Android version off of 5. See https://blog.mozilla.org/futurereleases/2025/09/15/raising-t... for details.
https://bugzilla.mozilla.org/show_bug.cgi?id=1979683
Still seeing this in Firefox with Cloudflare-hosted sites on both macOS and Fedora.
Interesting, I was not aware of the GSO/GRO equivalents on Windows and macOS, though it's unfortunate that they seem buggy.
I wonder why Microsoft and Apple do not care about the proper functioning of their network stacks.
Pretty sure GSO/GRO aren't the only buggy parts either.
Can someone explain how UDP GSO/GRO works in detail? Since UDP packets can arrive out-of-order, how can a single large QUIC packet be split into multiple smaller UDP packets without any header sequence number, and how does the receiving side know the order of the UDP packets to merge?
Author here.
QUIC does not depend on UDP datagrams to be delivered in order. Re-ordering happens on the QUIC layer. Thus, when receiving, the kernel passes a batch (i.e. segmented super datagram) of potentially out-of-order datagrams to the QUIC layer. QUIC reorders them.
Maybe https://blog.cloudflare.com/accelerating-udp-packet-transmis... brings some clarity.
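For the sender side that article describes, a minimal sketch of how an application opts in (assuming Linux and the Rust libc crate; the SOL_UDP and UDP_SEGMENT values come from the kernel UAPI headers):

    // With UDP_SEGMENT set on the socket, one send() of a large contiguous
    // buffer is split by the kernel (or the NIC) into wire-sized datagrams.
    use std::net::UdpSocket;
    use std::os::unix::io::AsRawFd;

    // From include/uapi/linux/udp.h.
    const SOL_UDP: libc::c_int = 17;
    const UDP_SEGMENT: libc::c_int = 103;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:0")?;
        sock.connect("127.0.0.1:4433")?;

        // Every segment except the last will be exactly this many bytes.
        let gso_size: libc::c_int = 1200;
        let rc = unsafe {
            libc::setsockopt(
                sock.as_raw_fd(),
                SOL_UDP,
                UDP_SEGMENT,
                &gso_size as *const _ as *const libc::c_void,
                std::mem::size_of_val(&gso_size) as libc::socklen_t,
            )
        };
        if rc != 0 {
            return Err(std::io::Error::last_os_error());
        }

        // One syscall, one 3.5 KB buffer: the kernel emits it as three
        // separate UDP datagrams of 1200 + 1200 + 1100 bytes.
        let payload = vec![0u8; 3500];
        sock.send(&payload)?;
        Ok(())
    }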
Thanks! The Cloudflare blog article explained GSO pretty well: the application must send a contiguous data buffer with a fixed segment size (except for the tail of the buffer) for GSO to split into smaller packets. But how does GRO work on the receiving side?
For example, GSO might split a 3.5KB data buffer into 4 UDP datagrams: U1, U2, U3, and U4, with U1/U2/U3 being 1KB and U4 being 512B. When U1~4 arrive on the receiving host, how does GRO deal with the different permutations of ordering of the four packets (assuming no loss) and pass them to the QUIC layer? Like, if U1/U2/U3/U4 come in the original sending order, GRO can batch nicely. But what if they come in the order U1/U4/U3/U2? How does GRO deal with the fact that U4 is shorter?
I think as an application, when receiving packets, you never really see coalesced UDP datagrams when GRO is active.
It’s more like the kernel puts multiple datagrams into a single structure and passes that around between layers, maintaining the boundaries between them in that structure (sk_buff data fragments?)
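That matches transparent GRO. As far as I can tell, an application can also opt in explicitly with the UDP_GRO socket option (which is what QUIC stacks like quinn-udp use); in that case it does receive one coalesced buffer plus a cmsg carrying the segment size. GRO never reorders anything, and only consecutive equal-sized datagrams get merged (a shorter one can only sit at the end of a batch), so a reordered or shorter datagram like U4 just ends a batch or is delivered on its own; QUIC handles the reordering. A minimal receive-side sketch, assuming Linux and the Rust libc crate, paired with the sender sketch above:

    use std::net::UdpSocket;
    use std::os::unix::io::AsRawFd;

    // From include/uapi/linux/udp.h.
    const SOL_UDP: libc::c_int = 17;
    const UDP_GRO: libc::c_int = 104;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:4433")?;
        let fd = sock.as_raw_fd();
        let on: libc::c_int = 1;
        // Opt in: the kernel may now hand us several same-sized datagrams at once.
        unsafe {
            libc::setsockopt(
                fd,
                SOL_UDP,
                UDP_GRO,
                &on as *const _ as *const libc::c_void,
                std::mem::size_of_val(&on) as libc::socklen_t,
            );
        }

        let mut buf = vec![0u8; 65536];
        let mut ctrl = [0u8; 128];
        let mut iov = libc::iovec {
            iov_base: buf.as_mut_ptr() as *mut libc::c_void,
            iov_len: buf.len(),
        };
        let mut msg: libc::msghdr = unsafe { std::mem::zeroed() };
        msg.msg_iov = &mut iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.as_mut_ptr() as *mut libc::c_void;
        msg.msg_controllen = ctrl.len();

        let n = unsafe { libc::recvmsg(fd, &mut msg, 0) };
        if n < 0 {
            return Err(std::io::Error::last_os_error());
        }

        // Default segment size if no coalescing happened: the whole datagram.
        let mut seg_size = n as usize;
        let mut cmsg = unsafe { libc::CMSG_FIRSTHDR(&msg) };
        while !cmsg.is_null() {
            let hdr = unsafe { &*cmsg };
            if hdr.cmsg_level == SOL_UDP && hdr.cmsg_type == UDP_GRO {
                // The kernel reports the original on-the-wire segment size as an int.
                let v = unsafe {
                    std::ptr::read_unaligned(libc::CMSG_DATA(cmsg) as *const libc::c_int)
                };
                seg_size = v as usize;
            }
            cmsg = unsafe { libc::CMSG_NXTHDR(&msg, cmsg) };
        }

        // Split the coalesced buffer back into individual datagrams for the QUIC layer.
        for datagram in buf[..n as usize].chunks(seg_size) {
            println!("datagram of {} bytes", datagram.len());
        }
        Ok(())
    }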
Not an expert, but I tried looking at how this works and stumbled upon [0].
[0]: https://lwn.net/Articles/768995/
> Instead of starting from scratch, we built on top of quinn-udp, the UDP I/O library of the Quinn project, a QUIC implementation in Rust. This sped up our development efforts significantly. Big thank you to the Quinn project.
Awesome, so you sponsored them right?
https://opencollective.com/quinn-rs
When I asked about financial support, the Senior Principal Software Engineer from Mozilla I talked to said "Mozilla has no money".
To be fair, we've gotten a great amount of code contributions from the Mozilla folks, so it's not like they haven't contributed anything.
(I am one of the Quinn maintainers.)
It's always interesting how these large organizations can bring in tens of millions of dollars in excess of expenses, yet still manage to "have no money"
Source: https://assets.mozilla.net/annualreport/2024/b200-mozilla-fo...
It is true, Mozilla has no money (except for paying execs)
> Awesome, so you sponsored them right?
Why bother sponsoring any open source projects when they can throw a few extra million into their CEO's salary, while that CEO is running their flagship product (Firefox) into the ground?
They contributed in other ways?
Wonder if this will lead to BitTorrent in the browser.
idk if author reads this, but
> The combination of the two did cost me a couple of days, resulting in this (basically single line) change in quinn-udp.
Two hyperlinks here probably were meant to be different, but the same link got copy-pasted into both.
Fixed. Thank you!
It's true, but since this is a Firefox project it is relevant: Rust was largely developed for years specifically for (re)writing exactly this kind of code in Firefox.
Except for, you know, the majority of Rust projects which reach the HN front page and don't, like the stories on PopOS, Redox, and the Wild linker from the past day.
> Redox
Any project whose name alludes to oxidation or crustaceans is a Rust project, so it's already in the title by default.
There's a hugely popular video game called Rust not written in Rust.
To be fair that video game was released (in early access) during Rust 0.8 - the language was already popular on HN I think, but not as a "you should use this in prod" type of thing.
In fairness, many language/framework communities often have project names that are related or tongue-in-cheek, and not just to advertise that it's language X; Python comes to mind.
This is cope. Functionally nobody remembers enough high school chemistry to remember what a redox reaction is, let alone associate that with Rust, and such a naming convention is hardly worthy of the petulant dismissal expressed by the original comment.
And while we're on the topic, more Rust projects on the HN front page that don't mention Rust in the past day were Typst and the Cloudflare thing. Turns out, there's just a ton of good Rust projects out there, to the surprise of clueless HN commenters.
How do you know someone is bothered by headline? They will write comment!
Yeah. Rust, good or bad, affords no special performance advantage for IO performance.
Not innately, no, but the kinds of optimizations they’re talking about (batching operations and avoiding copies) are certainly safer to make in a memory-safe language.
correct, stable, fast <- rust's whole deal is giving normal people a chance of building something that gets you all 3.
I touch Rust every day, but you should also mention that the priority of those three things is also in that order.
Agreed. This could have been done in C or anything else for that matter
The people who actually wrote it seem to disagree.
[flagged]
Yeah about that... https://chromium.googlesource.com/chromium/src/+/refs/heads/...
Wow! Does this mean that Firefox can re-enable self-signed certs for its HTTP/3 stack, since it's using a custom implementation and not someone else's big QUIC lib and default build flags anymore? That'd be a huge win for human people and their typical LAN use cases, even if the corporate use cases don't want it for 'security' reasons.
You can still have self-signed certs, you just have to actually set up your own CA and import it as trusted in the relevant trust store so it can be verified.
You can't just have some random router, printer, NAS, etc. generate its own cert out of thin air and tell the browser to ignore the fact that it can't be verified.
IMO this is a good thing. The way browsers handle HTTPS on older protocols is a result of the number of legacy badly configured systems there are out there which browser vendors don't want to break. Anywhere someone's supporting HTTP/3 they're doing something new, so enforcing a "do it right or don't do it at all" policy is possible.
Which also means it's impossible to host a visitable webserver for random people on HTTP/3 without the continued permission of a third party corporation. Do it "right" means "Do it for the corps' use cases only" to most people it seems.
Author here. You can find details on why we disable HTTP/3 on self-signed certs here: https://bugzilla.mozilla.org/show_bug.cgi?id=1985341#c7
Certificate verification in Firefox happens at a layer way above HTTP and TLS (for those who care, it's in PSM), so which QUIC library is used is basically not relevant.
The reason that Firefox -- and other major browsers -- make self-signed certs so difficult to use is that allowing users to override certificate checks weakens the security of HTTPS, which otherwise relies on certificates being verifiable against the trust anchor list. It's true that this makes certain cases harder, but the judgement of the browser community was that that wasn't worth the security tradeoff. In other words, it's a policy decision, not a technical one.
It's a pretty bad one, though. It massively undermines the security of connections to local devices for a slight improvement in security on the open internet. It's very frustrating how browser vendors don't even seem to consider it something worth solving, even if e.g. the way it is presented to the user is different. At the moment, if you just use plain HTTP then things do mostly work (apart from some APIs which are somewhat arbitrarily locked to 'secure contexts', which says very little about the trustworthiness of the code that does or does not have access to those APIs), but if you try to use HTTPS then you get a million 'this is really insecure' warnings. There's no 'use HTTPS but treat it like HTTP' option.
Either you really are secure, or ideally you should not be able to even pretend you are secure. Allowing "pretend it's secure" downgrades the security in all contexts.
IMHO they should gradually lock all dynamic code execution, such as dynamic CSS and JavaScript, behind an explicit toggle for insecure HTTP sites.
> It massively undermines the security of connections to local devices
No, you see the prompt, it is insecure. If the network admin wants it secure, that means either an internal CA or a literally free cert from Let's Encrypt. As the network admin did not care, it's insecure.
"but I have legacy garbage with hardcoded self-signed certs" then reverse proxy that legacy garbage with Caddy?
I'm talking about situations where you have nontechnical users that need to connect to the device, where neither the client nor the device necessarily has an internet connection, and where the connection is often via a local IP address. None of your proposed solutions are appropriate for that situation. And basically all I'm asking is that the connection be at least encrypted (meaning that eavesdropping is not enough: you need to mount a man-in-the-middle attack), even if it's not presented to the user as secure.
(An option to get some authentication, and one that I think chrome have kind of started to figure out, is to allow a PWA to connect to a local device and authenticate with its own keys. This still means you need to connect to the internet once with the client device, but at least from that point onwards it can work without internet. But then you need to have a whole other flow so that random sites can't just connect to your local devices...)
How often are you offline like that but on a network you can trust isn’t malicious? If I’m at home, my printer is more protected from eavesdropping by the WiFi password than a self-signed certificate. If I’m at the coffee shop, it’s insecure because I can’t trust the dozens of other people not to be malicious or compromised, and the answer is to clearly tell me that it’s unsafe.
You could be in any of those situations, is my point. I fail to see any situation where some encryption is worse than no encryption.
I don't think it's correct to say that browser vendors don't think it's worth solving. For instance, Martin Thomson from Mozilla has done some thinking about it. https://docs.google.com/document/u/0/d/170rFC91jqvpFrKIqG4K8....
However, it's not an entirely trivial problem to get right, especially because of how deeply the scheme is tied into the Web security model. Your example here is a good one of what I'm talking about:
> At the moment if you just use plain HTTP then things do mostly work (apart from some APIs which are somewhat arbitrarily locked to 'secure contexts' which means very little about the trustworthiness of the code that does or does not have access to those APIs),
You're right that being served over HTTPS doesn't make the site trustworthy, but what it does do is provide integrity for the identity of the server. So, for instance, the user might look at the URL and decide that the server is trustworthy and can be allowed to use the camera or microphone. However, if you use HTTPS but without verifying the certificate, then an attacker might in the future substitute themselves and take advantage of that camera and microphone access. Another example is when the user enters their password.
Rather than saying that browser vendors don't think this is worth solving in the abstract I would say that it's not very high on the priority list, especially because most of the ideas people have proposed don't work very well.
I'm pretty sure private PKIs are an option that is pretty straightforward to use.
Security is still a lot better because the root is communicated out of band.
I think self-signed certs should be possible on principle, but is there a reason to use HTTP/3 for LAN use cases? In low-latency situations, there's barely any advantage to using HTTP/3 over HTTP/2, and even HTTP/1.1 is good enough for most use cases (and will outperform the other options in terms of pure throughput).