QUIC is not quick enough over fast internet

(arxiv.org)

166 points | by carlos-menezes 7 hours ago

151 comments

  • cletus 4 hours ago

    At Google, I worked on a pure JS Speedtest. At the time, Ookla was still Flash-based so wouldn't work on Chromebooks. That was a problem for installers to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP) responds to various factors.

    I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow control and sequencing. QUIC makes you manage that yourself (sort of).

    Now there can be good reasons to do that. TCP congestion control is famously out-of-date with modern connection speeds, leading to newer algorithms like BBR [1], but it comes at a cost.

    But here's my biggest takeaway from all that and it's something so rarely accounted for in network testing, testing Web applications and so on: latency.

    Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating. It can take something that is completely responsive and make it utterly unusable. It slows down the bandwidth a connection can support (because of the windows) and makes it less responsive to errors and congestion control efforts (both up and down).

    I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].

    My point in bringing this up is that the overhead of QUIC may not practically matter, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective speed between the two parties.

    [1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

    [2]: https://bencane.com/simulating-network-latency-for-testing-i...
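
    For anyone who wants to try this, here is a minimal sketch of the idea (assuming a Linux test box with iproute2 and root access; "eth0", the 100ms +/- 20ms figures and the curl target are placeholders) that adds artificial delay with netem, runs a test, and always cleans up:

        import subprocess

        DEV = "eth0"  # placeholder: the interface your test traffic leaves through

        def tc(*args):
            # thin wrapper around the iproute2 "tc" command (needs root)
            subprocess.run(["tc", *args], check=True)

        try:
            # add 100ms +/- 20ms of delay to all egress traffic on DEV
            tc("qdisc", "add", "dev", DEV, "root", "netem", "delay", "100ms", "20ms")
            # stand-in for your real test suite / benchmark
            subprocess.run(["curl", "-so", "/dev/null", "https://example.com/"], check=True)
        finally:
            # remove the qdisc so the machine goes back to normal
            tc("qdisc", "del", "dev", DEV, "root")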

    • bdd8f1df777b 2 minutes ago

      As a Chinese whose latency to servers outside China often exceeds 300ms, I'm a staunch supporter of QUIC. The difference is night and day.

    • skissane 4 hours ago

      > Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace

      That’s not an inherent property of the QUIC protocol, it is just an implementation decision - one that was very necessary for QUIC to get off the ground, but now it exists, maybe it should be revisited? There is no technical obstacle to implementing QUIC in the kernel, and if the performance benefits are significant, almost surely someone is going to do it sooner or later.

      • lttlrck 3 hours ago

        For Linux that's true. But Microsoft never added SCTP to Windows; not being beholden to Microsoft and older OS must have been part of the calculus?

        • skissane 3 hours ago

          > But Microsoft never added SCTP to Windows

          Windows already has an in-kernel QUIC implementation (msquic.sys), used for SMB/CIFS and in-kernel HTTP. I don’t think it is accessible from user-space - I believe user-space code uses a separate copy of the same QUIC stack that runs in user-space (msquic.dll), but there is no reason in-principle why Microsoft couldn’t expose the kernel-mode implementation to user space

      • ants_everywhere 3 hours ago

        Is this something you could use ebpf for?

    • klabb3 3 hours ago

      I did a bunch of real-world testing of my file transfer app[1]. Went in with the expectation that QUIC would be amazing. Came out frustrated for many reasons and switched back to TCP. It's obvious in hindsight, but with TCP you say "hey kernel, send this giant buffer please", whereas UDP is packet switched! So even pushing zeroes has a massive CPU cost on most OSs and consumer hardware, from all the mode switches. Yes, there are ways around it, but no, they're not easy nor ready in my experience. Plus it limits your choice of languages/libraries/platforms.
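
      To make the mode-switch cost concrete, here is a rough send-side sketch of one of those "ways around it" (Linux-only; UDP_SEGMENT is the UDP GSO socket option, value 103 in linux/udp.h, which Python's socket module doesn't export, and the destination is a placeholder) - one large buffer handed to the kernel versus one syscall per datagram:

          import socket

          UDP_SEGMENT = 103             # from linux/udp.h; not exposed by Python's socket module
          DEST = ("192.0.2.10", 9000)   # placeholder destination
          payload = b"\x00" * 63000     # ~63 KB of zeroes

          # naive: one syscall (and one mode switch) per ~1200-byte datagram
          naive = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for off in range(0, len(payload), 1200):
              naive.sendto(payload[off:off + 1200], DEST)        # ~53 sendto() calls

          # UDP GSO: one syscall, the kernel (or NIC) slices it into 1200-byte segments
          gso = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          gso.setsockopt(socket.IPPROTO_UDP, UDP_SEGMENT, 1200)
          gso.connect(DEST)
          gso.send(payload)                                      # 1 send() call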

      (Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

      Secondly, QUIC does congestion control poorly (I was using quic-go, so mileage may vary). No tuning really helped, and TCP streams would take more bandwidth if both were present.

      Third, the APIs are weird, man. QUIC itself has multiple streams, which means it is not a drop-in replacement for TCP. However, the idea is to have HTTP/3 be drop-in replaceable at a higher level (which I can't speak to because I didn't do that). But it's worth keeping in mind if you're working at the stream level.

      In conclusion I came out pretty much defeated, but also with a newfound respect for all the optimizations and resilience of our old friend TCP. It's really an amazing piece of tech. And it's just there, for free, always provided by the OS. Even some of the main issues with TCP are not design faults but conservative/legacy defaults (buffer limits on Linux, Nagle, etc). I really just wish we could improve it instead of reinventing the wheel.

      [1]: https://payload.app/

    • pests 3 hours ago

      The Network tab in the Chrome console allows you to degrade your connection. There are presets for Slow/Fast 4G and 3G, or you can make a custom preset where you specify download and upload speeds, latency in ms, a packet loss percentage, a packet queue length, and whether to enable packet reordering.

    • reshlo 2 hours ago

      > Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating.

      When I used to (try to) play online games in NZ a few years ago, RTT to US West servers sometimes exceeded 200ms.

      • indrora an hour ago

        When I was younger, I played a lot of cs1.6 and hldm. Living in rural New Mexico, my ping times were often 150-250ms.

        DSL kills.

    • ec109685 4 hours ago

      For reasonably long downloads (so it has a chance to calibrate), why don't congestion algorithms increase the number of inflight packets to a high enough number that bandwidth is fully utilized even over high latency connections?

      It seems like it should never be the case that two parallel downloads will perform better than a single one to the same host.

      • dan-robertson 3 hours ago

        There are two places a packet can be ‘in-flight’. One is light travelling down cables (or the electrical equivalent) or in memory being processed by some hardware like a switch, and the other is sat in a buffer in some networking appliance because the downstream connection is busy (eg sending packets that are further up the queue, at a slower rate than they arrive). If you just increase bandwidth it is easy to get lots of in-flight packets in the second state which increases latency (admittedly that doesn’t matter so much for long downloads) and the chance of packet loss from overly full buffers.

        CUBIC tries to increase bandwidth until it hits packet loss, then cuts bandwidth (to drain buffers a bit) and ramps up and hangs around close to the rate that led to loss, before it tries sending at a higher rate and filling up buffers again. Cubic is very sensitive to packet loss, which makes things particularly difficult on very high bandwidth links with moderate latency as you need very low rates of (non-congestion-related) loss to get that bandwidth.
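
        That "ramp up and hang around close to the rate that led to loss" behaviour is visible directly in the window formula; a rough sketch using the constants from RFC 8312, where w_max is the window at the last loss event:

            C, BETA = 0.4, 0.7          # CUBIC constants from RFC 8312

            def cubic_window(t, w_max):
                # congestion window (in segments) t seconds after a loss at window w_max
                k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # time to climb back to w_max
                return C * (t - k) ** 3 + w_max

            w_max = 1000  # segments at the last loss event
            for t in [0, 2, 4, 6, 8, 10]:
                print(t, round(cubic_window(t, w_max)))
            # starts at ~0.7 * w_max, flattens out near w_max, then probes above it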

        BBR tries to do the thing you describe while also modelling buffers and trying to keep them empty. It goes through a cycle of sending at the estimated bandwidth, sending at a lower rate to see if buffers got full, and sending at a higher rate to see if that’s possible, and the second step can be somewhat harmful if you don’t need the advantages of BBR.

        I think the main thing that tends to prevent the thing you talk about is flow control rather than congestion control. In particular, the sender needs a sufficiently large send buffer to store all unacked data (which can be a lot due to various kinds of ack-delaying) in case it needs to resend packets, and if you need to resend some then your send buffer would need to be twice as large to keep going. On the receive side, you need big enough buffers to be able to keep filling them from the network while waiting for an earlier packet to be retransmitted.

        On a high-latency fast connection, those buffers need to be big to get full bandwidth, and that requires (a) growing a lot, which can take a lot of round-trips, and (b) being allowed by the operating system to grow big enough.
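
        To put rough numbers on "big": the buffers need to cover at least the bandwidth-delay product, and roughly twice that once you allow for retransmissions. A back-of-the-envelope sketch (the 6 MB figure is a common Linux default cap for net.ipv4.tcp_rmem):

            def bdp_bytes(bandwidth_bits_per_s, rtt_s):
                # bandwidth-delay product: bytes "in the pipe" at full utilisation
                return bandwidth_bits_per_s / 8 * rtt_s

            # e.g. a 1 Gbit/s path with 100 ms RTT
            bdp = bdp_bytes(1e9, 0.100)
            print(f"BDP: {bdp / 1e6:.1f} MB")                           # 12.5 MB
            print(f"with retransmit headroom: {2 * bdp / 1e6:.1f} MB")  # 25.0 MB

            # with a receive window capped at ~6 MB, throughput tops out well short of 1 Gbit/s
            capped_window = 6 * 1024 * 1024
            print(f"max rate at 6 MB window: {capped_window * 8 / 0.100 / 1e9:.2f} Gbit/s")  # ~0.50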

      • gmueckl 4 hours ago

        Larger windows can reduce the maximum number of simultaneous connections on the sender side.

      • Veserv 4 hours ago

        You can in theory. You just need an accurate model of your available bandwidth and enough buffering/storage to avoid stalls while you wait for acknowledgement. It is, frankly, not even that hard to do it right. But in practice many implementations are terrible, so good luck.

    • api 3 hours ago

      A major problem with TCP is that the limitations of the kernel network stack and sometimes port allocation place absurd artificial limits on the number of active connections. A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.

      • toast0 an hour ago

        > A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.

        Inbound connections? You don't need to do anything other than make sure your fd limit is high, and maybe not be IPv4-only with too many users behind the same CGNAT.
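
        For the fd-limit part, a minimal sketch (the hard limit itself usually has to be raised beforehand via ulimit/systemd; this just bumps the soft limit up to whatever the hard limit already is):

            import resource

            soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
            # raise the soft per-process fd limit up to the hard cap
            resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
            print(f"fd limit raised from {soft} to {hard}")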

        Outbound connections is harder, but hopefully you don't need millions of connections to the same destination, or if you do, hopefully they support ipv6.

        When I ran millions of connections through HAProxy (bare TCP proxy, just some peeking to determine the upstream), I had to do a bunch of work to make it scale, but not because of port limits.

  • jrpelkonen 5 hours ago

    Curl creator/maintainer Daniel Stenberg blogged about HTTP/3 in curl a few months ago: https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...

    One of the things he highlighted was the higher CPU utilization of HTTP/3, to the point where CPU can limit throughput.

    I wonder how much of this is due to the immaturity of the implementations, and how much is inherent to the way QUIC was designed?

    • dan-robertson 4 hours ago

      Two recommendations are for improving receiver-side implementations – optimising them and making them multithreaded. Those suggest some immaturity of the implementations. A third recommendation is UDP GRO, which means modifying kernels and ideally NIC hardware to group received UDP packets together in a way that reduces per-packet work (you do lots of per-group work instead of per-packet work). This already exists in TCP and there are similar things on the send side (eg TSO, GSO in Linux), and feels a bit like immaturity but maybe harder to remedy considering the potential lack of hardware capabilities. The abstract talks about the cost of how acks work in QUIC but I didn’t look into that claim.
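
      For what it's worth, Linux already exposes the receive-side coalescing to applications as a socket option; a minimal sketch of opting in (UDP_GRO is 104 in linux/udp.h and isn't exported by Python's socket module; the port and buffer sizes are placeholders):

          import socket
          import struct

          UDP_GRO = 104  # from linux/udp.h; not exposed by Python's socket module

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.setsockopt(socket.IPPROTO_UDP, UDP_GRO, 1)   # ask for coalesced reads
          sock.bind(("0.0.0.0", 4433))

          # one recvmsg() may now return several wire datagrams glued together;
          # the original segment size arrives as ancillary data
          data, ancdata, flags, addr = sock.recvmsg(65535, socket.CMSG_SPACE(4))
          seg_size = len(data)  # fallback: treat it as a single datagram
          for level, ctype, cdata in ancdata:
              if level == socket.IPPROTO_UDP and ctype == UDP_GRO:
                  (seg_size,) = struct.unpack("i", cdata[:4])  # kernel reports gso_size as a native int
          segments = [data[i:i + seg_size] for i in range(0, len(data), seg_size)]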

      Another feature you see for modern tcp-based servers is offloading tls to the hardware. I think this matters more for servers that may have many concurrent tcp streams to send. On Linux you can get this either with userspace networking or by doing ‘kernel tls’ which will offload to hardware if possible. That feature also exists for some funny stuff in Linux about breaking down a tcp stream into ‘messages’ which can be sent to different threads, though I don’t know if it allows eagerly passing some later messages when earlier packets were lost.

    • cj 4 hours ago

      I’ve always been under the impression that QUIC was designed for connections that aren’t guaranteed to be stable or fast. Like mobile networks.

      I never got the impression that it was intended to make all connections faster.

      If viewed from that perspective, the tradeoffs make sense. Although I’m no expert and encourage someone with more knowledge to correct me.

      • dan-robertson 4 hours ago

        I think that’s a pretty good impression. Lots of features for those cases:

        - better behaviour under packet loss (you don’t need to read byte n before you can see byte n+1 like in tcp)

        - better behaviour under client ip changes (which happen when switching between cellular data and wifi)

        - moving various tricks for getting good latency and throughput in the real world into user space (things like pacing, bbr) and not leaving enough unencrypted information in packets for middleware boxes to get too funky

      • therealmarv 3 hours ago

        It makes everything faster; it's an evolution of HTTP/2 in many ways. I recommend watching

        https://www.youtube.com/watch?v=cdb7M37o9sU

    • therealmarv 4 hours ago

      "immaturity of the implementations" is a funny wording here. QUIC was created because there is absolutely NO WAY that all internet hardware (including all middleware etc) out there will support a new TCP or TLS standard. So QUIC is an elegant solution to get a new transport standard on top of legacy internet hardware (on top of UDP).

      In an ideal world we would create new TCP and TLS standards and replace and/or update all internet routers and hardware everywhere worldwide, so that they could be implemented with less CPU utilization ;)

      • api 3 hours ago

        A major mistake in IP’s design was to allow middle boxes. The protocol should have had some kind of minimal header auth feature to intentionally break them. It wouldn’t have to be strong crypto, just enough to make middle boxes impractical.

        It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured with local firewalls and better software instead of middle boxes.

        The Internet would be so much simpler, faster, and more capable. Peer to peer would be trivial. Everything would just work. Protocol innovation would be possible.

        Of course tech is full of better roads not taken. We are prisoners of network effects and accidents of history freezing ugly hacks into place.

        • ocdtrekkie 8 minutes ago

          This ignores... a lot of reality. Like the fact that when IP was designed, the idea of every individual network device having to run its own firewall was impractical performance-wise, and decades later... still not really ideal.

          There's definitely some benefits to glean from a zero trust model, but putting a moat around your network still helps a lot and NAT is probably the best accidental security feature to ever exist. Half the cybersecurity problems we have are because the cloud model has normalized routing sensitive behavior out to the open Internet instead of private networks.

          My middleboxes will happily be configured to continue to block any traffic that refuses to obey them. (QUIC and ECH inclusive.)

        • johncolanduoni 2 hours ago

          Making IPv4 headers resistant to tampering wouldn't have helped with IPv6 rollout, as routers (both customer and ISP) would still need to be updated to be able to understand how to route packets with the new headers.

          • ajb an hour ago

            The GP's point is that if middle boxes couldn't rewrite the header, NAT would be impossible. And if NAT were impossible, IPv4 would have died several years ago, because NAT allowed more computers than addresses.

    • paulddraper 4 hours ago

      Those performance results surprised me too.

      His testing has CPU-bound quiche at <200MB/s and nghttp2 was >900MB/s.

      I wonder if the CPU was throttled.

      Because if HTTP 3 impl took 4x CPU that could be interesting but not necessarily a big problem if the absolute value was very low to begin with.

  • lysace 6 hours ago

    > We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart.

    Haven't read the whole paper yet, but below 600 Mbit/s is implied as being "Slow Internet" in the intro.

    • cj 4 hours ago

      In other words:

      Enable http/3 + quic between client browser <> edge and restrict edge <> origin connections to http/2 or http/1

      Cloudflare (as an example) only supports QUIC between client <> edge and doesn’t support it for connections to origin. Makes sense if the edge <> origin connection is reusable, stable, and “fast”.

      https://developers.cloudflare.com/speed/optimization/protoco...

    • dathinab 5 hours ago

      They also mainly identified a throughput reduction due to latency issues caused by ineffective/too many syscalls in how browsers implement it.

      But such a latency issue isn't majorly increasing battery usage (compared to a CPU usage issue which would make CPUs boost). Nor is it an issue for server-to-server communication.

      It basically "only" slows down high-bandwidth transmissions on end-user devices with (by 2024 standards) very high speed connections (if you take effective speeds from device to server, not the speeds you were advertised as having bought and can at best get when the server owner has a direct peering agreement with your network provider and a server in your region...).

      Doesn't mean the paper is worthless; browsers should improve their implementations, and it highlights that.

      But the title of the paper is basically 100% click bait.

      • ec109685 4 hours ago

        How is it clickbait? The title implies that QUIC isn't as fast as other protocols over fast internet connections.

        • dathinab 4 hours ago

          Because it's the QUIC _implementations in browsers_ that are not as fast as the non-QUIC implementations in browsers, on connections most people would not just call fast but very fast (in the context of browser usage), while still being definitely 100% fast enough for every browser use case today (sure, it theoretically might reduce video bitrate, that is, if it isn't already capped at a smaller rate anyway, which AFAIK it basically always is).

          So "Not Quick Enough" is plain wrong; it is fast enough.

          The definition of "Fast Internet" is misleading.

          And even "QUIC" is misleading, as it normally refers to the protocol, while the benchmarked protocol is HTTP/3 over QUIC and the issues seem to be mainly in the implementations.

    • Dylan16807 6 hours ago

      Just as important is > we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs

      It doesn't sound like there's a fundamental issue with the protocol.

    • Aurornis 6 hours ago

      Internet access is only going to become faster. Switching to a slower transport just as Gigabit internet is proliferating would be a mistake, obviously.

      • ratorx 5 hours ago

        It depends on whether it’s meaningfully slower. QUIC is pretty optimized for standard web traffic, and more specifically for high-latency networks. Most websites also don’t send enough data for throughput to be a significant issue.

        I’m not sure whether it’s possible, but could you theoretically offload large file downloads to HTTP/2 to get best of both worlds?

        • pocketarc 5 hours ago

          > could you theoretically offload large file downloads to HTTP/2

          Yes, you can! You’d have your websites on servers that support HTTP/3 and your large files on HTTP/2 servers, similar to how people put certain files on CDNs. It might well be a great solution!

        • kijin 30 minutes ago

          High-latency networks are going away, too, with Cloudflare eating the web alive and all the other major clouds adding PoPs like crazy.

      • tomxor 6 hours ago

        In terms of maximum available throughput it will obviously become greater. What's less clear is if the median and worst throughput available throughout a nation or the world will continue to become substantially greater.

        It's simply not economical enough to lay fibre and put 5G masts everywhere (5G LTE bands cover less area due to being higher frequency, and so are also limited to being deployed in areas with a high enough density to be economically justifiable).

        • nine_k 4 hours ago

          Fiber is the most economical solution, it's compact, cheap, not susceptible to electromagnetic interference from thunderstorms, not interesting for metal thieves, etc.

          Most importantly, it can be heavily over-provisioned for peanuts, so your cable is future-proof, and you will never have to dig the same trenches again.

          Copper only makes sense if you already have it.

      • jiggawatts 6 hours ago

        Here in Australia there’s talk of upgrading the National Broadband Network to 2.5 Gbps to match modern consumer Ethernet and WiFi speeds.

        I grew up with 2400 baud modems as the super fast upgrade, so talk of multiple gigabits for consumers is blowing my mind a bit.

        • Kodiack 5 hours ago

          Meanwhile here in New Zealand we can get 10 Gbps FTTH already.

          Sorry about your NBN!

          • wkat4242 5 hours ago

            Here in Spain too.

            I don't see a need for it yet though. I'm a really heavy user (IT specialist with more than a hundred devices in my networks) and I really don't need it.

            • jiggawatts 28 minutes ago

              These things are nice-to-have until they become sufficiently widespread that typical consumer applications start to require the bandwidth. That comes much later.

              E.g.: 8K 60 fps video streaming benefits from data rates up to about 1 Gbps in a noticeable way, but that's at least a decade away from mainstream availability.

        • TechDebtDevin 5 hours ago

          Is Australia's ISP infrastructure nationalized?

          • jiggawatts 5 hours ago

            It's a long story featuring nasty partisan politics, corrupt incumbents, Rupert Murdoch, and agile upstarts doing stealth rollouts at the crack of dawn.

            Basically, the old copper lines were replaced by the NBN, which is a government-owned corporation that sells wholesale networking to telcos. Essentially, the government has a monopoly, providing the last-mile fibre links. They use nested VLANs to provide layer-2 access to the consumer telcos.

            Where it got complicated was that the right-wing government was in the pocket of Rupert Murdoch, who threatened them with negative press before an upcoming election. They bent over and grabbed their ankles like the good little Christian school boys they are, and torpedoed the NBN network technology to protect the incumbent Fox cable network. Instead of fibre going to all premises, the NBN ended up with a mix of technologies, most of which don't scale to gigabit. It also took longer and cost more, despite the government responsible saying they were making these cuts to "save taxpayer money".

            Also for political reasons, they were rolling it out starting at the sparse rural areas and leaving the high-density CBD regions till last. This made it look bad, because if they spent $40K digging up the long rural dirt roads to every individual farmhouse, it obviously won't have much of a return on the taxpayer's investment... like it would have if deployed to areas with technology companies and their staff.

            Some existing smaller telcos noticed that there was a loophole in the regulation that allowed them to connect the more lucrative tech-savvy customers to their own private fibre if it's within 2km of an existing line. Companies like TPG had the entire CBD and inner suburban regions of every major city already 100% covered by this radius, so they proceeded to leapfrog the NBN and roll out their own 100 Mbps fibre-to-the-building service half a decade ahead. I saw their unmarked white vans stealthily rolling out extra fibre at like 3am to extend their coverage area before anyone in the government noticed.

            The funny part was that FttB uses VDSL2 boxes in the basement for the last 100m going up to apartments, but you can only have one per building because they use active cross-talk cancellation. So by the time the NBN eventually got around to wiring the CBD regions, they got to the apartments to discover that "oops, too late", private telcos had gotten there first!

            There were lawsuits... which the government lost. After all, they wrote the legislation, they were just mad that they hadn't actually understood it.

            Meanwhile, some other incumbent fibre providers that should have disappeared persisted like a stubborn cockroach infestation. I've just moved to an apartment serviced by OptiComm, which has 1.1 out of 5 stars on Google... which should tell you something. They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font so that during a whirlwind apartment inspection you might not notice that you're not going to be on the same high-speed Internet as the rest of the country.

            • dbaggerman 3 hours ago

              To clarify, NBN is a monopoly on the last mile infrastructure which is resold to private ISPs that sell internet services.

              The history there is that Australia used to have a government run monopoly on telephone infrastructure and services (Telecom Australia), which was later privatised (and rebranded to Telstra). The privatisation left Telstra with a monopoly on the infrastructure, but also a requirement that they resell the last mile at a reasonable rate to allow for some competition.

              So Australia already had an existing industry of ISPs that were already buying last mile access from someone else. The NBN was just a continuation of the existing status quo in that regard.

              > They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font

              Early in my career I worked for one of those smaller telcos trying to race to get services into buildings before the NBN. I left around the time they were talking about introducing an LBNCo brand (only one of the reasons I left). At the time, they weren't part of Opticomm, but did partner with them in a few locations. If the brand is still around, I guess they must have been acquired at some point.

              • jiggawatts 27 minutes ago

                I heard from several sources that what they do is give the apartment builder a paper bag of cash in exchange for the right to use their wires instead of the NBN. Then they gouge the users with higher monthly fees.

    • Fire-Dragon-DoL 6 hours ago

      That is interesting though. 1gbit is becoming more common

      • schmidtleonard 6 hours ago

        It's wild that 1gbit LAN has been "standard" for so long that the internet caught up.

        Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

        • Aurornis 5 hours ago

          > Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

          You really can’t think of any major difference between 10G Ethernet and all of those other standards that might be responsible for the price difference?

          Look at the supported lengths and cables. 10G Ethernet over copper can go an order of magnitude farther over relatively generic cables. Your USB-C or HDMI connections cannot go nearly as far and require significantly more tightly controlled cables and shielding.

          That’s the difference. It’s not easy to accomplish what they did with 10G Ethernet over copper. They used a long list of tricks to squeeze every possible dB of SNR out of those cables. You pay for it with extremely complex transceivers that require significant die area and a laundry list of complex algorithms.

          • schmidtleonard 4 hours ago

            There was a time when FFE, DFE, CTLE, and FEC could reasonably be considered an extremely complex bag of tricks by the standards of the competition. That time passed many years ago. They've been table stakes for a while in every other serial standard. Wifi is beating ethernet at the low end, ffs, and you can't tell me that air is a kinder channel. A low-end PC will ship with a dozen transceivers implementing all of these tricks sitting idle, while it'll be lucky to have a single 2.5Gbe port and you'll have to pay extra for the privilege.

            No matter, eventually USB4NET will work out of the box. The USB-IF is a clown show and they have tripped over their shoelaces every step of the way, but consumer Ethernet hasn't moved in 20 years so this horse race still has a clear favorite, lol.

          • reshlo 2 hours ago

            You explained why 10G Ethernet cables are expensive, but why should it be so expensive to put a 10G-capable port on the computer compared to the other ports?

            • kccqzy 2 hours ago

              Did you completely misunderstand OP? The 10G Ethernet cables are not expensive. In a pinch, even your Cat 5e cable is capable of 10G Ethernet albeit at a shorter distance than Cat 6 cable. Even then, it can be at least a dozen times longer than a similar USB or HDMI or DisplayPort cable.

        • jsheard 5 hours ago

          Those very fast consumer interconnects are distinguished from ethernet by very limited cable lengths though, none of them are going to push 10gbps over tens of meters nevermind a hundred. DisplayPort is up to 80gbps now but in that mode it can barely even cross 1.5m of heavily shielded copper before the signal dies.

          In a perfect world we would start using fiber in consumer products that need to move that much bandwidth, but I think the standards bodies don't trust consumers with bend radiuses and dust management so instead we keep inventing new ways to torture copper wires.

          • crote 4 hours ago

            > In a perfect world we would start using fiber in consumer products that need to move that much bandwidth

            We are already doing this. USB-C is explicitly designed to allow for cables with active electronics, including conversion to & from fiber. You could just buy an optical USB-C cable off Amazon, if you wanted to.

            • Dylan16807 3 hours ago

              When you make the cable do the conversion, you go from two expensive transceivers to six expensive transceivers. And if the cable breaks you need to throw out four of them. It's a poor replacement for direct fiber use.

          • schmidtleonard 5 hours ago

            Sure you need fiber for long runs at ultra bandwidth, but short runs are common and fiber is not a good reason for DAC to be expensive. Not within an order of magnitude of where it is.

            • Dylan16807 3 hours ago

              These days, passive cables that support ultra bandwidth are down to like .5 meters.

              For anything that wants 10Gbps lanes or less, copper is fine.

              For ultra bandwidth, going fiber-only is a tempting idea.

        • michaelt 5 hours ago

          Agree that a widespread faster ethernet is long overdue.

          But bear in mind, standards like USB4 only support very short cables. It's impressive that USB4 can offer 40 Gbps - but it can only do so on 1m cables. On the other hand, 10 gigabit ethernet claims to go 100m on CAT6A.

          • crote 4 hours ago

            USB4 does support longer distances, but those cables need active electronics to guarantee signal integrity. That's how you end up with Apple's $160 3-meter cable.

        • nijave 5 hours ago

          2.5Gbps is becoming pretty common and fairly affordable, though

          My understanding is that right around 10Gbps you start to hit limitations with the shielding/type of cable and the power needed to transmit over Ethernet.

          When I was looking to upgrade at home, I had to get expensive PoE+ injectors and splitters to power the switch in the closet (where there's no outlet) and 10Gbps SFP+ transceivers are like $10 for fiber or $40 for Ethernet. The Ethernet transceivers hit like 40-50C

          • crote 5 hours ago

            The main issue is switches, really. 5Gbps USB NICs are available for $30 on Amazon, or $20 on AliExpress. 10Gbps NICS are $60, so not exactly crazy expensive either.

            But switches haven't really kept up. A simple unmanaged 5-port or 8-port 2.5GigE isn't too bad, but anything beyond that gets tricky. 5GigE switches don't seem to exist, and you're already paying $500 for a budget-brand 10GigE switch with basic VLAN support. You want PoE? Forget it.

            The irony is that at 10Gbps fiber suddenly becomes quite attractive. A brand-new SFP+ NIC can be found for $30, with DACs only $5 (per side) and transceivers $30 or so. You can get an actually-decent switch from Mikrotik for less than $300.

            Heck, you can even get brand-new dualport SFP28 NICs for $100, or as little as $25 on Ebay! Switch-wise you can get 16 ports of 25Gbps out of a $800 Mikrotik switch: not exactly cheap, but definitely within range for a very enthusiastic homelabber.

            The only issue is that wiring your home for fiber is stupidly expensive, and you can't exactly use it to power access points either.

            • spockz 40 minutes ago

              Apparently there is the https://store.ui.com/us/en/products/us-xg-6poe from Ubiquity. It only has 4 10GbE ports but they all have PoE.

            • maccard 3 hours ago

              > The only issue is that wiring your home for fiber is stupidly expensive

              What do you mean by that? My home isn't wired for ethernet. I can buy 30m of CAT6 cable for £7, or 30m of fibre for £17. For home use, that's a decent amount of cable, and even spending £100 on cabling will likely run cables to even the biggest of houses.

              • hakfoo 27 minutes ago

                Isn't the expensive part more the assembly aspect? For Cat 6 the plugs and keystone jacks add up to a few dollars per port, and the crimper is like $20. I understand building your own fibre cables-- if you don't want to thread them through walls without the heads pre-attached, for example-- involves more sophisticated glass-fusion tools that are fairly expensive.

                A rental service might help there, or a call-in service-- the 6 hours of drilling holes and pulling fibre can be done by yourself, and once it's all cut to rough length, bring out a guy who can fuse on 10 plugs in an hour for $150.

          • cyberax 5 hours ago

            40-50C? What is the brand?

            Mine were over 90C, resulting in thermal shutdowns. I had to add an improvised heat exchanger to lower it down to ~70C: https://pics.archie.alex.net/share/U0G1yiWzShqOGXulwe1AetDjR...

          • Dylan16807 2 hours ago

            > My understanding is right around 10Gbps you start to hit limitations with the shielding/type of cable and power needed to transmit/send over Ethernet.

            If you decide you only need 50 meters, that reduces both power and cable requirements by a lot. Did we decide to ignore the easy solution in favor of stagnation?

          • akira2501 5 hours ago

            Ironically.. 2.5 Gbps is created by taking a 10GBASE-T module and effectively underclocking it. I wonder if "automatic speed selection" is around the corner with modules that automatically connect at 100Mbps to 10Gbps based on available cable quality.

            • cyberax 5 hours ago

              My 10G modules automatically drop down to 2.5G or 1G if the cable is not good enough. There's also 5G, but I have never seen it work better than 2.5G.

              • akira2501 3 hours ago

                Oh man. I've been off the IT floor for too long. Time to change my rhetoric; y'all have been around the corner for a while.

                Aging has its upsides and downsides, I guess.

        • Fire-Dragon-DoL 4 hours ago

          It passed it! Here there are offers of up to 3gbit residential (Vancouver). I had 1.5gbit for a while. Downgraded to 1gbit because while I love fast internet, right now nobody in the home uses it enough to affect 1gbit speeds.

        • Dalewyn 2 hours ago

          There is an argument to be made that gigabit ethernet is "good enough" for Joe Average.

          Gigabit ethernet is ~100MB/s transfer speed over copper wire or ~30MB/s over wireless accounting for overhead and degradation. That is more than fast enough for most people.

          10gbit is seemingly made from unicorn blood and 2.5gbit is seeing limited adoption because there simply isn't demand for them outside of enterprise who have lots of unicorn blood in their banks.

    • nine_k 4 hours ago

      Gigabit connections are widely available in urban areas. The problem is not theoretical, but definitely is pretty recent / nascent.

      • Dylan16807 4 hours ago

        A gigabit connection is just one prerequisite. The server also has to be sending very big bursts of foreground/immediate data or you're very unlikely to notice anything.

    • wkat4242 5 hours ago

      For local purposes that's certainly true. It seems that QUIC trades some throughput for faster connection establishment. I personally prefer TCP anyway.

    • nh2 5 hours ago

      In Switzerland you get 25 Gbit/s for $60/month.

      In 30 years it will be even faster. It would be silly to have to use older protocols to get line speed.

      • 77pt77 5 hours ago

        Now do the same in Germany...

    • paulddraper 4 hours ago

      > below 600 Mbit/s is implied as being "Slow Internet" in the intro

      Or rather, not "Fast Internet"

  • Tempest1981 6 hours ago

    From September:

    QUIC is not quick enough over fast internet (acm.org)

    https://news.ycombinator.com/item?id=41484991 (327 comments)

    • lysace 6 hours ago

      My personal takeaway from that: Perhaps we shouldn't let Google design and more or less unilaterally dictate and enforce internet protocol usage via Chromium.

      Brave/Vivaldi/Opera/etc: You should make a conscious choice.

      • ratorx 5 hours ago

        Having read through that thread, most of the (top) comments are somewhat related to the lacking performance of the UDP/QUIC stack and thoughts on the meaningfulness of the speeds in the test. There is a single comment suggesting HTTP/2 was rushed (because server push was later deprecated).

        QUIC is also acknowledged as being quite different from the Google version, and incorporating input from many different people.

        Could you expand on why this seems like evidence of Google unilaterally dictating bad standards? None of the changes in the protocols seem objectively wrong (except possibly Server Push).

        Disclaimer: Work at Google on networking, but unrelated to QUIC and other protocol level stuff.

        • lysace 5 hours ago

          > Could you expand more on why this seems like evidence that Google unilaterally dictating bad standards?

          I guess I'm just generally disgusted in the way Google is poisoning the web in the worst way possible: By pushing ever more complex standards. Imagine the complexity of the web stack in 2050 if we continue to let Google run things. It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

          In short: it's not you, it's your manager's manager's manager's manager's strategy that is messed up.

          • ratorx 5 hours ago

            This is making a pretty big assumption that the web is perfectly fine the way it is and never needs to change.

            In reality, there are perfectly valid reasons that motivate QUIC and HTTP/2 and I don’t think there is a reasonable argument that they are objectively bad. Now, for your personal use case, it might not be worth it, but that’s a different argument. The standards are built for the majority.

            All systems have tradeoffs. Increased complexity is undesirable, but whether it is bad or not depends on the benefits. Just blanket making a statement that increasing complexity is bad, and the runaway effects of that in 2050 would be worse does not seem particularly useful.

            • lysace 4 hours ago

              Nothing is perfect. But gigantic big bang changes (like from HTTP 1.1 to 2.0) enforced by a browser mono culture and a dominant company with several thousands of individually well-meaning Chromium software engineers like yourself - yeah, pretty sure that's bad.

              • jsnell 4 hours ago

                Except that HTTP/1.1 to HTTP/2 was not a big bang change on the ecosystem level. No server or browser was forced to implement HTTP/2 to remain interoperable[0]. I bet you can't point any of this "enforcement" you claim happened. If other browser implemented HTTP/2, it was because they thought that the benefits of H2 outweighed any downsides.

                [0] There are non-browser protocols that are based on H2 only, but since your complaint was explicitly about browsers, I know that's not what you had in mind.

                • lysace 4 hours ago

                  You are missing the entire point: Complexity.

                  It's not your fault, in case you were working on this. It was likely the result a strategy thing being decided at Google/Alphabet exec level.

                  Several thousand very competent C++ software engineers don't come cheap.

                  • jsnell 3 hours ago

                    I mean, the reason I was discussing those specific aspects is that you're the one brought them up. You made the claim about how HTTP/2 was a "big bang" change. You're the one who made the claim that HTTP/2 was enforced on the ecosystem by Google.

                    And it seems that you can't support either of those claims in any way. In fact, you're just pretending that you never made those comments at all, and have once again pivoted to a new grievance.

                    But the new grievance is equally nonsensical. HTTP/2 is not particularly complex, and nobody on either the server or browser side was forced to implement it. Only those who thought the minimal complexity was worth it needed to do it. Everyone else remained fully interoperable.

                    I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and are "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

                    • lysace 3 hours ago

                      Edit: this whole comment is incorrect. I was really thinking about HTTP 3.0, not 2.0.

                      HTTP/2 is not "particularly complex?" Come on! Do remember where we started.

                      > I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and are "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

                      "Such minor amounts of complexity". Ahem.

                      I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit. I do believe it benefitted Google.

                      • jsnell 3 hours ago

                        "We" started from you making outlandish claims about HTTP/2 and immediately pivoting to a new complaint when rebutted rather than admit you were wrong.

                        Yes, HTTP/2 is not really complex as far as these things go. You just keep making that assertion as if it were self-evident, but it isn't. Like, can you maybe just name the parts you think are unnecessarily complex? And then we can discuss just how complex they really are, and what the benefits are.

                        (Like, sure, having header compression is more complicated than not having it. But it's also an amazingly beneficial tradeoff, so it can't be what you had in mind.)

                        > I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit.

                        So why did Firefox implement it? Safari? Basically all the production-level web servers? Google didn't force them to do it. The developers of all of that software had agency, evaluated the tradeoffs, and decided it was worth implementing. What makes you a better judge of the tradeoffs than all of these non-Google entities?

                        • lysace 2 hours ago

                          Yeah, sorry, I mixed up 2.0 (the one that still uses TCP) with 3.0. Sorry for wasting your time.

          • bawolff 5 hours ago

            > It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

            It literally is not.

            • lysace 5 hours ago

              Because?

              Edit: I'm not the first person to make this comparison. Witness the Chrome section in this article:

              https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

              • ratorx 4 hours ago

                Contributing to an open standard seems to be the opposite of the classic example.

                Assume that change X for the web is positive overall. Currently Google’s strategy is to implement in Chrome and collect data on usefulness, then propose a standard and have other people contribute to it.

                That approach seems pretty optimal. How else would you do it?

                • lysace 4 hours ago

                  When was the last time the Chromium team fit in a single hotel? How many are you now? 3k? certainly not 4k browser engineers?

                  • ratorx 4 hours ago

                    How does this have any relevance to my comment?

                    • lysace 4 hours ago

                      How does your comment have any relevance to what we are discussing throughout this thread?

              • bawolff 4 hours ago

                While it may be possible to make that comparison for other things Google does (they have done a lot of things), it makes no sense for QUIC/HTTP3.

                What are they extending in this analogy? HTTP/3 is not an extension of HTTP. What are they extinguishing? There is no plan to get rid of HTTP/1/2, since you still need them on lots of networks that don't allow UDP.

                Additionally, it's an open standard, with an RFC, and multiple competing implementations (including Firefox, and I believe experimental support in Safari). The entire point of embrace, extend, extinguish is that the extension is not well specified, making it difficult for competitors to implement. That is simply not what is happening here.

                • lysace 4 hours ago

                  What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium:

                  They have several thousand C++ browser engineers (and as many web standards people as they could get their hands on, early on). Combined with a dominant browser market share, this has let them dominate browser standards, and even internet protocols. They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla. It's quite clever.

                  • Dylan16807 2 hours ago

                    > What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium

                    I think this argument is reasonable, but QUIC isn't part of the problem.

                  • jauntywundrkind 3 hours ago

                    Microsoft just did shit, whatever they wanted. Google has worked with all the w3c committees and other browsers with tireless commitment to participation, with endless review.

                    It's such a tired sad trope of people disaffected with the web because they can't implement it by themselves easily. I'm so exhausted by this anti-progress terrorism; the world's shared hypermedia should be rich and capable.

                    We also see lots of strong progress these days from newcomers like Ladybird, and Servo seems gearing up to be more browser like.

                    • lysace 3 hours ago

                      Yes, Google found the loophole: brute-force standards complexity by hiring thousands of very competent engineers eager to leave their mark on the web and eager to get promoted. The only thing they needed was lots of money, and they had just that.

                      I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

      • GuB-42 5 hours ago

        Maybe, but QUIC is not bad as a protocol. The problem here is that OSes are not as well optimized for QUIC as they are for TCP. Just give it time, the paper even has suggestions.

        QUIC has some debatable properties, like mandatory encryption, or the use of UDP instead of being a protocol under IP like TCP, but there are good reasons for it, related to ossification.

        Yes, Google pushed for it, but I think it deserves its approval as a standard. It is not perfect but it is practical, they don't want another IPv6 situation.

      • vlovich123 6 hours ago

        So because the Linux kernel isn’t as optimized for QUIC as it has been for TCP we shouldn’t design new protocols? Or it should be restricted to academics that had tried and failed for decades and would have had all the same problems even if they succeeded? And all of this only in a data center environment really and less about the general internet Quic was designed for?

        This is an interesting hot take.

        • lysace 6 hours ago

          I'm struggling to see how my comment can be parsed the way you seem to have parsed it. In what way did or would my comment restrict your ability to design new protocols? Please explain.

          • vlovich123 3 hours ago

            Because you imply in that comment that it should be someone other than Google developing new protocols while in another you say that the protocols are already too complex implying stasis is the preferred state.

            You’re also factually incorrect in a number of ways such as claiming that HTTP/2 was a Google project (it’s not and some of the poorly thought out ideas like push didn’t come from Google).

            The fact of the matter is that other attempts at “next gen” protocols had taken place. Google is the only one that won out. Part of it is because they were one of the few properties that controlled enough web traffic to try something. Another is that they explicitly learned from mistakes that the academics had been doing and taken market effects into account (ie not requiring SW updates of middleware boxes). I’d say all things considered Internet connectivity is better that QUIC got standardized. Papers like this simply point to current inefficiencies of today’s implementation - those can be fixed. These aren’t intractable design flaws of the protocol itself.

            But you seem to really hate Google as a starting point so that seems to color your opinion of anything they produce rather than engaging with the technical material in good faith.

            • lysace 2 hours ago

              I don't hate Google. I admire it for what it is; an extremely efficient and inherently scalable corporate structure designed to exploit the Internet and the web in the most brutal and profitable way imaginable.

              It's just that their interests in certain aspects don't align with ours.

  • kachapopopow 6 hours ago

    This sounds really, really wrong. I've achieved 900mbps speeds on quic+http3 and on just quic... Seems like a bad TLS implementation? An early implementation that's not efficient? The CPU usage seemed pretty average at around 5% on gen 2 EPYC cores.

    • kachapopopow 5 hours ago

      This is actually very well known: the current QUIC implementations in browsers are *not stable* and are built either on rustls or in other similarly hacky ways.

      • AlienRobot 31 minutes ago

        Why am I beta testing unstable software?

  • spott 6 hours ago

    Here “fast internet” is 500Mbps, and the reason is that quic seems to be cpu bound above that.

    I didn’t look closely enough to see what their test system was to see if this is basic consumer systems or is still a problem for high performance desktops.

  • teleforce 2 hours ago

    Previous post on HN (326 comments - 40 days ago):

    QUIC is not quick enough over fast internet:

    https://news.ycombinator.com/item?id=41484991

  • p1necone 6 hours ago

    I thought QUIC was optimized for latency - loading lots of little things at once on webpages, and video games (which send lots of tiny little packets - low overall throughput but highly latency sensitive) and such. I'm not surprised that it falls short when overall throughput is the only thing being measured.

    I wonder if this can be optimized at the protocol level by detecting usage patterns that look like large file transfers or very high bandwidth video streaming and swapping over to something less cpu intensive.

    Or is this just a case of less hardware/OS level optimization of QUIC vs TCP because it's new?

    • zamalek 5 hours ago

      It seems that syscalls might be the culprit (ACKs occur completely inside the kernel for TCP, whereas anything UDP ACKs from userspace). I wonder if BGP could be extended for protocol development.

  • exabrial 6 hours ago

    I wish QUIC had a non-TLS mode... if I'm developing locally I really just want to see whats going over the wire sometimes and this adds a lot of un-needed friction.

    • guidedlight 6 hours ago

      QUIC reuses parts of the TLS specification (e.g. handshake, transport state, etc).

      So it can’t function without it.

    • krater23 6 hours ago

      You can add the private key of your server in wireshark and it will automatically decrypt the packets.

      • jborean93 5 hours ago

        This only works for RSA keys and, I believe, ciphers that do not have forward secrecy. QUIC uses TLS 1.3, and all the cipher suites in that protocol provide forward secrecy, so they cannot be decrypted in this way. You'll have to use a tool that provides the TLS session secrets through the SSLKEYLOGFILE format.
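
        For example, with Python (3.8+, built against OpenSSL 1.1.1+) you can write that format yourself via SSLContext.keylog_filename and point Wireshark's TLS "(Pre)-Master-Secret log filename" preference at the file; browsers and curl do the same thing when the SSLKEYLOGFILE environment variable is set. A minimal sketch:

            import socket
            import ssl

            ctx = ssl.create_default_context()
            ctx.keylog_filename = "/tmp/tls-keys.log"   # same NSS key log format Wireshark reads

            with socket.create_connection(("example.com", 443)) as raw:
                with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
                    tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
                    print(tls.recv(200))   # this traffic is now decryptable in Wireshark via the key log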

  • skybrian 6 hours ago

    Looking at Figure 5, Chrome tops out at ~500 Mbps due to CPU usage. I don't think many people care about these speeds? Perhaps not using all available bandwidth for a few speedy clients is an okay compromise for most websites? This inadvertent throttling might improve others' experiences.

    But then again, being CPU-throttled isn't great for battery life, so perhaps there's a better way.

  • 10000truths 5 hours ago

    TL;DR: Nothing that's inherent to QUIC itself, it's just that current QUIC implementations are CPU-bound because hardware GRO support has not yet matured in commodity NICs.

    But throughput was never the compelling aspect of QUIC in the first place. It was always the reduced latency. A 1-RTT handshake including key/cert exchange is nothing to scoff at, and the 2-RTT request/response cycle that HTTP/3-over-QUIC offers means that I can load a blog page from a rinky-dink server on the other side of the world in < 500 ms. Look ma, no CDN!
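
    The back-of-the-envelope arithmetic behind that, counting round trips only (assuming no 0-RTT resumption and ignoring server think time; the 250 ms RTT is a placeholder for a far-away, CDN-less server):

        rtt = 0.250  # seconds

        # TCP + TLS 1.3: 1 RTT TCP handshake + 1 RTT TLS handshake + 1 RTT request/response
        tcp_tls13 = 3 * rtt

        # QUIC: transport and crypto handshakes share the first round trip
        quic = 2 * rtt

        print(f"TCP+TLS1.3: {tcp_tls13 * 1000:.0f} ms, QUIC: {quic * 1000:.0f} ms")  # 750 ms vs 500 ms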

    • o11c 5 hours ago

      There's also the fact that TCP has an unfixable security flaw - any random middleware can inject data (without needing to block packets) and break the connection. TLS can only add Confidentiality and Integrity; it can do nothing about the missing Availability.

      • ChocolateGod 4 hours ago

        > There's also the fact that TCP has an unfixable security flaw - any random middlebox can inject data (without needing to block packets) and break the connection

        I am unsure how this is a security flaw of TCP? Any middleman could block UDP packets too and get the same effect, or modify UDP packets in an attempt to cause the receiving application to crash.

        • o11c 3 hours ago

          In order to attack UDP, you have to block all routes through which traffic might flow. This is hard; remember, the internet tries to be resilient.

          In order to attack TCP, all you have to do is spy on a single packet (very easy) to learn the sequence number, then you can inject a wrench into the cogs and the endpoints will reject all legitimate traffic from each other.

      • suprjami 5 hours ago

        What does that have to do with anything here? This post is about QUIC performance, not TCP packet injection.

        • o11c 4 hours ago

          "Accept worse performance in order to fix security problems" is a standard tradeoff.

          • suprjami 2 hours ago

            QUIC was invented to provide better performance for multiplexed HTTP/3 streams and the bufferbloat people love that it avoids middlebox protocol interference.

            QUIC has never been about "worse performance" to avoid TCP packet injection.

            Anybody who cares about TCP packet injection is using crypto (IPSec/Wireguard). If performant crypto is needed there are appliances which do it at wirespeed.

  • andsoitis 4 hours ago

    Designing for resource-constrained systems typically comes with making tradeoffs.

    Once the resource constraint is eliminated, you're no longer getting the benefit of that tradeoff but are still paying the costs.

  • AlienRobot 34 minutes ago

    Anecdote: I was having trouble accessing wordpress.org. When I started using Wordpress, I could access the documentation just fine, but then suddenly I couldn't access the website anymore. I dual boot Linux, so it wasn't Windows's fault. I could ping the site just fine. I tried three different browsers with the same issue. It's just that when I accessed the website, it would get stuck and not load at all, and sometimes pages would just stop loading mid-way.

    Today I found the solution. Disable "Experimental QUIC Protocol" in Chrome settings.

    This makes me kind of worried because I've had issues accessing wordpress.org for months. There was no indication that this was caused by QUIC; I only realized it because a QUIC-related error appeared in devtools, and only sometimes.

    I wonder what other websites are rendered inaccessible by this protocol and users have no idea what is causing it.

  • kibwen 4 hours ago

    How does it compare to HTTP/1 on similar benchmarks?

  • jvanderbot 6 hours ago

    Well, latency/bandwidth tradeoffs make sense. After bufferbloat mitigations, my throughput halved on my router. But for gaming while everyone else is streaming, it makes sense to settle for half a gigabit.

  • Thaxll 3 hours ago

    QUIC is pretty much what serious online games have been doing in the last 20 years.

  • ec109685 4 hours ago

    Meanwhile fast.com (and presumably the Netflix CDN) is still using HTTP/1.1.

    • dan-robertson 4 hours ago

      Why do you need multiplexing when you are only downloading one (video) stream? Are there any features of http/2 that would benefit the Netflix use case?

      • jeltz 3 hours ago

        QUIC handles packet loss better. But I do not think there is any benefit from HTTP2.

  • superkuh 6 hours ago

    Since QUIC was designed for fast internet as used by megacorporations like Google and Microsoft, how it performs at these scales does matter, even if it doesn't on an individual person's end.

    Without its designed-for use case, all it does is slightly help mobile platforms that don't want to hold open a TCP connection (for energy reasons) and bring in fragile "CA TLS"-only operation in an environment where cert lifetimes are trending down to single months (Apple's latest proposal, etc.).

    • dathinab 5 hours ago

      Not really - it's (mainly) designed by companies like Google to connect to all their end users.

      An internet connection becoming so fast that receiver-side processing latency dominates is, in practice, not the most relevant case. Sure, you can theoretically hit it with e.g. 5G, but in practice even 5G often won't in real-world situations. Most importantly, such a slowdown isn't necessarily bad for Google and co., as it only adds a limited amount of strain on their services, infrastructure, and the internet, and is still fast enough that most users won't care for most Google and co. use cases.

      Similarly, being slow due to receiver-side delays isn't necessarily bad enough to cause user-noticeable battery issues. One of the main causes seems to be the many user<->kernel boundary crossings, which are slow due to cache misses/evictions etc., but which also don't boost your CPU clock (one of the main ways to drain your battery, besides the screen).

      Also, as the article mentions, the main issue is suboptimal network-stack usage in browsers (including Chrome), not necessarily a fundamental issue in the protocol. Which brings us to inter-service communication at Google and co., which doesn't use any of the tested network stacks but very highly optimized ones. It really would be surprising if those stacks were slow, as there was exhaustive performance testing during the design of QUIC.

  • austin-cheney 5 hours ago

    EDITED.

    I prefer WebSockets over anything analogous to HTTP.

    Comment edited because I mentioned performance conditions. Software developers tend to make unfounded assumptions/rebuttals about performance conditions they have not tested.

    • akira2501 5 hours ago

      I'd use them more, but WebSockets are just unfortunately a little too hard to implement efficiently in a serverless environment, I wish there was a protocol that spoke to that environment's tradeoffs more effectively.

      The current crop aside from WebSockets all seem to be born from taking a butcher knife to HTTP and hacking out everything that gets in the way of time to first byte. I don't think that's likely to produce anything worthwhile.

      • austin-cheney 5 hours ago

        That is a fair point. I wrote my own implementation of WebSockets in JavaScript and learned much in doing so, but it took tremendous trial and effort to get right. Nonetheless, the result was well worth the effort. I have a means to communicate to the browser and between servers that is real time with freedom to extend and modify it at my choosing. It is unbelievably more responsive than reliance upon HTTP in any of its forms. Imagine being able to execute hundreds of end-to-end test automation scenarios in the browser in 10 seconds. I can do that, but I couldn't with HTTP.

    • bawolff 5 hours ago

      This is an insane take.

      Just to pick at one point of this craziness, you think that communicating over web sockets does not involve round trips????

    • sleepydog 4 hours ago

      QUIC is a reliable transport. It's not "fire and forget"; there is a mechanism for recovering lost data that is similar to, but slightly better than, TCP's. QUIC has the significant advantage of 0- and 1-RTT connection establishment, which can hide latency better than TCP's 3-way handshake.

      Current implementations have some disadvantages to TCP, but they are not inherent to the protocol, they just highlight the decades of work done to make TCP scale with network hardware.

      Your points seem better directed at HTTP/3 than QUIC.

    • Aurornis 5 hours ago

      > QUIC is faster than prior versions of HTTP, but its still HTTP. It will never be fast enough because its still HTTP:
      > * String headers
      > * round trips
      > * many sockets, there is additional overhead to socket creation, especially over TLS

      QUIC is a transport. HTTP can run on top of QUIC, but the way you’re equating QUIC and HTTP doesn’t make sense.

      String headers and socket opening have nothing to do with the performance issues being discussed.

      String headers aren’t even a performance issue at all. The amount of processing done for even the most excessive use of string headers is completely trivial relative to all of the other processing that goes into sending 1,000,000,000 bits per second (Gigabit) over the internet, which is the order-of-magnitude target being discussed (see the rough arithmetic below).

      I don’t think you understand what QUIC is or even the prior art in HTTP/2 that precedes these discussions of QUIC and HTTP/3.
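
      As a rough sketch of that point (every figure below is an assumption for illustration, not a measurement from the paper):

          # Back-of-envelope for "headers are trivial at gigabit rates" (all figures assumed):
          link_bps = 1_000_000_000     # 1 Gbps, the order of magnitude under discussion
          header_bytes = 500           # generous uncompressed request-header size
          parse_cost_s = 2e-6          # generous per-request header-parse cost on one core
          requests_per_s = 1_000

          bandwidth_share = header_bytes * 8 * requests_per_s / link_bps
          cpu_share = parse_cost_s * requests_per_s
          print(f"headers: {bandwidth_share:.2%} of the link, {cpu_share:.2%} of a core")
          # -> headers: 0.40% of the link, 0.20% of a core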

      • austin-cheney 5 hours ago

        > String headers aren’t even a performance issue at all.

        That is universally incorrect. String headers require parsing; strings are for humans and binary is for machines. There is always a performance overhead to string parsing, and it is relatively trivial to measure. I have performance-tested this in my own WebSocket and test-automation applications. The performance difference scales roughly logarithmically with the quantity of messages sent/received. I encourage you to run your own tests.

        • jiggawatts 4 hours ago

          Both HTTP/2 and HTTP/3 use binary protocol encoding and compressed (binary) headers. You're arguing a straw man that has little to do with reality.

    • quotemstr 5 hours ago

      > * String headers
      > * round trips
      > * many sockets, there is additional overhead to socket creation, especially over TLS
      > * UDP. Yes, in theory UDP is faster than TCP but only when you completely abandon integrity.

      Have you ever read up on the technical details of QUIC? Every single of one of your bullets reflects a misunderstanding of QUIC's design.

      • Aurornis 5 hours ago

        Honestly the entire comment is a head scratcher, from comparing QUIC to HTTP (different layers of the stack) or suggesting that string headers are a performance bottleneck.

        Websockets are useful in some cases where you need to upgrade an HTTP connection to something more. Some people learn about websockets and then try to apply them to everything, everywhere. This seems to be one of those cases.

    • FridgeSeal 5 hours ago

      QUIC isn’t HTTP, QUIC is a protocol that operates at a similar level to UDP and TCP.

      HTTP/3 is HTTP over QUIC. HTTP protocols v2 and onwards use binary headers. QUIC, by design, does 0-RTT handshakes.

      > Yes, in theory UDP is faster than TCP but only when you completely abandon integrity

      The point of QUIC, is that it enables application/userspace level reconstruction with UDP levels of performance. There’s no integrity being abandoned here: packets are free to arrive out of order, across independent sub-streams, and the protocol machinery puts them back together. QUIC also supports full bidirectional streams, so HTTP/3 also benefits from this directly. QUIC/HTTP3 also supports multiple streams per client with backpressure per substream.

      Web-sockets are a pretty limited special case, built on-top of HTTP and TCP. You literally form the http connection and then upgrade it to web-sockets, it’s still TCP underneath.

      Tl;Dr: your gripes are legitimate, but they refer to HTTP/1.1 at most, QUIC and HTTP/3 are far more sophisticated and performant protocols.

      • austin-cheney 4 hours ago

        WebSockets are not built on top of HTTP, though that is how they are commonly implemented. WebSockets are faster when HTTP is not involved. A careful reading of RFC 6455 shows it only mentions the handshake, and that its response must be a static string resembling a header in the style of RFC 2616 (HTTP) - but a single static string is not HTTP. This is easily provable if you attempt your own implementation of WebSockets.

        • deathanatos an hour ago

          … I mean, in theory someone could craft some protocol that just starts with speaking Websockets or starts with some other handshake¹, I suppose, but the overwhelming majority of the uses of websockets out there are going to be over HTTP, as that's what a browser speaks, and the client is quite probably a browser.

          > A careful reading of RFC6455 only mentions the handshake and its response must be a static string resembling a header in style of RFC2616 (HTTP), but a single static string is not HTTP.

          You're going to have to cite the paragraph, then, because that is most definitely not what RFC 6455 says. RFC 6455 says,

          > The handshake consists of an HTTP Upgrade request, along with a list of required and optional header fields.

          That's not "a single static string". You can't just say "are the first couple of bytes of the connection == SOME_STATIC", as that would not be a conforming implementation. (That would just be a custom protocol with its own custom upgrade-into-Websockets, as mentioned in the first paragraph, but if you're doing that, you might as well just ditch that and just start in Websockets.)

          ¹(i.e., I grant the RFC's "However, the design does not limit WebSocket to HTTP, and future implementations could use a simpler handshake", but making use of that to me that puts us solidly in "custom protocol" land, as conforming libraries won't interoperate.)
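
          A small sketch of that point: the Sec-WebSocket-Accept value is computed per connection from the client's key, so a conforming server can't reply with one canned byte string (the key and expected value below are the worked example from RFC 6455 itself).

              import base64
              import hashlib

              # Constant defined in RFC 6455; concatenated with the client's Sec-WebSocket-Key.
              GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

              def accept_key(client_key: str) -> str:
                  digest = hashlib.sha1((client_key + GUID).encode("ascii")).digest()
                  return base64.b64encode(digest).decode("ascii")

              # Worked example from the RFC's opening-handshake section:
              assert accept_key("dGhlIHNhbXBsZSBub25jZQ==") == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="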