Using Rust in non-Rust servers to improve performance

(github.com)

259 points | by amatheus 4 days ago

142 comments

  • jchw 5 hours ago

    Haha, I was flabbergasted to see the results of the subprocess approach, incredible. I'm guessing the memory usage being lower for that approach (versus later ones) is because a lot of the heavy lifting is being done in the subprocess which then gets entirely freed once the request is over. Neat.

    I have a couple of things I'm wondering about though:

    - Node.js is pretty good at IO-bound workloads, but I wonder if this holds up as well when comparing to e.g. Go or PHP. I have run into embarrassing situations where my RiiR adventure ended with worse performance than even PHP, which makes some sense: PHP has tons of relatively fast C modules for doing some heavy lifting like image processing, so it's not quite so clear-cut.

    - The "caveman" approach is a nice one just to show off that it still works, but it obviously has a lot of overhead just because of all of the forking and whatnot. You can do a lot better by not spawning a new process each time. Even a rudimentary approach like having requests and responses stream synchronously and spawning N workers would probably work pretty well. For computationally expensive stuff, this might be a worthwhile approach because it is so relatively simple compared to approaches that reach for native code binding.

    • tln 3 hours ago

      The native code binding was impressively simple!

      7 lines of Rust, 1 small JS change. It looks like napi-rs supports Buffer, so that JS change could easily be eliminated too.

  • eandre 5 hours ago

    Encore.ts is doing something similar for TypeScript backend frameworks, by moving most of the request/response lifecycle into Async Rust: https://encore.dev/blog/event-loops

    Disclaimer: I'm one of the maintainers

  • xyst 5 hours ago

    In my opinion, the significant drop in memory footprint is truly underrated (13 MB vs 1300 MB). If everybody cared about optimizing for efficiency and performance, the cost of computing wouldn’t be so burdensome.

    Even self-hosting on an rpi becomes viable.

    • marcosdumay 4 hours ago

      It's the result of the data-isolation-above-anything-else attitude of Javascript.

      Or, in other words, it's the unavoidable result of insisting on using a language created for the frontend to write everything else.

      You don't need to rewrite your code in Rust to get that saving. Any other language will do.

      (Personally, I'm surprised all the gains are so small. Looks like it's a very well optimized code path.)

      • smolder 2 hours ago

        I rewrote the same web API in Javascript, Rust, C#, and Java as a "bench project" at work one time. The Rust version had the smallest memory footprint by far, as well as the best performance. So, no, "any other language" [than JS] is not all the same.

        • jeroenhd an hour ago

          C# and Java are closer but not really on the level of Rust when it comes to performance. A better comparison would be with C++ or a similarly low-level language.

          In my experience, languages like Ruby and Python are slower than languages like Javascript, which are slower than languages like C#/Java, which are slower than languages like C++/Rust, which are slower than languages like C and Fortran. Assembly isn't always the fastest approach these days, but well-placed assembly can blow C out of the water too.

          The ease of use and maintainability scale in reverse in my experience, though. I wouldn't want to maintain the equivalent of a quick and dirty RoR server reimplemented in C or assembly, especially after it's grown organically for a few years. Writing Rust can be very annoying when you can't take the normal programming shortcuts because of lifetimes or the borrow checker, in a way that JIT'ed languages allow.

          Everything is a scale and faster does not necessarily mean better if the code becomes unreadable.

          • jandrewrogers 23 minutes ago

            C and Fortran are not faster than C++, and haven't been for a long time. I've used all three languages in high-performance contexts. In practice, C++ currently produces the fastest code of high-level languages.

          • Klonoar an hour ago

            I have written and worked on more than my fair share of Rust web servers, and the code is more than readable. This typically isn't the kind of Rust where you're managing lifetimes and type annotations so heavily.

        • materielle an hour ago

          I’m curious how Go stacks up against C# and Java these days.

          “Fewer language features, but a better compiler” was originally the aspirational selling point of Go.

          And even though there were some hiccups, at least 10 years ago I remember that being mainly true for typical web servers. Go programs did tend to use less memory, have fewer GC pauses (in the context of a normal API web server), and have faster startup times.

          But I know Java has put a ton of work in to catch up to Go. So I wonder if that’s still true today?

          • dartos an hour ago

            One of the big draws of go is ease of deployment. A single self contained binary is easy to package and ship, especially with containers.

            I don’t think Java has any edge when it comes to deployment.

            • jerven 25 minutes ago

              Java AOT has come a long way, and is not so rare as it used to be. Native binaries with GraalVM AOT are becoming more a common way to ship CLI tools written in JVM languages.

        • manquer an hour ago

          They are not saying every language will see the same level of improvement as Rust; they are saying that most of the improvement is available in most languages.

          Perhaps you get from 1300 MB down to 20 MB with C#, Java, or Go, and to 13 MB with Rust. The point is that Rust's design is not the reason for the bulk of the reduction.

          • acdha an hour ago

            Sure, but until people actually have real data that’s just supposition. If a Java rewrite went from 1300MB to, say, 500MB they’d have a valid point and optimizing for RAM consumption is severely contrary to mainstream Java culture.

      • btilly 2 hours ago

        Your claim makes zero sense to me. Particularly when I've personally seen similar behavior out of other languages, like Java.

        As I said in another comment, the most likely cause is that temporary garbage is not collected immediately in JavaScript, while garbage is collected immediately in Rust. See https://doc.rust-lang.org/nomicon/ownership.html for the key idea behind how Rust manages this.
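        To make that concrete, here is a small illustrative sketch (hypothetical types, not from the article): a `Drop` counter shows that Rust frees each temporary the instant it goes out of scope, whereas a tracing GC would let such garbage accumulate on the heap until a collection runs:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many `Temp` values have been freed, so we can observe *when*
// Rust reclaims them.
static DROPPED: AtomicUsize = AtomicUsize::new(0);

struct Temp(Vec<u8>);

impl Drop for Temp {
    fn drop(&mut self) {
        DROPPED.fetch_add(1, Ordering::SeqCst);
    }
}

fn handle_request() -> usize {
    // A large temporary allocated while "handling" a request...
    let scratch = Temp(vec![0u8; 1024 * 1024]);
    scratch.0.len()
    // ...and freed deterministically right here, when `scratch` goes out of
    // scope. A tracing GC would leave it on the heap until a collection runs.
}

fn main() {
    for _ in 0..100 {
        handle_request();
    }
    // Every temporary was reclaimed the moment its scope ended, so peak
    // memory stays at roughly one request's worth rather than 100.
    assert_eq!(DROPPED.load(Ordering::SeqCst), 100);
}
```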

        If you truly believe that it is somehow due to data isolation, then I would appreciate a reference to where JavaScript's design causes it to behave differently.

      • jvanderbot 3 hours ago

        "Rust" really just means "Not javascript" as a recurring pattern in these articles.

        • noirscape 3 hours ago

          It's also frankly kinda like comparing apples and oranges. JavaScript (and many of the other "bad performance" high-level languages, minus Rails; Rails is bad and should be avoided for projects as much as possible unless you have lots of legacy cruft) is heavily designed around rapid iteration. Rust, however, is very much not capable of rapid iteration: the borrow checker will fight you every step of the way, to the point where it demands constant refactors.

          Basically, the best place for Rust is one where all variables, all requirements, and all edge cases are known ahead of time, or where manual memory safety is a necessity versus accepting a minor performance hit from things like a garbage collector. This works well in some spaces (notably systems programming, embedded, and browser engines, though I wouldn't consider the latter a valid target), but web server development is probably one of the furthest places from where you'd reach for Rust.

          • hathawsh 2 hours ago

            I have often thought that programmers can actually just choose to make Rust easy by using a cyclic garbage collector such as Samsara. [1] If cyclic GC in Rust works as well as I think it can, it should be the best option for the majority of high level projects that need fast development with a trade-off of slightly lower efficiency. I suspect we'll see a "hockey stick" adoption curve once everyone figures this out.

            [1] https://github.com/chc4/samsara

            • 0cf8612b2e1e an hour ago

              I am still waiting for a scripting language to be bolted on top of Rust. Something that will silently Box all the values so the programmer does not have to think about the Rust specifics, but can still lean on all of the Rust machinery and libraries. If performance/correctness becomes a problem, the scripting layer could be replaced piecemeal with real Rust.

              • hathawsh 11 minutes ago

                Perhaps you mean to say that you're waiting for a new scripting language to be created that's designed to be "almost Rust." That could be interesting! OTOH, the bindings for existing languages have matured significantly:

                  - https://pyo3.rs/
                  - https://github.com/neon-bindings/neon
  - https://github.com/mre/rust-language-bindings

              • dartos an hour ago

                And then we would’ve come full circle.

                Beautiful

            • worik 3 minutes ago

              This is what async/await rust programmers need

              They are comfortable with runtimes

          • worik 5 minutes ago

            > the borrow checker will fight you heavily every step of the way to the point where it demands constant refactors.

            No

            Once you learn to surrender to the borrow checker it becomes friend, not foe

            You must submit

          • sophacles 2 hours ago

            I found this to be untrue after I spent a little energy learning to think about problems in rust.

            In a lot of languages you're working with a hammer and nail (metaphorically speaking), and when you move to a different language it's just a slightly different hammer and nail. Rust is a screwdriver and screw, though, and once I stopped trying to pound the screw in with the screwdriver and instead used the one to turn the other, it was a lot easier. Greenfield projects with a lot of iteration are just as fast as doing it in Python (though a bit more front-loaded rather than debugging-heavy), and working new features into existing code is the same.

          • timeon 2 hours ago

            Writing a server API and the like is not an unknown path that needs rapid prototyping.

          • echelon 2 hours ago

            > Rust is however very much not capable of rapid iteration, the borrow checker will fight you heavily every step of the way to the point where it demands constant refactors.

            Misconception.

            You will encounter the borrow checker almost never when writing backend web code in Rust. You only encounter it the first time when you're learning how to write backend code in Rust. Once you've gotten used to it, you will literally never hit it.

            Sometimes when I write super advanced endpoints that mutate global state or leverage worker threads I'll encounter it. But I'm intentionally doing stuff I could never do in Python or Javascript. Stuff like tabulating running statistics on health check information, batching up information to send to analytics services, maintaining in-memory caches that talk to other workers, etc.

            • materielle an hour ago

              To put this another way: the Rust borrow checker attempts to tie memory lifetime to stack frames.

              This tends to work well for most CRUD API servers, since you allocate “context, request, and response” data at the start of the handler function and deallocate at the end. Most helper data can also be tied to the request lifecycle. And data is mainly isolated per-request, meaning there isn’t much data sharing across multiple requests.

              This means that the borrow checker “just works”, and you probably won’t even need lifetime annotations or any special instructions for the borrow checker. It’s the idealized use case the borrow checker was designed for.

              This is also the property which most GC languages like Java, Go, and C# exploit with generational garbage collectors. The reason it “works” in Java happens to be the same reason it works in Rust.

              If your server does need some shared in-memory data, you can start by just handing out copies. If you truly need something more complicated, and we are talking about less than 10% of crud api servers here, then you need to know a thing or two about the borrow checker.

              I’m not saying to rewrite web servers in Rust, or even advocating for it as a language. I’m just pointing out that a crud api server is the idealized use case for a borrow checker.
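              As a sketch of that idealized case (hypothetical handler, not from the article): everything is owned by the request, lives for one call, and is freed when the handler returns, with no lifetime annotations anywhere:

```rust
// Hypothetical CRUD handler: all data is allocated at the start of the
// call, tied to the request lifecycle, and dropped when the handler
// returns. Nothing outlives the stack frame, so no lifetime annotations
// are needed and the borrow checker "just works".
struct Request {
    path: String,
    body: Vec<u8>,
}

struct Response {
    status: u16,
    body: Vec<u8>,
}

fn handle(req: Request) -> Response {
    // Per-request "context" allocated at the top of the handler...
    let mut audit_log: Vec<String> = Vec::new();
    audit_log.push(format!("hit {}", req.path));

    // ...used freely within the request lifecycle (here a stand-in
    // transformation that reverses the body bytes)...
    let body: Vec<u8> = req.body.iter().rev().cloned().collect();

    // ...and everything except the returned Response is freed right here.
    Response { status: 200, body }
}

fn main() {
    let resp = handle(Request {
        path: "/widgets".to_string(),
        body: vec![1, 2, 3],
    });
    assert_eq!(resp.status, 200);
    assert_eq!(resp.body, vec![3, 2, 1]);
}
```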

        • IshKebab 2 hours ago

          Not exactly. It wouldn't help if you moved your JavaScript to Python or Ruby or PHP... and anyway it's not really feasible from an FFI perspective to move it to anything other than Rust or C/C++ or maybe Zig. There's no good reason to pick C/C++ over Rust in most of these cases...

          So "Rust" means "Not JavaScript, and also a bunch of other constraints that mean that Rust is pretty much the only sensible choice."

          • marcosdumay 2 hours ago

            > It wouldn't help if you moved your JavaScript to Python or Ruby or PHP...

            Hum, no. The point is exactly that it would help a great deal if you moved to Python or Ruby or PHP.

            Of course, Rust will give you even better memory efficiency. But Javascript is a particularly bad option there, and almost anything else would be an improvement. ("Almost", because if you push it enough and move to something like MATLAB, you'll get worse results.)

            • jerf 2 hours ago

              If moving from JS to CPython would help, it might help memory consumption, because JITs generally trade memory for speed. But then you'd get slower execution, because CPython is slower than the JS engines we tend to use. PyPy might generally track JS on performance (big, BIG "it depends", because the speed profiles of JITs are crazy complicated, one of my least favorite things about them), but then you're back to trading memory for speed, so it's probably net-net a sideways move.

              Also, I don't know what Node is doing exactly, but if you take a lot of these dynamic languages and just fork them into multiple processes, which they still largely need to do to effectively use all the CPUs, you will generally see high per-process memory consumption just like Node. Any memory page that has a reference counter in it that is used by your code ends up Copied-On-Write in practice by every process in the steady state because all you need to do to end up copying the page is looking at any one reference it happens to contain in such a language. At least in my experience memory sharing gains were always minimal to effectively zero in such cases.

              • acdha 33 minutes ago

                > But then you'd get slower execution, because CPython is slower than the JS engines we tend to use

                I have not found this to be generally true. It depends heavily on whether your code is limited by pure high-level language code[1], and culture makes comparisons harder if you're not just switching languages but also abstraction models and a big stack of optimizations. In theory Java beats Python, but in practice I've seen multiple cases where a Java program was replaced by Python with whole-number-multiple improvements in performance and reductions in memory consumption, because what was really happening is that a bunch of super complicated, optimization-resistant Java framework code was being replaced with much simpler code which was easier to optimize. Node is closer to that side of Java culturally, I think in both cases because people reacted to limited language functionality by building tons of abstractions which are still there even after the languages improved; so even though it's possible to do much better, a lot of programmers are still pushing around a lot of code with 2000s-era workarounds buried in the middle.

                1. I’m thinking of someone I saw spend months trying to beat Python in Go and eking out a 10% edge because the bulk of the work devolved to stdlib C code.

            • chrisldgk 2 hours ago

              This seems a bit unfair to JavaScript. There have been a lot of optimizations made to the language and its runtimes that have made it a more than viable choice for server-side applications over the years. The JavaScript that started as a web browser client-side language is very different from the ECMAScript we have today. Depending on its usage it can also be one of the fastest, only regularly eclipsed by Rust[1]. So no, JavaScript really isn't a bad option for server-side applications at all.

              [1] https://www.techempower.com/benchmarks/#hw=ph&test=composite...

          • chipdart 2 hours ago

            > There's no good reason to pick C/C++ over Rust in most of these cases...

            What leads you to believe in that?

            • acdha 27 minutes ago

              The constant stream of CVEs caused by even experts failing to use those languages correctly on the one side, and the much better developer experience on the other. C++ isn’t horrible but it’s harder to use, harder to find good developers, and there are relatively few cases where there’s something easier to do in C++ than Rust which would warrant picking it. In most cases, it’ll be both faster and safer if you use a modern language with good tooling instead and take advantage of the easy C bindings if there’s a particular library you need.

            • jrpelkonen 2 hours ago

              I’m not a big believer in absolutes like that, but unless a person is already proficient in C or C++, or there’s an existing C++ library, etc., I find it hard to justify using those over Rust. Rust has great tooling, good cross compilation support, good quality standard library and very good 3rd party ecosystem.

              Also, it has so few footguns compared to C or C++ that even modestly experienced developers can safely use it.

            • IshKebab 6 minutes ago

              Because except in rare cases Rust can do everything C++ can do with basically the same performance profile, but it does it with modern tooling and without the security, reliability and productivity issues associated with C++'s pervasive Undefined Behaviour.

              There are some cases where C++ makes sense:

              * You have a large existing C++ codebase you need to talk to via a large API surface (C++/Rust FFI is not great)

              * You have a C++ library that's core to your project and doesn't have a good Rust alternative (i.e. Qt)

              * You don't like learning (and are therefore in completely the wrong industry!)

      • adastra22 4 hours ago

        There is no reason data isolation should cost you 100x memory usage.

        • chipdart 2 hours ago

          > There is no reason data isolation should cost you 100x memory usage.

          It really depends on what you mean by "memory usage".

          The fundamental principle of any garbage collection system is that you allocate objects on the heap at will without freeing them until you really need to, and when that time comes you rely on garbage collection strategies to free and move objects. What this means is that processes end up allocating more memory than is actually in use, simply because there is no need to free it yet. Consequently, with garbage-collected languages you configure processes with a specific memory budget. The larger the budget, the more rarely these garbage collection strategies kick in.

          I run a service written in a garbage-collected language. It barely uses more than 100MB of memory to handle a couple hundred requests per minute. The process takes over as much as 2GB of RAM before triggering generation 0 garbage collection events, which happen around 2 or 3 times per month. A simplistic critic would argue the service is wasting 10x the memory. That critic would be manifesting his ignorance, because there is absolutely nothing to gain by lowering the memory budget.

          • nicoburns an hour ago

            > That critic would be manifesting his ignorance, because there is absolutely nothing to gain by lowering the memory budget.

            Given that compute is often priced proportional to (maximum) memory usage, there is potentially a lot to be gained: dramatically cheaper hosting costs. Of course, if your hosting costs are small to begin with then this likely isn't worthwhile.

        • marcosdumay 3 hours ago

          There are plenty of reasons. They are just not intrinsic to the isolation; instead they come from complications rooted deep in the underlying system.

          If you rebuild Linux from the ground up with isolation in mind, you will be able to do it more efficiently. People are indeed in the process of rewriting it, but it's far from complete (and moving back and forth, as not every Linux dev cares about it).

          • btilly 2 hours ago

            Unless you can be concrete and specific about some of those reasons, you're just replacing handwaving with more vigorous handwaving.

            What is it specifically about JavaScript's implementation of data isolation that, in your mind, helps cause the excessive memory usage?

            • marcosdumay 2 hours ago

              Just a day or two ago, there was an article here about problems implementing a kind of read-only memory constraint that Javascript benefited from in other OSes.

              • btilly an hour ago

                I must have missed that article. Can you find it?

                Unless you can come up with a specific reference, it seems unlikely that this would explain the large memory efficiency difference. By contrast it is simple and straightforward to understand why keeping temporary garbage until garbage collection could result in tying up a lot of memory while continually running code that allocates memory and lets it go out of scope. If you search, you'll find lots of references to this happening in a variety of languages.

      • nh2 2 hours ago

        It's important to be aware that often it isn't the programming language that has the biggest effect on memory usage, but simply settings of the memory allocator and OS behaviour.

        This also means that you cannot "simply measure memory usage" (e.g. using `time` or `htop`) without already having a relatively deep understanding of the underlying mechanisms.

        Most importantly:

        libc / malloc implementation:

        glibc by default has heavy memory fragmentation, especially in multi-threaded programs. It will not return `malloc()`ed memory back to the OS when the application `free()`s it, keeping it instead for the next allocation, because that's faster. Its default settings will e.g. favour 10x increased RESident memory usage for a 2% speed gain. Some of this can be turned off in glibc using e.g. the env var `MALLOC_MMAP_THRESHOLD_=65536` -- for many applications I've looked at, this instantaneously reduced RES from 7 GiB to 1 GiB. Some other issues cannot be addressed, because the corresponding glibc tunables are bugged [1]. For jemalloc, `MALLOC_CONF=dirty_decay_ms:0,muzzy_decay_ms:0` helps to return memory to the OS immediately.

        Linux:

        Memory is generally allocated from the OS using `mmap()`, and returned using `munmap()`. But that can be a bit slow. So some applications and programming language runtimes use instead `madvise(MADV_FREE)`; this effectively returns the memory to the OS, but the OS does not actually do costly mapping table changes unless it's under memory pressure. As a result, one observes hugely increased memory usage in `time` or `htop`. [2]

        The above means that people are completely unaware of what actually eats their memory and what the actual resource usage is, easily "measuring wrong" by a factor of 10x.

        For example, I've seen people switch between Haskell and Go (both directions) because they thought the other one used less memory. It actually was just the glibc/Linux flags that made the actual difference. Nobody made the effort to really understand what's going on.

        Same thing for C++. You think without GC you have tight memory control, but in fact your memory is often not returned to the OS when the destructor is called, for the above reason.

        This also means that the numbers for Rust or JS may easily be wrong (in either direction, or both).

        So it's quite important to measure memory usage with the above malloc()-level behaviour in mind, otherwise you may just measure the wrong thing.

        [1]: https://sourceware.org/bugzilla/show_bug.cgi?id=14827

        [2]: https://downloads.haskell.org/ghc/latest/docs/users_guide/ru...

      • chipdart 2 hours ago

        > Or, in other words, it's the unavoidable result of insisting on using a language created for the frontend to write everything else.

        I don't think this is an educated take.

        The whole selling point of JavaScript on the backend has nothing to do with "frontend" things. The primary selling point is what made Node.js take over half the world: its async architecture.

        And by the way, benchmarks such as the TechEmpower Web Framework Benchmarks still feature JavaScript frameworks that outperform Rust frameworks. How do you explain that?

        • nicce 2 hours ago

          > The primary selling point is what made Node.js take over half the world: its async architecture.

          It is the availability of the developers who know the language (JavaScript) (aka cheaper available workforce).

        • runevault 2 hours ago

          Rust has had async for a while (it can be painful, but I think request/response systems like APIs don't run into most of the major footguns).

          C# has had excellent async for ASP.NET for a long time. I haven't touched Java in ages so I can't comment on the JVM ecosystem's async support. So there are other excellent options for async backends that don't have the drawbacks of Javascript.

    • echoangle 5 hours ago

      If every developer cared for optimizing efficiency and performance, development would become slower and more expensive though. People don’t write bad-performing code because it’s fun but because it’s easier. If hardware is cheap enough, it can be advantageous to quickly write slow code and get a big server instead of spending days optimizing it to save $100 on servers. When scaling up, the tradeoff has to be reconsidered of course.

      • marcos100 4 hours ago

        We all should think about optimization and performance all the time and make a conscious decision of doing or not doing it given a time constraint and what level of performance we want.

        People write bad-performing code not because it's easier, but because they don't know how to do it better or don't care.

        Repeating things like "premature optimization is the root of all evil" and "it's cheaper to get a bigger machine than dev time" is bad, because people stop caring and stop doing it, and if we never do it, it will always be a hard and time-consuming task.

        • 0cf8612b2e1e 3 hours ago

          It is even worse for widely deployed applications. To pick on some favorites, Microsoft Teams and OneDrive have lousy performance and burn up a ton of CPU. Both are deployed to tens or hundreds of millions of consumers, squandering battery life and electricity globally. Even a tiny performance improvement could lead to a fractional reduction in global energy use.

          • oriolid an hour ago

            I doubt that it would be good business for Microsoft though. The people who use them, and the people who buy them and force others to use them are two separate groups, and anyone who cares even a bit about user experience and has power to make the decision has already switched to something different. It's also the users, not Microsoft who pays for the wasted power and lost productivity.

        • toolz 3 hours ago

          Strongly disagree with this sentiment. Our jobs are typically to write software in a way that minimizes risk and best ensures the success of the project.

          How many software projects have you seen fail because it couldn't run fast enough or used too many resources? Personally, I've never seen it. I'm sure it exists, but I can't imagine it's a common occurrence. I've rewritten systems because they grew and needed perf upgrades to continue working, but this was always something the business knew, planned for and accepted as a strategy for success. The project may have been less successful if it had been written with performance in mind from the beginning.

          With that in mind, I can't think of many things less appropriate to keep in your mind as a first class concern when building software than performance and optimization. Sure, as you gain experience in your software stack you'll naturally be able to optimize, but since it will possibly never be the reason your projects fail and presumably your job is to ensure success of some project, then it follows that you should prioritize other things strongly over optimization.

          • MobiusHorizons 3 hours ago

            I see it all the time: applications that would be very usable and streamlined from a UI perspective are frustrating and painful to use because every action requires a multi-second request. So the experience is mostly reduced to staring at progress spinners.

          • noirscape 2 hours ago

            It also depends on where the code is running. To put it simply; nobody cares how much RAM the server is using, but they do care if their clientside application isn't responsive. UI being performant and responsive should have priority over everything else.

          • timeon 2 hours ago

            Sure, but it seems like a race to the bottom. Faster development will beat better quality in the market, especially in an unregulated industry like this one.

        • OtomotO 2 hours ago

          Worse even: it's super bad for the environment

          • nicce 2 hours ago

            We have Electron, and we won't get rid of it for a decade at least.

      • sampullman 4 hours ago

        I'm not so sure. I use Rust for simple web services now, when I would have used Python or JS/TS before, and the development speed isn't much different. The main draw is the language/type system/borrow checker, and reduced memory/compute usage is a nice bonus.

        • aaronblohowiak 3 hours ago

          Which framework? Do you write sync or async? I’ve AoC’d rust and really liked it but async seems a bit much.

          • dsff3f3f3f 2 hours ago

            Not the other poster but I moved from Go to Rust and the main packages I use for web services are axum, askama, serde and sqlx. Tokio and the futures crate are fleshed out enough now that I rarely run into async issues.

          • wtetzner 3 hours ago

            I have to agree, despite using it a lot, async is the worst part of Rust.

            If I had to do some of my projects over again, I'd probably just stick with synchronous Rust and thread pools.

            The concept of async isn't that bad, but its implementation in Rust feels rushed and incomplete.

            For a language that puts so much emphasis on compile time checks to avoid runtime footguns, it's way too easy to clog the async runtime with blocking calls and not realize it.

          • tayo42 3 hours ago

            If he was OK with Python's performance limitations, then Rust without async is more than enough.

      • treyd 3 hours ago

        Code is usually run many more times than it is written. It's usually worth spending a bit of extra time to do something the right way the first time, so you can avoid having to rewrite it under pressure only after costs have ballooned. This is proven time and time again, especially in places where inefficient code can be so easily identified up front.

        • manquer an hour ago

          Not all code is run enough times for that trade-off to always be justified.

          It is very hard to know if your software is going to be popular enough for costs to be a factor at all, and even if it would be, it is hard to know whether you can survive as an entity long enough for the extra delay; a competitor might ship an inferior but earlier product, or you may run out of money.

          You'd rather ship the quick and dirty version and see if there's enough demand for it to be worth the cleaner effort.

          There is no limit to that; more optimization keeps becoming a good idea as you scale. At, say, Meta or Google levels it makes sense to build your own ASICs, for example, something we wouldn't dream of doing today.

      • jarjoura 28 minutes ago

        Agreed. When a VC-backed company is in hyper-growth and barely has resources to scale up their shaky MVP tech stack so they can support 100+ million users, I doubt anyone thinks it's reasonable to give the engineers 6 months to stop and learn Rust just to rewrite already-working systems.

        Adding Rust into your build pipeline also takes planning and very careful upfront design decisions. `cargo build` works great from your command line, but you can't just throw that into any pre-existing build system and expect it to just work.

      • throwaway19972 4 hours ago

        Yea but we also write the same software over and over and over and over again. Perhaps slower, more methodical development might enable more software to be written fewer times. (Does not apply to commercially licensed software or services obviously, which is straight waste.)

        • chaxor 3 hours ago

          This is a decent point, but in many cases writing software over again can be a great thing, even when replacing some very well established software.

          The trick is getting everyone to switch over and ensuring correct security and correctness for the newer software. A good example may be openssh. It is very well established, so many will use it - but it has had some issues over the years, and due to that, it is actually _very_ difficult now to know the _correct_ way to configure it for the best, modern, performant, and _secure_ operation. There are hundreds of different options for it, almost all of them existing for 'legacy reasons' (in other words, no one should ever use them in any circumstance that requires any security).

          Then along come things like mosh or dropbear, which seem like they _may_ improve security, but still basically do the same thing as openssh, so it is unclear whether they have the same security problems and simply don't get reported due to lower use, or whether they aren't vulnerable.

          While simultaneously, things like quicssh-rs rewrite the idea but completely differently, such that it is likely far, far more secure (and importantly simpler!), but getting more eyes on it for security is still important.

          So effectively, having things like Linux move to Rust (but as the proper foundation rather than some new and untrusted entity) can be great when considering any 'rewrite' of software, not only for removing the cruft that we now know shouldn't be used due to having better solutions (enforce using only best and modern crypto or filesystems, and so on), but also to remodel the software to be more simple, cleaner, concise, and correct.

      • Havoc an hour ago

        Tempted to say it's more that learning the language takes longer than the writing part.

        From my casual dabbling in Python and Rust, they feel like they're in a similar ballpark, especially if I want the Python code to be similarly robust as what Rust tends to produce. Edge cases in Python are much more gnarly.

      • devmor 4 hours ago

        Caring about efficiency and performance doesn't have to mean spending all your time on it until you've exhausted every possible avenue. Sometimes using the right tools and development stack is enough to make massive gains.

        Sometimes it means spending a couple extra minutes here or there to teach a junior about freeing memory on their PR.

        No one is suggesting it has to be a zero-sum game, but it would be nice to bring some care for the engineering of the craft back into a field that is increasingly dominated by business case demands over all.

        • internet101010 4 hours ago

          Exactly. Nobody is saying to min-max from the start - just be a bit more thoughtful and use the right tools for the job in general.

    • btilly 3 hours ago

      That's because you're churning temporary memory. JS can't free it until garbage collection runs. Rust is able to do a lifetime analysis, and knows it can free it immediately.

      The same will happen on any function where you're calling functions over and over again that create transient data which later gets discarded.
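      A minimal sketch of that in Rust: the scratch buffer below is provably dead at the end of each call, so it is freed immediately rather than lingering until a collector runs.

```rust
// Each call allocates a transient Vec; Rust's lifetime analysis proves it
// is dead at the end of the function, so the memory is returned immediately
// rather than waiting for a GC cycle.
fn transient_sum(n: u64) -> u64 {
    let scratch: Vec<u64> = (0..n).collect(); // transient allocation
    scratch.iter().sum()
    // `scratch` is dropped (freed) right here, deterministically
}

fn main() {
    // Calling it in a hot loop keeps peak memory at one buffer's worth,
    // because each buffer is freed before the next one is allocated.
    let total: u64 = (0..1_000).map(|_| transient_sum(100)).sum();
    println!("{total}");
}
```

      Under a GC, the same loop would accumulate dead buffers until the next collection pass, which is the memory churn described above.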

    • throwitaway1123 an hour ago

      There are flags you can set to tune memory usage (notably V8's --max-old-space-size for Node and the --smol flag for Bun). And of course in advanced scenarios you can avoid holding strong references to objects with weak maps, weak sets, and weak refs.

    • leeoniya 5 hours ago

      fwiw, Bun/webkit is much better in mem use if your code is written in a way that avoids creating new strings. it won't be a 100x improvement, but 5x is attainable.

    • jchw 5 hours ago

      It's a little more nuanced than that of course, a big reason why the memory usage is so high is because Node.JS needs more of it to take advantage of a large multicore machine for compute-intensive tasks.

      > Regarding the abnormally high memory usage, it's because I'm running Node.js in "cluster mode", which spawns 12 processes for each of the 12 CPU cores on my test machine, and each process is a standalone Node.js instance which is why it takes up 1300+ MB of memory even though we have a very simple server. JS is single-threaded so this is what we have to do if we want a Node.js server to make full use of a multi-core CPU.

      On a Raspberry Pi you would certainly not need so many workers even if you did care about peak throughput, I don't think any of them have >4 CPU threads. In practice I do run Node.JS and JVM-based servers on Raspberry Pi (although not Node.JS software that I personally have written.)

      The bigger challenge to a decentralized Internet where everyone self-hosts everything is, well, everything else. Being able to manage servers is awesome. Actually managing servers is less glorious, though:

      - Keeping up with the constant race of security patching.

      - Managing hardware. Which, sometimes, fails.

      - Setting up and testing backup solutions. Which can be expensive.

      - Observability and alerting; You probably want some monitoring so that the first time you find out your drives are dying isn't months after SMART would've warned you. Likewise, you probably don't want to find out you have been compromised after your ISP warns you about abuse months into helping carry out criminal operations.

      - Availability. If your home internet or power goes out, self-hosting makes it a bigger issue than it normally would be. I love the idea of a world where everyone runs their own systems at home, but this is by far the worst consequence. Imagine if all of your e-mails bounced while the power was out.

      Some of these problems are actually somewhat tractable to improve on but the Internet and computers in general marched on in a different more centralized direction. At this point I think being able to write self-hostable servers that are efficient and fast is actually not the major problem with self-hosting.

      I still think people should strive to make more efficient servers of course, because some of us are going to self-host anyways, and Raspberry Pis run longer on battery than large rack servers do. If Rust is the language people choose to do that, I'm perfectly content with that. However, it's worth noting that it doesn't have to be the only one. I'd be just as happy with efficient servers in Zig or Go. Or Node.JS/alternative JS-based runtimes, which can certainly do a fine job too, especially when the compute-intensive tasks are not inside of the event loop.

      • wtetzner 3 hours ago

        Reducing memory footprint is a big deal for using a VPS as well. Memory is still quite expensive when using cloud computing services.

        • jchw an hour ago

          True that. Having to carefully balance responsiveness and memory usage/OOM risk when setting up PHP-FPM pools definitely makes me grateful when deploying Go and Rust software in production environments.

      • pferde 3 hours ago

        While I agree with pretty much all you wrote, I'd like to point out that e-mail, out of all the services one could conceivably self-host, is quite resilient to temporary outages. You just need to have another backup mail server somewhere (maybe another self-hosting friend or in a datacenter), and set up your DNS MX records accordingly. The incoming mail will be held there until you are back online, and then forwarded to your primary mail server. Everything transparent to the outside word, no mail gets lost, no errors shown to any outside sender.
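        A sketch of what that looks like in a zone file (hostnames here are hypothetical):

```
; Mail for example.com is tried at the self-hosted primary first
; (preference 10); if it is unreachable, senders fall back to the
; backup MX (preference 20), which queues the mail and relays it
; once the primary is back online.
example.com.  IN  MX  10 mail.example.com.     ; primary, at home
example.com.  IN  MX  20 backup.example.net.   ; backup, in a datacenter
```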

      • bombela 4 hours ago

        > Imagine if all of your e-mails bounced while the power was out.

        Retry for a while until the destination becomes reachable again. That's how email was originally designed.

        • jasode 4 hours ago

          >Retry for a while until the destination becomes reachable again. That's how email was originally designed.

          Sure, the SMTP email protocol states guidelines for "retries" but senders don't waste resources retrying forever. E.g. max of 5 days: https://serverfault.com/questions/756086/whats-the-usual-re-...

          So gp's point is that if your home email server is down for an extended power outage (maybe like a week from a bad hurricane) ... and you miss important emails (job interview appointments, bank fraud notifications, etc) ... then that's one of the risks of running an email server on the Raspberry Pi at home.

          Switching to a more energy-efficient language like Rust for server apps so it can run on RPi still doesn't alter the risk calculation above. In other words, many users would still prioritize email reliability of Gmail in the cloud over the self-hosted autonomy of a RPi at home.

          • umanwizard 4 hours ago

            Another probably even bigger reason people don't self-host email specifically is that practically all email coming from a residential IP is spam from botnets, so email providers routinely block residential IPs.

          • jchw 4 hours ago

            Yeah, exactly this. The natural disaster in North Carolina is a great example of how I envision this going very badly. When you self-host at home, you just can't have the same kind of redundancy that data centers have.

            I don't think it's an obstacle that's absolutely insurmountable, but it feels like something where we would need to organize the entire Internet around solving problems like these. My personal preference would be to have devices act more independently. e.g. It's possible to sync your KeepassXC with SyncThing at which point any node is equal and thus only if you lose all of your devices simultaneously (e.g. including your mobile computer(s)) are you at risk of any serious trouble. (And it's easy to add new devices to back things up if you are especially worried about that.) I would like it if that sort of functionality could be generalized and integrated into software.

            For something like e-mail, the only way I can envision this working is if any of your devices could act as a destination in the event of a serious outage. I suspect this would be possible to accomplish to some degree today, but it is probably made a lot harder by two independent problems (IPv4 exhaustion/not having directly routable IPs on devices, mobile devices "roaming" through different IP addresses) which force you to rely on some centralized infrastructure anyways (e.g. something like Tailscale Funnels.)

            I for one welcome whoever wants to take on the challenge of making it possible to do reliable, durable self-hosting of all of my services without the pain. I would be an early adopter without question.

    • beached_whale 3 hours ago

      I'm OK if it isn't popular. It will keep compute costs lower for those using it, since the norm is excessive usage.

  • isodev 5 hours ago

    This is a really cool comparison, thank you for sharing!

    Beyond performance, Rust also brings a high level of portability, and these examples show just how versatile a piece of code can be. Even beyond the server, running this on iOS or Android is also straightforward.

    Rust is definitely a happy path.

    • jvanderbot 5 hours ago

      Rust deployment is a happy path, with few caveats. Writing is sometimes less happy than it might otherwise be, but that's the tradeoff.

      My favorite thing about Rust, however, is Rust dependency management. Cargo is a dream, coming from C++ land.

      • krick 5 hours ago

        Everything is a dream, when coming from C++ land. I'm still incredibly salty about how packages are managed in Rust, compared to golang or even PHP (composer). crates.io looks fine today, because Rust is still relatively unpopular, but 1 common namespace for all packages encourages name squatting, so in some years it will be a dumpster worse than pypi, I guarantee you that. Doing that in a brand-new package manager was incredibly stupid. It really came late to the market, only golang's modules are newer IIRC (which are really great). Yet it repeats all the same old mistakes.

        • guitarbill 4 hours ago

          I don't really understand this argument, and it isn't the first time I've heard it. What problem other than name squatting does it solve?

          How does a Java style com.foo.bar or Golang style URL help e.g. mitigate supply chain attacks? For Golang, if you search pkg.go.dev for "jwt" there's 8 packages named that. I'm not sure how they are sorted; it doesn't seem to be by import count. Yes, you can see the URL directly, but crates.io also shows the maintainers. Is "github.com/golang-jwt/jwt/v5" "better" than "golang.org/x/oauth2/jwt"? Hard to say at a glance.

          On the flip side, there have been several instances where Cargo packages were started by an individual, but later moved to a team or adopted. The GitHub project may be transferred, but the name stays the same. This generally seems good.

          I honestly can't quite see what the issue is, but I have been wrong many a time before.

        • Imustaskforhelp 5 hours ago

          In my opinion, I like golang's way better because then you have to be thoughtful about your dependencies, and it also prevents any drama (like the Rust Foundation cargo drama) (ahem). If a language is that polarizing, it can be hard to find a job in it.

          I truly like Rust as a performance language, but I would rather have real tangible results (admittedly, slow is okay) than imagination within the Rust/performance land.

          I don't want to learn Rust just to feel like I am doing something "good" / "learning" when I can learn golang at a way faster rate and do the stuff I like, which is why I am learning programming in the first place.

          Also just because you haven't learned rust doesn't make you inferior to anybody.

          You should learn because you want to think differently , try different things. Not for performance.

          Performance is fickle minded.

          Like, I was seeing a native benchmark of Rust and Zig (Rust won), and then I was seeing a benchmark of Deno and Bun (Bun won) (Bun is written in Zig and Deno in Rust).

          The reason, I suppose, is that Deno doesn't use actix, and non-actix servers are rather slower than even Zig.

          It's weird.

          • jvanderbot 4 hours ago

            There are some influential fair comparisons of compiled languages, but for the most part my feeling is that people are moving from an extremely high-level language like Python or JS to Rust to get performance, when any single compiled language would be fine; for 90% of them (on backend or web-enabled systems apps), Go would have been the right choice. There was just a hurdle to get to most other compiled languages.

            It's just Rust is somehow more accessible to them? Maybe it's that pointers and memory just were an inaccessible/overburdensome transition?

            • umanwizard 4 hours ago

              Rust is the only mainstream language with an ergonomic modern type system and features like exhaustive matching on sum types (AFAIK... maybe I'm forgetting one). Yes things like OCaml and Haskell exist but they are much less mainstream than Rust. I think that's a big part of the appeal.

              In Go, instead of having a value that can be one of two different types, you have to have two values, one of which you set to the zero value. It feels prehistoric.

              • jvanderbot 3 hours ago

                That strikes me as an incredibly niche (and probably transient) strength! But I will remember that.

                • umanwizard 3 hours ago

                  It's not niche at all; it's extremely common to need this. Maybe I'm not explaining it well. For example, an idiomatic pattern in Go is to return two values, one of which is an error:

                    func f() (SomeType, error) {
                            // ...
                    }
                  
                  In Rust you would return one value:

                    fn f() -> anyhow::Result<SomeType> {
                        // ...
                    }
                  
                  In Go (and similar languages like C) nothing enforces that you actually set exactly one value, and nothing enforces that you actually handle the values that are returned.

                  It's even worse if you need to add a variant, because then it's easy to make a mistake and not update some site that consumes it.
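                  The exhaustiveness point can be sketched with a small result-like enum (names here are made up for illustration): every consumer must handle every variant, so adding one later turns each unhandled match site into a compile error rather than a silent bug.

```rust
// A result-like sum type: a value is *either* data or a failure,
// never both, never neither.
enum Fetched {
    Data(u32),
    Failure(String),
}

fn describe(f: &Fetched) -> String {
    // The match must be exhaustive. If a `Timeout` variant were added to
    // `Fetched` later, this function would stop compiling until it handled
    // the new case -- nothing can be silently forgotten.
    match f {
        Fetched::Data(n) => format!("got {n}"),
        Fetched::Failure(msg) => format!("failed: {msg}"),
    }
}

fn main() {
    println!("{}", describe(&Fetched::Data(7)));
    println!("{}", describe(&Fetched::Failure("boom".into())));
}
```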

            • bombela 4 hours ago

              Not sure how much it weighs on the balance in those types of decisions. But Rust has safe concurrency. That's probably quite a big boost of web server quality if anything else.

              • jvanderbot 3 hours ago

                Go's concurrency is unsafe? Rust's concurrency is automatically safe?

                I am not saying you're wrong; I just don't find it any better than C++ concurrent code. You just have many different lock types that correspond to the borrow checker's expectations, vs C++'s primitives/lock types.

                Channels are nicer, but that's doable easily in C++ and native to Go.

                • thinkharderdev 2 hours ago

                  (Un)safe is a bit of an overloaded term but Rust's concurrency model is safe in the sense that it statically guarantees that you won't have data races. Trying to mutate the same memory location concurrently is a compile-time error. Neither C++ nor Golang prevent you from doing this. Aside from that

                • umanwizard 3 hours ago

                  > Go's concurrency is unsafe? Rust's concurrency is automatically safe?

                  Yes and yes...

                  Rust statically enforces that you don't have data races, i.e. it's not possible in Rust (without unsafe hacks) to forget to guard access to something with a mutex. In every other language this is enforced with code comments and programmer memory.
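                  A sketch of what that static enforcement looks like: the counter below can only be reached through the Mutex, because handing a bare `&mut` to multiple threads simply doesn't compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(threads: usize, per_thread: usize) -> usize {
    // Sharing a plain `&mut usize` across threads would be rejected at
    // compile time; wrapping it in Arc<Mutex<..>> is what makes this
    // program legal at all.
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // The only way to reach the data is through the lock.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    // Always threads * per_thread; a data race is a compile error, not a bug.
    println!("{}", parallel_count(8, 1_000));
}
```

                  In C++ or Go the equivalent unguarded increment compiles fine and races at runtime; here, forgetting the mutex is not an option the type system offers.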

            • timeon an hour ago

              > It's just Rust is somehow more accessible to them?

              Going to lower level languages can be scary. What is 'fighting the borrow-checker' for some, may be 'guard rails' for others.

        • joshmarinacci 4 hours ago

          Progress. It doesn’t have to be the best. It just has to be better than C++.

  • ports543u 4 hours ago

    While I agree the enhancement is significant, the title of this post makes it seem more like an advertisement for Rust than an optimization article. If you rewrite js code into a native language, be it Rust or C, of course it's gonna be faster and use less resources.

    • mplanchard 4 hours ago

      Is there an equivalently easy way to expose a native interface from C to JS as the example in the post? Relatedly, is it as easy to generate a QR code in C as it is in Rust (11 LoC)?

    • baq 4 hours ago

      'of course' is not really that obvious except for microbenchmarks like this one.

      • ports543u 4 hours ago

        I think it is pretty obvious. Native languages are expected to be faster than interpreted, JITed, or automatic-memory-management languages in 99.9% of cases, since in those the programmer has far less control over the operations the processor is doing or the memory it is copying or using.

  • pjmlp 4 hours ago

    And so what we were doing with Apache, mod_<pick your lang> and C back in 2000, is new again.

    At least with Rust it is safer.

  • djoldman 3 hours ago

    Not trying to be snarky, but for this example, if we can compile to wasm, why not have the client compute this locally?

    This would entail zero network hops, probably 100,000+ QRs per second.

    IF it is 100,000+ QRs per second, isn't most of the thing we're measuring here dominated by network calls?

    • jeroenhd an hour ago

      WASM blobs for programs like these can easily turn into megabytes of difficult-to-compress binary once transitive dependencies start getting pulled in. That can mean seconds of extra load time to generate an image that can be represented in maybe a kilobyte.

      Not a bad idea for an internal office network where every computer is hooked up with a gigabit or better, but not great for cloud hosted web applications.

    • munificent an hour ago

      It's a synthetic example to conjure up something CPU bound on the server.

  • Dowwie 4 hours ago

    Beware the risks of using NIFs with Elixir. They run in the same memory space as the BEAM and can crash not just the process but the entire BEAM. Granted, well-written, safe Rust could lower the chances of this happening, but you need to consider the risk.

    • mijoharas an hour ago

      I believe that by using rustler[0] to build the bindings that shouldn't be possible. (at the very least that's stated in the readme.)

      > Safety : The code you write in a Rust NIF should never be able to crash the BEAM.

      I tried to find some documentation stating how it works but couldn't. I think they use a dirty scheduler and catch panics at the boundaries or something? I wasn't able to find a clear reference.

      [0] https://github.com/rusterlium/rustler

  • voiper1 5 hours ago

    Wow, that's an incredible writeup.

    Super surprised that shelling out was nearly as good as any other method.

    Why is the average byte count smaller? Shouldn't it be the same size file? And if not, it's a different algorithm, so not necessarily better?

  • bdahz 5 hours ago

    I'm curious what would happen if we replaced Rust with C/C++ in those tiers. Would the results be even better or worse than Rust?

    • znpy 5 hours ago

      It should be pretty much the same.

      The article is mostly about exemplifying the various levels of optimisation you can get by moving “hot code paths” to native code (irrespective of whether you write that code in Rust/C++/C).

      Worth noting that if you're optimising for memory usage, Rust (or some other native code) might not help you very much unless you throw away your whole codebase, which might not always be feasible.

    • Imustaskforhelp 5 hours ago

      Also maybe check out Bun FFI / I have heard they recently added their own compiler.

  • bhelx 5 hours ago

    If you have a Java library, take a look at Chicory: https://github.com/dylibso/chicory

    It runs on any JVM and has a couple flavors of "ahead-of-time" bytecode compilation.

    • bluejekyll 4 hours ago

      This is great to see. I had my own effort around this that I could never quite get done.

      I didn’t notice this on the front page, what JVM versions is this compatible with?

  • demarq an hour ago

    I didn’t realize calling out to the CLI is that fast.

  • echelon 5 hours ago

    Rust is simply amazing to do web backend development in. It's the biggest secret in the world right now. It's why people are writing so many different web frameworks and utilities - it's popular, practical, and growing fast.

    Writing Rust for web (Actix, Axum) is no different than writing Go, Jetty, Flask, etc. in terms of developer productivity. It's super easy to write server code in Rust.

    Compared to Python HTTP backends, the Rust code is so much more defect-free.

    I've absorbed 10,000+ qps on a couple of cheap tiny VPS instances. My server bill is practically non-existent and I'm serving up crazy volumes without effort.

    • boredumb 5 hours ago

      I've been experimenting with using Tide, sqlx, and askama, and after getting comfortable, it's even more ergonomic for me than using golang and its template/SQL libraries. Having compile-time checks on SQL and templates is in and of itself a reason to migrate. I think people have a lot of issues with the lifetime scoping, but for most applications it simply isn't something you are explicitly dealing with every day in the way that Rust is often displayed/feared (and once you fully wrap your head around what it's doing, it's as simple as most other language aspects).

    • kstrauser 4 hours ago

      I’ve written Python APIs since about 2001 or so. A few weeks ago I used Actix to write a small API server. If you squint and don’t see the braces, it looks an awful lot like a Flask app.

      I had fun writing it, learned some new stuff along the way, and ended up with an API that could serve 80K RPS (according to the venerable ab command) on my laptop with almost no optimization effort. I will absolutely reach for Rust+Actix again for my next project.

      (And I found, fixed, and PR’d a bug in a popular rate limiter, so I got to play in the broader Rust ecosystem along the way. It was a fun project!)

    • JamesSwift 5 hours ago

      > Writing Rust for web (Actix, Axum) is no different than writing Go, Jetty, Flask, etc. in terms of developer productivity. It's super easy to write server code in Rust.

      I would definitely disagree with this after building a microservice (a URL shortener) in Rust. Rust requires you to rethink your design in unique ways, so that you generally can't do things in the 'dumbest way possible' as your v1. I found myself really having to rework my design-brain to fit Rust's model to please the compiler.

      Maybe once that relearning has occurred you can move faster, but it definitely took a lot longer to write an extremely simple service than I would have liked. And scaling that to a full api application would likely be even slower.

      Caveat that this was years ago right when actix 2 was coming out I believe, so the framework was in a high amount of flux in addition to needing to get my head around rust itself.

      • collinvandyck76 4 hours ago

        > Maybe once that relearning has occurred you can move faster

        This has been my experience. I have about a year of Rust experience under my belt, working with an existing codebase (~50K LoC). I started writing the toy/throwaway programs I normally write in Rust instead of Go halfway through this stretch. Hard to say when it clicked, maybe about 7-8 months in, but it got to the point where I didn't struggle with the structure of the program or the fights with the borrow checker, and now I don't really have to think about it much anymore.

        • guitarbill 3 hours ago

          I have a similar experience. Was drawn to Rust not because of performance or safety (although it's a big bonus), but because of the tooling and type system. Eventually, it does get easier. I do think that's a poor argument, kind of like a TV show that gets better in season 2. But I can't discount that it's been much nicer to maintain these tools compared to Python. Dependency version updates are much less scary due to actual type checking.

    • nesarkvechnep 5 hours ago

      It will probably never replace Elixir as my favourite web technology. For writing daemons though, it's already my favourite.

    • manfre 5 hours ago

      > I've absorbed 10,000+ qps on a couple of cheap tiny VPS instances.

      This metric doesn't convey any meaningful information. Performance metrics need context of the type of work completed and server resources used.

    • adamrezich an hour ago

      Disclaimer: I haven't ever written any serious Rust code, and the last time I even tried to use the language was years ago now.

      What is it about Rust that makes it so appealing to people to use for web backend development? From what I can tell, one of the selling points of Rust is its borrow checker/lifetime management system. But if you're making a web backend, then you really only need to care about two lifetimes: the lifetime of the program, and the lifetime of a given request/response. If you want to write a web backend in C, then it's not too difficult to set up a simple system that makes a temporary memory arena for each request/response, and, once the response is sent, marks this memory for reuse (and probably zeroes it, for maximum security), instead of freeing it.

      Again, I don't really have any experience with Rust whatsoever, but how does the borrow checker/lifetime system help you with this? It seems to me (as a naïve, outside observer) that these language features would get in the way more than they would help.
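      One answer, sketched under the same per-request-arena assumption: the borrow checker is precisely what lets you hand out references *into* such an arena and still prove, at compile time, that none of them outlive the request. A toy bump-style arena (the classic typed-arena trick, so it needs one small `unsafe` block; this is an illustration, not production code):

```rust
use std::cell::RefCell;

// Toy per-request arena: every allocation lives exactly as long as the
// arena itself. Crates like typed-arena implement this pattern properly.
struct Arena {
    chunks: RefCell<Vec<Box<str>>>,
}

impl Arena {
    fn new() -> Self {
        Arena { chunks: RefCell::new(Vec::new()) }
    }

    // The returned &str borrows `self`, so the compiler guarantees no
    // caller can hold onto it after the arena is dropped at the end of
    // the request -- the use-after-reset bug becomes a compile error.
    fn alloc(&self, s: &str) -> &str {
        let boxed: Box<str> = s.into();
        let ptr: *const str = &*boxed;
        self.chunks.borrow_mut().push(boxed);
        // Sound in this sketch: the Box's heap contents never move even
        // when the Vec reallocates, and the borrow is tied to &self.
        unsafe { &*ptr }
    }
}

fn main() {
    let request_arena = Arena::new(); // one arena per request
    let name = request_arena.alloc("alice");
    let path = request_arena.alloc("/qr/generate");
    println!("{name} {path}");
    // When `request_arena` drops here, the compiler has already proven
    // that `name` and `path` are no longer reachable.
}
```

      In the C version, nothing stops a pointer into the arena from being stashed somewhere and dereferenced after the reset; here that program doesn't compile, which is the practical payoff people cite even for the simple two-lifetime case.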

  • dyzdyz010 5 hours ago

    Make Rustler great again!

  • bebna 5 hours ago

    For me a "non-Rust server" would be something like a PHP webhoster. If I can run my own Node instance, I can possibly run everything I want.

  • lsofzz 5 hours ago

    <3