> Related to this, the IsA trait allows typesafe compile-time checked casts.
Unlike in C code, casting to a superclass can be written in such a way
that the compiler will complain if the destination type is not a
superclass, and with zero runtime cost.
This is unfair to C. With a little work when defining all classes (or only the non-leaf classes, if you're willing to accept a hairier implementation), you can do this there too.
There are probably other ways, but the way that seems most "obvious" to me is to define the base via union of all bases in the chain, rather than only the immediate base class. So:
/*
class Foo {int f;};
class Bar extends Foo {int b;};
class Qux extends Bar {int q;};
class Leaf extends Qux {int l;};
*/
struct Foo { int f; };
struct Bar { union { struct Foo base, base_Foo; }; int b; };
struct Qux { union { struct Bar base, base_Bar; struct Foo base_Foo; }; int q; };
struct Leaf { struct Qux base; int l; };
Then your "compile-time-safe-cast to base class" macro just checks 3 things in order:
1. check if we're already the right class.
2. check if `base` is the right class.
3. unconditionally try to use the appropriately-named `base_Foo` member as calculated from the type.
(runtime-safe-checked downcasts just have to check that the reverse is possible)
As a recovering language lawyer, I need to point out that what you are describing is not portable behavior because it is writing to one member of a union and reading from a different member, if I understood you correctly.
Will it work on most tool chains? Yes. But when it doesn't, it is going to be fun.
In Rust's case it is that when you say "you have IsA<Device> for all types that implement IsA<PCIDevice>" you are actually saying "you have IsA<Device> for all types, but only if they implement IsA<PCIDevice>".
When you later say "you have IsA<Device> for all types, but only if they implement IsA<I2CDevice>" the two conditions conflict.
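A minimal sketch of that conflict (the trait and type names follow the discussion; the exact shape of QEMU's impls is an assumption):

trait IsA<T> {}

struct Device;
struct PCIDevice;
struct I2CDevice;

// "every type that is-a PCIDevice also is-a Device"
impl<T: IsA<PCIDevice>> IsA<Device> for T {}

// "every type that is-a I2CDevice also is-a Device"
// rustc rejects this second blanket impl with error[E0119]:
// conflicting implementations of trait `IsA<Device>`, because
// coherence cannot prove the two sets of types are disjoint.
impl<T: IsA<I2CDevice>> IsA<Device> for T {}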
It might be possible to avoid this issue with a different implementation, but this is the simplest one that works, and QEMU's hierarchy is generally shallow. If a better implementation came along, it would be a matter of search and replace.
What does your wishlist for Rust look like? (Besides "simpler C/Rust interoperability", of course.) Has QEMU run into things that Rust-for-Linux hasn't, that feel missing from the Rust language?
Right now the only language-level thing I would like is const operator overloading. Even supporting an MSRV as old as 1.63 was not a big deal; the worst thing was dependencies using let...else, which we will vendor and patch.
Pin is what it is, but it is mostly okay since I haven't needed projection so far. Initialization using Linux's "impl PinInit<Self>" approach seems to be working very well in my early experiments, I contributed changes to use the crate without unstable features.
In the FFI area: Bindgen support for toml configuration (https://github.com/rust-lang/rust-bindgen/pull/2917 but it could also be response files on the command line), and easier passing of closures from Rust to C (though I found a very nice way to do it for ZSTs that implement Fn, which is by far the common case).
The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.
Outside the Rust core proper, Meson support needs to mature a bit for ease of use, but it is getting there. Linux obviously doesn't need that.
BTW, saw your comment in the dead thread, you're too nice. I have been curious about Rust for some time and with Linux maturing, and Linaro doing the first contribution of build system integration + sample device code, it was time to give it a try.
> Right now the only language-level thing I would like is const operator overloading.
As far as I know, const traits are still on track.
> easier passing of closures from Rust to C
As in, turning a Rust closure into a C function-pointer-plus-context-parameter?
> The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.
Would be happy to work with you to get something on the calendar.
> As far as I know, const traits are still on track.
Yes they are. My use case is something like the bitflags crate, there are lots of bit flags in emulated devices of course. In the meanwhile I guess it would be possible to use macros to turn something like "bit_const!(Type:A|B)" to "Type(A.0|B.0)" or something like that.
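A minimal sketch of that macro workaround, assuming a bitflags-like tuple struct (all names here are hypothetical):

#[derive(Clone, Copy)]
pub struct Status(pub u32);

impl Status {
    pub const A: Status = Status(1 << 0);
    pub const B: Status = Status(1 << 1);
}

// Expands bit_const!(Status: A | B) into Status(Status::A.0 | Status::B.0),
// which is valid in const context even without const operator overloading.
macro_rules! bit_const {
    ($t:ident : $($flag:ident)|+) => {
        $t($($t::$flag.0)|+)
    };
}

const AB: Status = bit_const!(Status: A | B);

fn main() {
    assert_eq!(AB.0, 0b11);
}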
> As in, turning a Rust closure into a C function-pointer-plus-context-parameter?
Yes, more in general everything related to callbacks is doable but very verbose. We might do (procedural?) macro magic later on to avoid the verbosity but for now I prefer to stick to pure Rust until there's an idea of which patterns recur.
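For the ZST case mentioned earlier, here is a sketch of one way the trick can work (names and signature are hypothetical): a zero-sized Fn type carries no captured data, so a monomorphized extern "C" shim can materialize it out of thin air, and no context pointer is needed.

use std::mem;

// Each monomorphization of this shim is a plain C-ABI function.
unsafe extern "C" fn shim<F: Fn(i32)>(arg: i32) {
    // SAFETY: only reachable via as_c_callback, which verified that F
    // is zero-sized, so there are no bytes to fabricate.
    let f: F = unsafe { mem::zeroed() };
    f(arg);
}

// Turn a zero-sized Fn into a context-free C function pointer.
fn as_c_callback<F: Fn(i32)>(_f: F) -> unsafe extern "C" fn(i32) {
    assert_eq!(mem::size_of::<F>(), 0, "capturing closures need a context pointer");
    shim::<F>
}

fn main() {
    let cb = as_c_callback(|x| println!("called with {x}"));
    unsafe { cb(42) };
}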
Let me know by email about any occasions to present what I have.
For drivers, it's already happening, especially for graphics but not limited to that. 6.13 has some very important changes. A lot of Linux is drivers so that's already a reason to be bullish.
Answering for QEMU instead: it depends on the community being willing to share the burden of writing the FFI code. Despite Rust being low level, there is still a substantial amount of work to do. Replies to the roadmap pointed out tracepoints as an area where I know nothing and therefore I would like someone else to do the work (I am working mostly on the object and threading models, which is also where a lot of the impedance mismatch between Rust and C lies).
Hello. I assume tracepoints mean kprobes/uprobes or something along those lines? I've just this weekend worked on implementing/adapting a crate for DTrace USDTs aka DTrace probes to also work on Linux and generate SystemTap SDTs (aka USDTs aka dtrace probes).
This is probably a little different from tracepoints in the kernel space but I'm somewhat interested in going deeper and into the kernel side of things. Let me know if you have any pointers as to where I might be of concrete assistance to you!
QEMU has several tracepoint providers, the main ones are (a slightly fancy version of) printf and USDT. There is a Python program that generates the C code for the chosen backend(s), so the thing to do would be to adjust the script to produce either an FFI bridge or the equivalent Rust code.
I've said this before on here and I'll say it again. The QEMU code base is a nightmare. The amount of fake C++ is mind numbing. Every time I come across a variable or struct declaration or method with the word "class" in it, I'm reminded of how much easier the whole thing would've been with C++. You can't even compile C++ into QEMU because of how the headers use keywords. That's not even touching their abuse of macros as template functions. You know what has templates and classes? C++. And constructors. There's just so much.
All this to say: if Rust can make it in, that's great, because I'm tired of dealing with C's simplicity for more complicated tasks. I'm not a rust user but if it lets me use classes and templates, I'll switch over
> I'm not a rust user but if it lets me use classes and templates, I'll switch over
Yer not switching any time soon then. Rust does have methods but not classes (QOM’s inheritance is specifically called out as an issue in TFA), and it uses Haskell-style generics rather than C++-style templates.
I mean it has polymorphism via v-tables and composition via traits, that's enough object-orientation for me. Inheritance is a core principle of OOP, but in practice most C++ class hierarchies are relatively flat (there are exceptions like Qt, which I think uses inheritance in a good way), and most of that can be mimicked with embedding and composition well enough to work. Not sure if you want to call that object-oriented, but operating on polymorphic structures that encapsulate their data is pretty close to object orientation (the good parts of OOP, at least). And I'm not a big expert on Rust, but I think you can do most of the things we do in C++ with templates using macros.
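For illustration, a quick sketch of both mechanisms in Rust (made-up types):

trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Static dispatch: monomorphized per concrete type, like a C++ template.
fn total_area<S: Shape>(shapes: &[S]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Dynamic dispatch: one compiled body, calls go through a vtable.
fn print_area(shape: &dyn Shape) {
    println!("{}", shape.area());
}

fn main() {
    let c = Circle { r: 1.0 };
    print_area(&c);
    println!("{}", total_area(&[c]));
}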
> I didn't like the way Simula I or Simula 67 did inheritance (though I thought Nygaard and Dahl were just tremendous thinkers and designers). So I decided to leave out inheritance as a built-in feature until I understood it better.
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP.
Fun that this was mentioned too:
>> it uses Haskell-style generics
> polymorphism via v-tables and composition via traits, that's enough object-orientation for me
For which Alan Kay had this to say:
> My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful. The term "polymorphism" was imposed much later (I think by Peter Wegner) and it isn't quite valid, since it really comes from the nomenclature of functions, and I wanted quite a bit more than functions. I made up a term "genericity" for dealing with generic behaviors in a quasi-algebraic form.
In a way it's fascinating to see how C++ has shaped (dare I say warped) the collective vision of how to do OOP.
For example, as a Rubyist (Ruby was heavily influenced by Smalltalk), I find it fascinating how hard it can be to explain how fundamentally different message vs call are, and what the whole point of modules is, mostly because the other viewpoint is so warped as to cognitively reject that it can be any different from the C++ model.
The one lagging thing that isn't easy with Rust's generics is const generic expressions, and that looks to be getting kicked down the road indefinitely. You can't have Foo<N> * Foo<M> -> Foo<N+M> or similar. That is a useful thing for statically sized math libraries and for performance-oriented metaprogramming. The latter can be clumsily handled with macros, but not the former. It is also blocking portable SIMD from stabilizing, apparently now indefinitely. I still wouldn't pick any other language for a new production product, but I find the stalling out on const generic expressions frustrating, as SIMD and static math libraries are things I use daily.
You misunderstood.
N, M are supposed to be integers (const generics); in your example code you've made them types.
Also, your `type Output = Foo<<N as Add<M>>::Output>;` just means "multiplication has the same return type as addition". But desired is that multiplying a Foo<4> with a Foo<3> results in a Foo<7>.
Rust decided that it's important to not have instantiation-time compiler errors, but this makes computation with const generics complicated: you're not allowed to write Foo<{N+M}> because N+M might overflow, even if it never actually overflows for the generic arguments used in the program.
aiui this isn't inherently limited by instantiation-time error messages and is available on nightly today with the generic_const_exprs feature. It's in the pipeline.
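A sketch of what the desired Mul impl looks like today on nightly with that feature (which is explicitly marked incomplete):

#![feature(generic_const_exprs)]
#![allow(incomplete_features)]

use std::ops::Mul;

struct Foo<const N: usize>;

impl<const N: usize, const M: usize> Mul<Foo<M>> for Foo<N>
where
    [(); N + M]:, // obligates the compiler to prove N + M is evaluable
{
    type Output = Foo<{ N + M }>;

    fn mul(self, _rhs: Foo<M>) -> Foo<{ N + M }> {
        Foo
    }
}

fn main() {
    let _seven: Foo<7> = Foo::<3> * Foo::<4>;
}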
I could go on and on about the limitations of min_const_generics, missing TAIT (type alias impl trait), unstable coroutines/generators, etc. but none of that stuff erases how much of a miracle Rust is as a language. The industry really needed this. It's practically the poster child of memory safety and security-critical industries are picking it up and Ferrocene is pushing it in safety-critical environments as well and it's just. Good. Please. Finally a serious contender against the terrible reign of C and C++.
You can do it with macros, the problem is
A) documentation
B) macro abuse
It's so much harder to debug things when function calls are secretly macros, and if I didn't have the vscode cpp language server for goto definition, I'd be completely lost. I'd wager that only 5% of the #defines aren't auto generated by occasionally recursive macros. Maybe hyperbole. Makes it really hard to figure out how existing code works.
I don't think he literally means templates and classes. Rust has equivalents that do the things you want templates and classes for (generics and structs/traits respectively).
I completely agree with his point about reimplementing C++ badly in C. GNOME does this too in their libraries. He will be much happier with Rust.
Hmmm I think I actually really want classes. Something that combines data and methods together without more function pointers. Template vs generic I don't care about
Rust structs are in some ways similar to C++ classes. You can combine data and methods together with them without worrying about things like function pointers.
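A minimal sketch, for illustration: state and behavior live together, with no function pointers (and no inheritance either).

struct Counter {
    count: u64,
}

impl Counter {
    fn new() -> Self {
        Counter { count: 0 }
    }

    // Methods are declared in an impl block and called as c.increment();
    // dispatch is static, not through a function pointer in the struct.
    fn increment(&mut self) {
        self.count += 1;
    }
}

fn main() {
    let mut c = Counter::new();
    c.increment();
    println!("{}", c.count);
}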
:( the QOM inheritance is where I've had my worst bugs. Recently I merged from master (upgrading me from version 8.something to 9.2.?) and they dramatically changed the resets. I tried the new way and had segfaults in bad places that went away when I reordered code that shouldn't have ordering requirements. That was too scary, so I switched to their legacy reset function and it was all fine again. All this blind casting of opaques makes me nervous.
C++23 is a godawful mess; especially the functional paradigms (which look beautiful in e.g. OCaml) that got shoehorned kicking and screaming into the morass that C++ had already been.
If you read function specs in the OCaml docs, they're understandable and the syntax is clean; the same concepts bolted onto C++ look like line noise, both in actual syntax and the (allegedly) English-language description on en.cppreference.com.
Reading the C++ standard itself is an exercise in futility, very much unlike the C standard. (Although, latest developments in C land are repulsive too, IMO.)
The C++ committee's tantrums and scandals (physical violence at meetings!) put the worst that has been seen in open-source communities to shame. <https://izzys.casa/2024/11/on-safe-cxx/> has been posted to reddit and HN recently.
C++ compilation still takes absolutely forever, and C++ template error messages are as incomprehensible as ever.
One important problem with C++ (repeated ad nauseam, so this is nothing new) is the unforeseen and unintended harmful interactions between such language features that were supposed to be orthogonal. The only remedy for that is to stop adding stuff to the language; but nooo, it just keeps growing.
Another important problem is that, the more the compiler does for you implicitly, the less you see (and the less you can debug) what happens in the code. This is not a problem in OCaml, which is a managed language with a safe runtime, but it's a huge problem in C++, which remains a fundamentally unsafe language. (And yes, once you start extending OCaml with C, i.e., mixing implicit and explicit, OCaml too becomes unsafe, and a lot of head-scratching can occur.) A home-grown object system in C is at least explicit, so it's all there for the developer to read, instrument, step through, and so on.
I can't believe I just read that entire izzys.casa post. Wow. I couldn't possibly assess it all for accuracy, but if that's reasonably correct, just wow.
What are the advantages of trying to use fake c++ instead of actual c++ for their use case? I'm sure there are / were smart people working on the project. Do they keep decision records? Have you had a conversation with the relevant people about it?
> What are the advantages of trying to use fake c++ instead of actual c++ for their use case?
There are exactly none, but there was a time during the mid- to late-90s when it was hip to implement OOP frameworks on top of C (C++ only really became a realistic option once C++98 was widely supported, which took a couple of years; also, there was such an extreme OOP hype in the 90s that it is hard to believe today - EVERYTHING had to be OOP, no matter if it made sense or not).
QEMU seems to be from around the end of that era (first release apparently in 2003), and looking at the code it looks exactly as I imagined, with meta-class pointers, constructor- and destructor-functions etc... it would almost certainly be better to use a simple non-OOP C-style, but that was an 'outdated' point of view at that time (since everything had to use that new shiny OOP paradigm).
The object model in QEMU is from 2007-2011. Internally there isn't a lot of inheritance (it's pretty shallow), but yeah I guess "is-a" is what was in vogue at the time.
However there is a reason to have the object model, and it was added because some things were getting out of hand. You had to write three parsers for everything you added: one for command line, one for the text-based command interface and one for the JSON API. And you had to do it once for each kind of "object" (device, network backend, disk backend, character device backend etc.). The object model is designed to let you write code just once for all three and to reuse the interface across object types.
C++, especially before C++11, was a total mess. Even today, it's super easy to shoot yourself in the foot and you literally can't learn the entirety of the language due to how massive it is. This still doesn't justify doing the absurdity of macro magic and garbage I've seen people pumping out in C over the years though.
IMHO if you deliberately use C it's because you want to keep things simple. Sometimes C++ will inevitably lead to overcomplicated designs nobody needs; there's a reason why 90's C++ feels massively outdated, while 90's C is still (mostly) accessible.
C is a great way to keep in check the urge people have to shovel in OOP even when it doesn't really make sense. When I see some bullshit C++ code at work, I often think, "if people had to write this in C, they wouldn't have overthought this so much"...
My 2 cents is that there was a Java craze back in the '90s that basically infected all code designed in that time period, and that's how we got some very weird nonsense that in hindsight was poorly thought out. Just look at stuff like GTK+ and GObject...
The GObject system, for all its faults, serves a purpose. Similar to COM in Windows, it allows mapping of higher-level languages to libraries. Without it there wouldn't be all the bindings we have to Python, JavaScript, Vala and Rust today. I wouldn't say it was poorly thought out so much as mismatched with its users' typical uses and expectations.
Yeah, but it made literally zero sense to write GObject in C. They've reimplemented inheritance and the like with macro BS while the GNU project literally had both a C++ and an Objective-C compiler - Objective-C, warts and all, was a perfect fit for GObject.
I have been guilty of this, if only in school. A class assignment could be completed in Java or C; I knew C++ and C, and my teammates knew C. I really wanted to do OOP because I was ignorant, but none of us knew Java, so we did OO in C. It was horrible. I like the simplicity of C. I wish there was a Zig that was immutable by default and had a borrow checker, but I think that road inevitably leads to a GC'd, JITed language or to something as complicated as Rust. Well, almost as complicated; at least macros would still be written in the same syntax as normal runtime code.
The only advantage I know of is that it means you stay within the C ABI. It can simplify linking a little in some cases, or FFI. I guess compilation is faster too.
I worked for a large infra project that was mostly written in C with the kind of trickery that is being here ascribed to QEMU code (I trust the parent, but I haven't seen the code myself).
It's a common thing to do in projects like this in C. While C++ solves some of the problems for larger projects, it brings so many more, it's usually not worth it. Projects of this kind, essentially, invent their own language, with their own conventions. C is just a convenient tool (given the alternatives...) to do that, as it's treated as a kind of a transpilation target. Think about languages like Haskell which essentially work like that, but went even further.
So, what typically happens is that memory allocation, I/O and concurrency need to be done differently from how the language runtime wants to do that. So, at this point, you basically throw away most of the existing language runtime. From my recent encounters with different bugs in QEMU, I imagine that they really, really need a lot of customization in how memory is allocated (my particular case was that QEMU was segfaulting when running ldconfig in aarch64 emulation, which turned out to be due to ldconfig having some weird custom-made memory allocation pattern which QEMU cannot account for.) It's easier to fight this battle with C than it is with C++.
I was going to post the same thing. Others on this thread may not have had experience with very large C codebases and hence haven't seen this play out. To a large degree C++ is just capturing what was already widespread practices in C, and indeed assembly before that.
You can define your own "language", the way you describe it, in C++ more easily than you can in C. C++ gives you more powerful and expressive tools for doing so. A program is not improved by foregoing these tools. There's nothing in C++ forcing you to do anything that makes anything "harder".
C++ makes everything harder by tempting you with all the unnecessary tools. And some of the mental load only goes away if you can be 100% sure that nobody in the project uses the feature.
I wouldn't be so quick to put "easy" and "C++" in the same sentence... Also, C and C++'s language-building tools suck compared to anything in the Lisp family, for example, or any language that has Lisp-like macros. C wasn't chosen for its ability to make other languages. It was an easy target. The benefit is the simpler and more malleable runtime that can be made to do all sorts of weird things where C++ would be much harder to deal with.
In other words, if you don't want the language runtime, if you don't want C++ templates, smart pointers, classes, lambdas, exceptions etc. But, you want to be able to manage memory for eg. realtime kind of system... simply removing all the unwanted stuff from C++ is harder than it is with C.
And, if you did want some / all of those features of C++, there are saner languages to have that (and you'd pay for it by having to drag in their runtime and conventions). Before Rust was a thing, I've seen people choose D instead, for example, as a kind of middle ground between the asceticism of C and unnecessary excess of C++.
Lisp is a non sequitur. C++ can do the same "weird things" C can. There is literally no advantage whatsoever of choosing to wear the C hairshirt instead of using C++. Not a single one.
Sure, Rust is better than C++ in some ways. You can have a legitimate debate about C++ versus Rust. There can be no debate about C versus C++: the latter is better in every single way.
> unnecessary excess of C++.
Is the unnecessary excess of C++ in the room with us right now?
Again, you don't have to use any part of C++ you don't like. A few minor spelling differences aside, it remains a superset of C. Using C++ not C will not hurt you in any project whatsoever.
Yeah, even integrating any C++ into it and calling any QEMU functions via extern "C" declarations when including the headers causes many issues, because many variables are named "new" or have type-cast issues.
It's a full time job on its own to fix all of the compilation errors. Could probably fix it with coccinelle scripts to make some parts easier, but still, validating the codebase with different compilers to make sure there's no resulting subtle breakage either still requires a lot of effort.
Any ideas why people are relying on distro packaged Rust for development instead of rustup? For Rust it feels weird making development choices around several year old versions of the language.
Rustup downloads toolchains from third-party (to the distro) repositories; distros do not want to be in a position where they can no longer build packages because of an external service going down.
So, if you are developing something you want to see packaged in distros, it needs to be buildable with the tool versions in the distro's repositories.
(Not just rustup- Debian requires repackaging Cargo dependencies so that the build can be conducted offline entirely from source packages.)
You’re answering a slightly different question but to me that’s a Debian packaging problem to solve. It’s weird to me that QEMU devs take this problem seriously enough to be putting in all sorts of workarounds to support old versions of the toolchain in the tip of tree just to privilege Debian support.
This feels more like a CI thing for the QEMU project and I’m sure solvable by using rustup or a trusted deb repo that makes the latest tool chain available on older Debian platforms.
As for Debian itself, for toolchains it really should do a better job back porting more recent versions of toolchains (not just Rust) or at least making them available to be installed. The current policy Debian is holding is really difficult to work with and causes downstream projects to do all sorts of workarounds to make Debian builds work (not just for Rust by the way - this applies to C++ as well). And it’s not like this is something it’s unfamiliar with - you can install multiple JVM versions in parallel and choose a different default.
It's not about our own CI -- we could easily use rustup as part of setting up the CI environment, and I think we might actually be doing exactly that at the moment.
Lots of QEMU users use it through their downstream distros. We even recommend that if you're using QEMU in a way that you care about its security then you should use a distro QEMU, because the distros will provide you timely security fix updates. Sure, we could throw all that cooperation away and say "tough, you need to use up-to-the-minute rust, if that's a problem for distro packagers we don't care". But we want to be a good citizen in the traditional distro packaging world, as we have been up til now. Not every open source project will want or need to cater to that, but I think for us it matters.
That doesn't mean that we always do the thing that is simplest for distros (that would probably be "don't use Rust at all"); but it does mean that we take distro pain into account as a factor when we're weighing up tradeoffs about what we do.
To be clear. I’m not criticizing the position the QEMU project is in. I recognize you have to work with Debian here. I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
Out of curiosity though, have you explored having your own deb repo instead? I would trust QEMU-delivered security fixes on mainline far more than the Debian maintainers to backport patches.
I think that trust would be somewhat misplaced -- QEMU has historically not made particularly timely security fixes either on mainline or on branches. To the extent that our stable-branch situation is better today than it was some years ago, that is entirely because the person who does the downstream Debian packaging stepped up to do a lot more backporting work and stable-branch maintenance and releases. (I'm very grateful for that effort -- I think it's good for the project to have those stable branch releases but I certainly don't have time myself to do that work.)
As an upstream project, we really don't want to be in the business of making, providing and supporting binary releases. We just don't have the volunteer effort available and willing to do that work. It's much easier for us to stick to making source releases, and delegate the job of providing binaries to our downstreams.
> QEMU has historically not made particularly timely security fixes either on mainline or on branches
> It's much easier for us to stick to making source releases, and delegate the job of providing binaries to our downstreams
Am I correct that this is essentially saying "we're going to do a snapshot of the software periodically but end users are responsible for applying patches that are maintained by other users as part of building"? Where do these security patches come from and how do non-Debian distros pick them up? Are Arch maintainers in constant contact with Debian maintainers for security issues to know to apply those patches & rebuild?
Security patches are usually developed by upstream devs and get applied to mainline fairly promptly[1], but you don't want to run head-of-git in production. If you run a distro QEMU then the distro maintainers backport security fixes to whatever QEMU they're currently shipping and produce new packages. None of this is particularly QEMU specific. There's a whole infrastructure of security mailing lists and disclosure policies for people to tell distros about security bugs and patches, so if you're a distro you're going to be in contact with that and can get a headsup before public disclosure.
[1] and also to stable branches, but not day-of-cve-announcement level of urgency.
Sure, but then why does the mainline branch need to worry about supporting the rust that’s bundled with the last stable Debian release? By definition that’s not going into a distro (or the distro is building mainline with rusts latest release anyway).
Is it a precautionary concern that backporting patches gets more complicated if the vuln is in Rust code?
But then again Rust code isn’t even compiled by default so I guess I’m not sure why you’re bothering to support for old versions of the toolchain in mainline, at least this early in the development process. Certainly not a two year old toolchain.
We already make an exception in that we don't support Debian bullseye (which is supported by the rest of QEMU until the April 2025 release), but not supporting Debian stable at all seemed too much.
That said we will probably switch to Debian rustc-web soon, and bump the lower limit to 1.75 or so.
> I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
What Debian has is not a "stranglehold" but an ideology, and Debian continues to matter to (some) upstream projects because lots of users identify with Debian's hyperconservative, noncommercial ideology.
Your complaint is basically, "it's too bad that the userbase not sharing my values is large enough to matter".
> What Debian has is not a "stranglehold" but an ideology, and Debian continues to matter to (some) upstream projects because lots of users identify with Debian's hyperconservative, noncommercial ideology.
> Your complaint is basically, "it's too bad that the userbase not sharing my values is large enough to matter".
Arch and rolling releases have about the same market share as Debian. Indeed, ironically, Debian's widespread adoption is seen primarily in the enterprise space, where its free-as-in-beer nature and peer adoption signal that it's a suitable free (as in beer) alternative to RedHat. Without Ubuntu's popularity a while back making Debian not so crazy an idea, I think the "Debian" philosophy would not have anywhere near the adoption we see in commercial environments.
> I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
Does Debian have a stranglehold? AFAIK every other distro does the same thing, and all of them for good reasons.
Ubuntu and Arch have about equal desktop market share penetration from what I researched, with "other" and the Steam Deck being the main dominant categories. So ignoring corp fleet deployments, I'd say Arch and NixOS have stolen quite a bit of market share from Debian-based systems in terms of end-user preference. But yes, Debian does still have a stranglehold, because corp $ are behind Debian-style deployments.
> As for Debian itself, for toolchains it really should do a better job back porting more recent versions of toolchains (not just Rust) or at least making them available to be installed.
Disagree. No stable release of Debian today supports C++23, which is coming up on two years old at this point (this corresponds to a major Rust edition, not the minor stuff they publish on an ongoing basis every month).
Java in Bookworm installs JDK 17 which is three years old at this point. Java itself is on 23 with 21 being an LTS release.
This means that upstream users intentionally maintain an old toolchain just to support Debian packaging or maintain their own deb repo with newer tools.
You’re confusing cause and effect. People aren't migrating to newer toolchains because Debian packaging lags so badly, not because newer versions lack improvements projects would love to use.
> Most toolchains don't have as much churn as Rust.
What churn? A release every six weeks? Unlike many other toolchains (I count Node.js and co. here), Rust only needs one toolchain, because the latest rustc is always able to compile older Rust code.
The goal of non-rolling release distros is to have a predefined set of dependencies, which the distro maintains and fixes if necessary.
If Rust decides that it no longer supports compiling on x86, or if it starts depending on a version of LLVM that no longer runs on a supported architecture, Debian must fix that for their users. That leaves the curl2bash installers that are popular with fast-moving tools and languages useless for long-term stability. The same goes for crates, which can be pulled at any time for almost any reason and break compiles.
Then there are other setups distros can choose to support, like having update servers/package repositories for updating servers that aren't allowed to connect to the internet or are even air gapped. You can't save rustup to a DVD, but you can take a copy of the Debian repository and update air gapped servers in a way that leaves a permanent trace (as long as you archive the disks).
Not all distros have this problem. In theory the Rust people could set up an apt/repository Debian users can add to get the best features of a package manager and the latest version, and distros like Arch/Gentoo don't bother with the stability guarantees most distros have.
QEMU can ignore these requirements and opt into not being packaged into distros like Debian or Ubuntu or Fedora or RHEL or Oracle Linux. That'd cost them a lot of contributions from those platforms, though, and may risk the project being forked (or even worse, ending up in the ffmpeg/avconv situation).
> If Rust decides that it no longer supports compiling on x86, or if it starts depending on a version of LLVM that no longer runs on a supported architecture
Then you stop publishing new versions of the toolchain for x86 releases of the distro? I fail to see the problem. Nothing prevents you from not making a newer version of the toolchain available. Indeed, that's the default.
> The same goes for crates, which can be pulled at any time for almost any reason and break compiles.
I never said you have to extend this by making all versions of crates available. You can freeze the crates using the same mechanism to redirect Cargo as they do today.
I'm going to ignore the rustup stuff, as I wasn't proposing rustup be used for building the base Debian image itself - that was a comment more about the environment QEMU itself can advocate for people building it, leaving it to the distros to solve their own packaging problems.
Well, at least a couple of years ago, one surprising thing I discovered with rustup was that unlike distros that have a clean install/uninstall, rustup was rm -rf * ing everything in the place it was installed. There was an open bug on it and multiple complaints. I, who needed it for various system builds, had innocently rustup'd to /usr/local. On uninstall, it wiped all of /usr/local. I had backups on that machine, but it was still an unpleasant experience and I did lose a few not-too-important scripts.
I don't know if others have run into the same thing, but it's why I'd trust Gentoo's packaging more.
My point is there should be a discussion about how to make newer toolchains available to end users. This has nothing to do with packaging within the distro, and it's a non-issue since whatever "tip of tree" is for QEMU is what would get frozen as the "blessed" version, and that could use whatever tip of the toolchain it needed, since that's likely the same version that Debian would freeze.
I find it funny and sad at the same time that an installer for a "safe" programming language teaches people to download a shell script from a website and run it. What a farce.
With cert pinning, CT, and other advancements in transport security, I don't see a huge fundamental difference between this and adding a random apt repository & doing an apt install.
You should also not add random apt repositories from the internet. But there is still a major difference in terms of the implications for user education.
By "the Rust community" here, you mean the author of the mail, one of the biggest long-standing developers of QEMU since before Rust was a thing? You're complaining that the developers of QEMU themselves are interested in adopting Rust because they think it'll work well for them?
This kind of thing reminds me of the backlash against systemd, where detractors felt like it was being imposed on the distributions that were for the most part adopting it because they liked what it offered.
systemd has tried to alter the kernel in breaking ways for its own ends plenty of times. It's easy enough to not use, and plenty of distributions do so with some effort, but it's not as if systemd has had zero consequences for the rest of the ecosystem.
How many developers does the project have? What is your metric for "biggest?" Just total number of contributions?
> You're complaining that the developers of QEMU themselves
Yep. Do any of them work for any commercial companies? Or are we ignoring that to make an appeal to authority? If they all left, would QEMU have zero developers?
> because they think it'll work well for them?
Looking at the set of challenges presented at the bottom of the email and their wiki I'm not sure this idea is well founded. There are more wacky compromises listed than there are good ideas.
If you're just going to push hacks upstream and maintain vendor forks downstream then why wouldn't you just start a new project and bring in whatever bits of C code you need until you can replace them.
I've yet to see these efforts actually reduce baggage or show any hope of returns on investment inside of a decade of insane effort, which always seems to hamper any further feature or bugfix development, out of fear of interrupting other agonizing "works in progress."
Some of us think this is clearly goofy and can't help but comment on it.
> There are more wacky compromises listed than there are good ideas.
I am happy to learn about others' opinions and especially contrary opinions, otherwise I would not have made a public post. I am myself not 100% sure that the idea will be successful; it would be stupid to think it certainly will. But the good thing is that the existing C code will be unaffected (or improved; for example https://lore.kernel.org/qemu-devel/20241129180326.722436-1-p... was found while looking at the code to write the corresponding Rust bindings).
Sometimes you have to try crazy things. One perhaps controversial change I headed in QEMU was to introduce the Meson build system. You may wonder what is wrong with me. I am happy to say that since then I have almost never had to ask a question about the build system, which was a very common occurrence before, and it has actually simplified the implementation of features such as custom device configuration, improved module loading, entitlement support for macOS, relocatable installs, autogenerated parsing of the configure command line, and improvements to cross-compilation, while slightly reducing the number of lines of code compared to before. So I disagree that exploratory work is "a decade of insane effort".
What leg do you think you have to stand on to debate in such a manner? If you want to challenge the ideas, here on _hacker news_, then that would obviously be welcome, if all you want to do is appeal to authority in an effort to declare the conversation as invalid then you've added nothing and behaved disrespectfully.
Should call this site "embarrassed hacker propaganda." At least then you'd "have a leg to stand on." What a gross tactic you've relied on here.
Even if you took the whole safety aspect away, why should I start a new project in Rust as opposed to C or C++?
Rust has modern tooling, great IDE support with a language server, nice dependency management with cargo, and I could go on. Writing Rust, in my opinion, also makes it much easier to structure your code and project.
I just recently had to build a medium sized C project. The Makefile alone was at least 700 lines long. In my opinion I also just don't see younger people and newcomers putting up with that when there's a shiny new alternative. And if you want to sustain you have to attract new people and potential maintainers.
Other than that, I agree that the rewriting is mostly not warranted. The thing is, it is about people and not really about code or programming language. If your entire team knows C well and will be able to maintain that project for years to come, it'd be weird to rewrite it all.
Read his comment. This guy literally writes exploits for C/C++ software. Of course he wants you to keep using memory-unsafe languages, otherwise his business dries up!
You should not. Simple as that. Odds are very good there already is a project that does what you want and you should join or buy that. If you start a new project you have a lot of effort to be just as good as the previous one, and those projects are not standing still.
Okay, maybe it makes sense to start a new game. However, you should use an existing game engine, not build your own. A few other things like that exist.
C++'s metaprogramming is still better. Rust needs specialization, const expressions, and a bunch of other things before it can be a full replacement for C++.
You’re right, but most people who are still using C++ over rust aren’t doing so because of better metaprogramming. It’s a relatively minor advantage, and on the other side, the advantages of Rust over C++ are massive.
Say you have to develop an embedded project. Try dealing with Rust and its dependency hell, as everything you do requires a million different packages.. Or develop me a driver that needs DMA.. How about a kernel allocator? Want to do that? Sure just wrap everything in "Unsafe"... So what is the point? Furthermore, Rust programs ironically link to libc.
You're definitely not going to be linking to libc in a no_std environment, which is going to cover much (though not all) of embedded development. That will also remove some of the most bloated library dependencies from even being available to you in the first place, and of the rest, it will debloat a lot of them.
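For reference, a minimal freestanding sketch (assuming a bare-metal target whose linker expects a `_start` symbol; real projects usually get the entry point and panic handler from a runtime crate):

#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With no_std there is no unwinding machinery, so we must say what
// a panic does.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// No libc means no C startup code either; we provide the entry point
// the linker looks for ourselves.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {}
}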
Even if your embedded development is targeting a hosted platform with libstd available, whether you link libc will depend on whether you use libc. Thanks to its stable syscall ABI, you don't have to link anything on Linux. But most other similarly situated operating systems require you to link something, whether it be USER32.DLL/KERNEL32.DLL on Windows, libSystem*.dylib on macOS, or yes, libc*.so on (most of?) the BSDs and other UNIX systems. And when libc is not required by the platform, but you do need some of its functionality (either out of convenience or cross-platform compatibility), you can statically link it (including MSVCRT on Windows).
Yes, it has been and can be done with Rust. But usually by writing "unsafe" code, so what the hell is the point if the majority of the code isn't gonna have the memory guarantees Rust would normally provide? Furthermore, not all MCUs have a Rust compiler or toolchain to use. C is the de facto standard. Not saying it cannot be done, just that I am not sure it's worth the extra effort and overhead.
Entire kernels have been written in Rust with less than 10% unsafe code. The entire line of argument that "the majority of code" needs to be unsafe in such contexts is BS.
Especially when the "Rust culture" suggests building safe abstractions on top of unsafe building blocks, which tends to keep the unsafe code pretty well-contained to a small section of the codebase. There's plenty of Rust frameworks for microcontroller programming (embassy, etc.) that don't involve much if any unsafe Rust at all.
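A sketch of that pattern with a made-up MMIO register: the unsafety is paid once at construction, and everything downstream is safe code.

pub struct StatusReg {
    addr: *mut u32,
}

impl StatusReg {
    /// # Safety
    /// `addr` must be a valid, device-owned MMIO address for the
    /// lifetime of the returned value.
    pub unsafe fn new(addr: usize) -> Self {
        StatusReg { addr: addr as *mut u32 }
    }

    /// Safe for callers; the unsafe block stays contained here.
    pub fn read(&self) -> u32 {
        // SAFETY: validity of `addr` was promised by the caller of `new`.
        unsafe { self.addr.read_volatile() }
    }
}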
Can you link me to a viable/real world kernel rewritten in Rust? One that supports modern hardware? I am not talking about implementing a toy kernel here.
I'm currently working on an embedded project in Rust. The most annoying part is when the project is shared with a non-embedded part, but otherwise it's a breeze to work on.
> "Sure just wrap everything in "Unsafe"... So what is the point?"
https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html : "You can take five actions in unsafe Rust that you can’t in safe Rust .. it’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked .. by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."
You are confusing best practice with what is done all too often. Unsafe should be small blocks, but I've seen people put unsafe on everything even though it isn't needed, thus making it hard to find where it is needed. I'm not a Rust programmer, but I'm led to believe that those people then do things that need unsafe - even when a safe option not only exists but would have been easier to write.
For very low-level stuff (e.g. embedded) you might need a lot of unsafe. For the vast majority of software it’s extremely rare. I worked full time for five years on a Rust project (https://github.com/MaterializeInc/materialize) and anecdotally, unsafe code was much less than 1% of the codebase.
Clangd just works, and quite nicely, for me in Emacs; I didn't even need to install any external package besides clang-tools from my distro's repository. I don't get this argument at all. The dependency management advantage is probably cool until you realize you can basically only trust distro maintainers to provide long-term support for dependencies, and crates.io makes it much harder for them and makes programs less likely to remain secure in the long run. Who's going to fix a heap of Rust abandonware when an exploit is found in their transient dependencies?
The fact that rust makes it harder to write memory corruption bugs is only one of its many advantages. It’s genuinely a lot nicer and easier to use than comparable languages like C++, and I’d prefer it to that even if I didn’t care about memory corruption at all.
Also, what’s wrong with the syntax? I hear a lot that people find the syntax ugly but I never understood what’s so fundamentally different about it compared to any other mainstream Algol-based language
What are the other advantages exactly, besides the toolchain? A lot of people are using Rust where they could have just used Go or another language that is memory safe and actually productive. Maybe I am just being a hater because I grew up on C/C++ but I know for a fact "Rustaceans" are getting out of control. Approaching zealot territory for sure. The time people spend fighting the Rust compiler for a project, they could have just written secure C.. That is just my personal opinion. I am not saying Rust shouldn't exist. I am just saying it isn't some big universal answer to all security and systems programming issues..
> What are the other advantages exactly, besides the toolchain?
It's a very productive language once you're experienced.
I'm one of those people who (initially) didn't care about memory safety at all and just wanted a more productive C++, and for me it delivers on that perfectly. Nowadays I even use Rust for scripting, because with the right libraries it's just as productive for me as Ruby and Python can be while being orders of magnitude faster.
I always find it funny when I see the "fighting with the borrow checker" meme, or as you say - "people spend fighting the Rust compiler for a project", where people complain how extremely unproductive Rust is. This is very much true, if you're a beginner. It's a language with a very high skill ceiling (similar to C++).
So you basically never have to worry about lifetimes or memory management or the borrow checker? Because that would be a prerequisite for it to be as productive as Python.
I'd love to see a seasoned Python developer and a seasoned Rust developer comparing the time they spend to solve e.g. Advent of Code. I bet the Python dev would solve it at least ten times faster (developer time, not execution time).
> So you basically never have to worry about lifetimes or memory management or the borrow checker?
Yes. Once you're experienced enough you naturally start writing code which satisfies the borrow checker and you never really have to think about it. At least that's how it is for me.
> Because that would be a prerequisite for it to be as productive as Python.
It's not that hard to be more productive than Python for a lot of tasks, simply because Python isn't actually the most productive language in a lot of cases, it's just the most well known/most popular. (:
I do a lot of data processing in my scripts, and for many years my default was to use Ruby. The nice thing about Ruby is that things which take 3~4 lines of Python usually only take 1 line of Ruby and are significantly more convenient to do (e.g. it has proper map/filter/etc., nice multiline lambdas, regex matching is integrated into the language, shelling out to other processes is convenient and easy, etc.), which translates into significant productivity savings when you just want to whip up a script as fast as possible.
So some time ago I started writing my scripts in Rust instead of Ruby (because I often deal with multi-gigabyte files, so the slowness of Ruby started to be a problem; otherwise I would have kept using Ruby). And I've made myself a small custom library that essentially allows me to use Ruby-like APIs in Rust, and it's remarkable how well that actually worked. I can essentially write Ruby-flavored Rust, with Ruby-like productivity, but get Rust-like performance.
In C++ I have learned the patterns and so I rarely need to worry about lifetime - everything is either on the stack or a unique_ptr. Even when I need to take a pointer I know I don't own it but my project has clear lifetime rules and so I normally won't run into issues.
The above is not perfect. I do sometimes mess up, but it is rare, and that is C++ so I don't get tools/the language helping me.
> So you basically never have to worry about lifetimes or memory management or the borrow checker? Because that would be a prerequisite for it to be as productive as Python.
To be more productive than Python? I almost never have to worry about lifetimes or the borrow checker. And even when I do, I'm still more productive.
I wrote a comment a while back on this topic; someone asked for a comparison between a little Python script and Rust. You can see both versions linked here https://news.ycombinator.com/item?id=40089906
That's been my experience. The troubles I've had with Rust were when I was trying to do things the way I would've in C, and either rustc or cargo clippy told me that my idea is bad and there's a better way. I feel like I've learned a lot about better coding in general from it.
> The time people spend fighting the Rust compiler for a project, they could have just written secure C
I simply don't believe this anymore, based on the number of buffer overflow and memory corruption CVEs coming out of even mature C codebases every year
CVEs are being assigned recklessly these days. Most of them aren't even security bugs or cannot be exploited. Be wary of CVEs and their practical utility.
The US government, Linux, Google, Mozilla and many more see value in memory safe languages, yet I see so many people, like you, saying "No, they're all wrong. Get good.". I just don't get it.
What insights do you have that makes you more qualified than all these organizations combined? What are they all missing?
I have been doing government work for a while and understand why they respond that way. Nation states can in fact develop very powerful kill chains. My entire startup was created and sold based on a vehicle kill chain, but that took us a year and a half to do… Meanwhile some kid in his mom's basement uses leaked credentials and sprays a network, or phishes, and boom. Major breach…
A team can learn and start using Golang in a week. That is in fact what is powering a lot of companies right now. Golang has even better memory safety guarantees than Rust, and you don't really need to worry about memory management at all... Furthermore it's compiled statically and more suitable for distribution and horizontal scaling. I am not sure how quickly a team can become productive in Rust, but I am willing to bet that it would take way, way longer to get off the ground. That being said, using an LLM might help with that, but then you would still have code people don't understand, and that becomes technical debt... Maybe I am just old and grumpy.. I am learning Rust myself and I just don't understand why it's being pushed so hard. I think Zig should be pushed for systems programming if anything..
> Golang has even better memory safety guarantees than Rust
I don't think that's true at all. For one, Go has data races, which lead to undefined behavior and memory corruption. For example, appending to the same slice from multiple threads will corrupt its metadata and can lead to out-of-bounds reads and writes.
> A team can learn and start using Golang in a week
True, but why should we be optimizing for the first week experience? Your career lasts 40 years.
> Golang has even better memory safety guarantees than Rust
That is not true. What specific example did you have in mind? As an example of something Rust can enforce that Go can’t is not mutating something that’s shared between threads without acquiring the proper lock.
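A short illustration of that point (a hypothetical shared counter): the data is only reachable through the lock, so "forgot to take the mutex" is a compile error rather than a data race.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The only way to touch the u64 is through the guard
                // returned by lock(); unlocked access doesn't compile.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}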
> Furthermore it's compiled statically and more suitable for distribution and horizontal scaling.
Rust can be statically linked just like Go can. Not sure what else you think makes it less suitable for distribution and horizontal scaling. There are certainly lots of companies distributing Rust programs and horizontally scaling them so this seems empirically false.
> I am learning Rust myself and I just don’t understand why it’s being pushed so hard.
Because it has a lot of nice features that make a lot of people like it - memory safety without GC, prevention of data races, algebraic data types, etc. No other mainstream compiled languages has this set of features. There’s no conspiracy to “push” Rust. The push is organic. People just like it.
> "I know for a fact "Rustaceans" are getting out of control"
That isn't a fact, that's an opinion. A pearl-clutching, panicky, fact-free opinion framed in terms of "control" which raises questions about who you think should be "controlling" those uppity people who are doing things you don't like.
Seriously - an explosion can be out of control, but other people aren't supposed to be in your control in the first place, right? That's basic freedoms and so on. How is your position any different to any other entrenched social / power structure attempting to control people who want things to change?
> Maybe I am just being a hater because I grew up on C/C++ but I know for a fact "Rustaceans" are getting out of control. Approaching zealot territory for sure.
This is a perfectly level-headed submission about trying out Rust in some corner of the Linux Kernel. Which you then take as an opportunity to go on this rant... I’ll let people read my conclusion between the lines.
> The time people spend fighting the Rust compiler for a project, they could have just written secure C.. That is just my personal opinion.
Empirical evidence from the largest software firms in the world, who do research on objective metrics on software defects, shows that there's no such thing as secure C and that Rust is slightly more productive.
> Creating an entire language to prevent such bugs is excessive, especially given the current state of software security.
I'm having a hard time reconciling those 2 statements. I can imagine any 1 person having one or the other of those opinions, but not both at the same time.
> Meanwhile, the vast majority of security issues (around 98%) come from basic human errors, such as reusing passwords or falling for phishing, not sophisticated zero-days exploiting memory corruption.
We're not limited to doing just one thing, you know. Some people are working on improving the authentication space. Others are working on making it easier for developers to write safe code. Both of those can progress at the same time.
lcamtuf, the creator of AFL, shares the same opinion… I linked to his Substack in my original comment. Give that a read. My main point is that overemphasis on memory safety doesn't seem to pay off when there is lower-hanging fruit on most large networks. Obviously I want memory safety, but that's irrelevant when there are easier attack vectors still open.
I read it. He's not saying what you seem to think he's saying.
That aside, any solution that involves tooling people have to opt into is doomed to fail. We've seen this a thousand times throughout history: if you require people to go out of their convenience to choose the safer option, they won't do it. It doesn't matter how great the tools are. How cheap they are. How easy they are. If they're not part of the standard pipeline everyone gets unless they deliberately modify it, they won't be used.
You've said a few times that C/C++ have tools that get you most of the same controls as Rust. First, they don't. But even if they did, those tools aren't used by default. Rust makes everyone write code that satisfies the borrow checker whether they want to or not. There's no `-Wborrow-mistakes` flag we have to set. There's not even a `-Wno-borrow-mistakes` flag we can set if we get tired of the error messages. If you write Rust, borrows are safe, and we don't get a say in it.
You can write C code that satisfies Coverity (where I used to work) and `-Wall` and Valgrind and so on, but the person next to you might not. And as long as it's easier to write unchecked C than to run it through a tester before validation, it won't be as safe as Rust at the things Rust checks. It can't be. Anything that depends on humans to make the safe choice is dead before it starts.
I agree the rewrite-everything-in-Rust meme is overblown, and the Rust evangelists can be insufferable. I also agree that, at a glance, Rust syntax is (in my opinion) quite ugly. Once you start using the language it makes more sense why things are the way they are, but it was off-putting to me at first and I still think it's ugly now.
However I don't think adding Rust support, rewriting old, critical components, using Rust for a new project, etc. are bad things. If someone volunteers to rewrite critical components in a way that eliminates (or significantly reduces) one of the most exploited and damaging classes of bugs, while also adding support for said way, then I don't see the problem. If someone opens a GitHub issue on your C++ project telling you to rewrite it in Rust, just close it and move on.
I understand that. It's just that they end up writing Rust code, and then you go and see it's linked to libc or filled with "unsafe". It isn't too difficult to write correct, safe C++. You can enable compiler settings that are similar to what Rust does...
On the critical bugs issue... Exploiting a lot of these memory bugs takes an entirely new level of effort. We often need to chain together three or more bugs and land the exploit reliably. This was much easier to do circa 2001-2016, but all the mitigations in place have really raised the bar. From my experience providing exploits to red teams that pen-test Fortune 500 companies (my previous job at iDefense/FusionX), things have gotten far, far more difficult. We rarely needed to use any advanced "sexy kill-chains" (like a Chrome or IE exploit) because phishing and other means of network entry were far more reliable and much easier. My point is that the study Google released showed the bugs are dangerous and disastrous, yes, but they aren't the most prevalent risk in the real world. I am rambling a bit. Sorry for poor grammar, typing on phone.
The current trend is people rewriting Unix command line utilities in Rust, as a hobby. Nobody is rewriting Adobe Photoshop or Oracle in Rust.
Can you name some projects which you class as "out of control"?
[I don't use Rust, so this isn't "you should RiiR". Adobe isn't rewriting photoshop in Rust because of some Twitter/tech news hype. And if Adobe is rewriting Photoshop in Rust over the coming decades because of government guidance, is that really "out of control"?]
> Exploiting mature software like Adobe Reader is already incredibly challenging due to its hardened defenses.
Is this a troll? Isn't Adobe Reader one of the easiest pieces of software to exploit, because it enables risky features by default and lacks proper sandboxing? Just searching for "adobe reader security vulnerability" brings up a critical software update for CVE-2023-26369 as a top hit, which is:
> Acrobat Reader versions 23.003.20284 (and earlier), 20.005.30516 (and earlier) and 20.005.30514 (and earlier) are affected by an out-of-bounds write vulnerability that could result in arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
> The claim, often cited from a Google study, that memory corruption bugs pose the largest risk feels exaggerated in this context
Actually this comes from analysis of NIST data, which tries to track vulnerabilities; memory safety resulting in arbitrary code execution consistently shows up as the number one issue for C/C++ despite being a non-issue for most other languages. Indeed, here are the vulnerabilities for Adobe Reader, and they're dominated by memory safety issues: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Adobe+reade...
> Don't even get me started on Rust syntax... I am convinced the syntax was intentionally developed to troll us all with the ugliest syntax on the planet.. I'll stop now before I begin ranting forever.
Rust shows what it takes for a language to achieve compile-time memory safety (and the same machinery also provides thread safety) with the performance profile of C/C++. The syntax actually isn't that hard to get used to and learn, and I'd argue it's easier for beginners, since incorrect code will throw an error at compile time and explain what's going wrong, instead of crashing at runtime. On the static-vs-dynamic verification spectrum it leans much further toward static verification. Indeed, from that perspective, why are you even coding in C++? Use assembly if it's too high level and you don't like syntax getting in your way, or use JavaScript so that you can write code that has almost no static checking.

In practice Rust is waaaaay nicer to write in, with a much more mature and reliable project build system that works the same everywhere and a rich ecosystem of tooling that is trivially accessible. The standard library is high performance, with a lot of things available that C++ could only dream of, while C++ is still arguing in the standards body about how to make breaking changes to the language - a problem Rust developers don't even think about (e.g. Rust hash maps are faster and higher quality than C++ maps).
Developing a full kill chain for Adobe usually requires chaining together several bugs. "CVEs" are getting ridiculous. Prove to me it is easy: go win Pwn2Own, or do what I do and sell it to government contractors for a hefty price...
It doesn't matter whether it's easy or hard. It's possible, and then we're just talking about the $ required to purchase it on the black market (government contractors don't really pay well, unless I'm misinformed). And a vulnerability remains a vulnerability forever, until your victims patch it.
Said another way: based on CVEs, most of the focus of attackers is on memory safety vulnerabilities, which means that regardless of the price they otherwise fetch, these are still the cheapest and most valuable exploits to uncover in terms of exploit power per dollar spent.
Thanks for posting this to HN! Author here, happy to answer any questions.
(By the way, I did not originally start the project, though I've worked quite a bit on the safe abstractions that are mentioned in the roadmap).
> Related to this, the IsA trait allows typesafe compile-time checked casts. Unlike in C code, casting to a superclass can be written in such a way that the compiler will complain if the destination type is not a superclass, and with zero runtime cost.
This is unfair to C. With a little work when defining all classes (or non-leaf classes if you're willing for a hairier implementation), you can do this there too.
There are probably other ways, but the way that seems most "obvious" to me is to define the base via union of all bases in the chain, rather than only the immediate base class. So:

    /*
    class Foo {int f;};
    class Bar extends Foo {int b;};
    class Qux extends Bar {int q;};
    class Leaf extends Qux {int l;};
    */
    struct Foo { int f; };
    struct Bar { union { struct Foo base, base_Foo; }; int b; };
    struct Qux { union { struct Bar base, base_Bar; struct Foo base_Foo; }; int q; };
    struct Leaf { struct Qux base; int l; };

Then your "compile-time-safe-cast to base class" macro just checks 3 things in order:
1. check if we're already the right class.
2. check if `base` is the right class.
3. unconditionally try to use the appropriately-named `base_Foo` member as calculated from the type.
(runtime-safe-checked downcasts just have to check that the reverse is possible)

As a recovering language lawyer, I need to point out that what you are describing is not portable behavior, because it is writing to one member of a union and reading from a different member, if I understood you correctly.
Will it work on most tool chains? Yes. But when it doesn't, it is going to be fun.
No, you only need to assert that the field with that name exists.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." :D
Though I admit that Rust also has to include the full list of base classes (https://github.com/rust-lang/rfcs/pull/1268 would fix it).
The main reason the C version must hard-code the list is that C macros can't be recursive. What's Rust's excuse?
(that said, the C approach is also nice for manually accessing fields of distant base classes)
In Rust's case it is that when you say "you have IsA<Device> for all types that implement IsA<PCIDevice>" you are actually saying "you have IsA<Device> for all types, but only if they implement IsA<PCIDevice>".
When you later say "you have IsA<Device> for all types, but only if they implement IsA<I2CDevice>" the two conditions conflict.
It might be possible to avoid this issue with a different implementation, but this is the simplest one that works, and QEMU's hierarchy is generally shallow. If a better implementation came along, it would be a matter of search and replace.
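A stripped-down sketch of the conflict (hypothetical trait and type names, not QEMU's actual code):

    trait IsA<T> {}
    struct Device;
    struct PCIDevice;
    struct I2CDevice;

    // "Everything that is a PCIDevice is also a Device":
    impl<T: IsA<PCIDevice>> IsA<Device> for T {}

    // error[E0119]: conflicting implementations of trait `IsA<Device>`.
    // The compiler cannot rule out some T satisfying both bounds.
    impl<T: IsA<I2CDevice>> IsA<Device> for T {}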
> define the base via union of all bases in the chain [...] just checks 3 things in order
Can you please show a concrete example? Thanks.
What does your wishlist for Rust look like? (Besides "simpler C/Rust interoperability", of course.) Has QEMU run into things that Rust-for-Linux hasn't, that feel missing from the Rust language?
Right now the only language-level thing I would like is const operator overloading. Even supporting MSRV as old as 1.63 was not a big deal, the worst thing was dependencies using let...else which we will vendor and patch.
Pin is what it is, but it is mostly okay since I haven't needed projection so far. Initialization using Linux's "impl PinInit<Self>" approach seems to be working very well in my early experiments, I contributed changes to use the crate without unstable features.
In the FFI area: Bindgen support for toml configuration (https://github.com/rust-lang/rust-bindgen/pull/2917 but it could also be response files on the command line), and easier passing of closures from Rust to C (though I found a very nice way to do it for ZSTs that implement Fn, which is by far the common case).
The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.
Outside the Rust core proper, Meson support needs to mature a bit for ease of use, but it is getting there. Linux obviously doesn't need that.
BTW, saw your comment in the dead thread, you're too nice. I have been curious about Rust for some time and with Linux maturing, and Linaro doing the first contribution of build system integration + sample device code, it was time to give it a try.
> Right now the only language-level thing I would like is const operator overloading.
As far as I know, const traits are still on track.
> easier passing of closures from Rust to C
As in, turning a Rust closure into a C function-pointer-plus-context-parameter?
> The "data structure interoperability" part of the roadmap is something I should present to someone in the Rust community for impressions. Some kind of standardization of core FFI traits would be nice.
Would be happy to work with you to get something on the calendar.
> As far as I know, const traits are still on track.
Yes they are. My use case is something like the bitflags crate; there are lots of bit flags in emulated devices, of course. In the meantime I guess it would be possible to use macros to turn something like "bit_const!(Type:A|B)" into "Type(A.0|B.0)" or something like that.
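Roughly this kind of sketch, assuming a made-up tuple-struct flag type (not the bitflags crate's actual API):

    #[derive(Clone, Copy)]
    struct Flags(u32);
    const A: Flags = Flags(1 << 0);
    const B: Flags = Flags(1 << 1);

    // Expands bit_const!(Flags: A|B) into Flags(A.0 | B.0), which is
    // valid in const context even without const operator overloading.
    macro_rules! bit_const {
        ($t:ident : $($flag:ident)|+) => { $t($($flag.0)|+) };
    }

    const AB: Flags = bit_const!(Flags: A | B);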
> As in, turning a Rust closure into a C function-pointer-plus-context-parameter?
Yes, more in general everything related to callbacks is doable but very verbose. We might do (procedural?) macro magic later on to avoid the verbosity but for now I prefer to stick to pure Rust until there's an idea of which patterns recur.
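For the non-ZST case, the usual shape is a monomorphized trampoline; a sketch, where c_register stands in for some hypothetical C API:

    use std::ffi::c_void;

    extern "C" {
        // Hypothetical C function taking a callback plus a context pointer.
        fn c_register(cb: extern "C" fn(*mut c_void), ctx: *mut c_void);
    }

    // One trampoline is instantiated per closure type F; it recovers
    // the closure from the context pointer and calls it.
    extern "C" fn trampoline<F: FnMut()>(ctx: *mut c_void) {
        let f = unsafe { &mut *ctx.cast::<F>() };
        f();
    }

    fn register_callback<F: FnMut()>(f: &mut F) {
        // Caller must keep `f` alive for as long as C may invoke it.
        unsafe { c_register(trampoline::<F>, (f as *mut F).cast()) };
    }

For a ZST that implements Fn there isn't even a context pointer to manage, since the callable carries no data, which is presumably what makes that case so pleasant.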
Let me know by email about any occasions to present what I have.
I made this handy function to pass callbacks to C: https://github.com/andoriyu/uclicious/blob/master/src/traits...
Bullish or bearish on Rust in the kernel?
Hello! Small world. :)
For drivers, it's already happening, especially for graphics but not limited to that. 6.13 has some very important changes. A lot of Linux is drivers so that's already a reason to be bullish.
Answering for QEMU instead: it depends on the community being willing to share the burden of writing the FFI code. Despite Rust being low level, there is still a substantial amount of work to do. Replies to the roadmap pointed out tracepoints as an area where I know nothing and therefore I would like someone else to do the work (I am working mostly on the object and threading models, which is also where a lot of the impedance mismatch between Rust and C lies).
Hello. I assume tracepoints mean kprobes/uprobes or something along those lines? I've just this weekend worked on implementing/adapting a crate for DTrace USDTs aka DTrace probes to also work on Linux and generate SystemTap SDTs (aka USDTs aka dtrace probes).
This is probably a little different from tracepoints in the kernel space but I'm somewhat interested in going deeper and into the kernel side of things. Let me know if you have any pointers as to where I might be of concrete assistance to you!
QEMU has several tracepoint providers, the main ones are (a slightly fancy version of) printf and USDT. There is a Python program that generates the C code for the chosen backend(s), so the thing to do would be to adjust the script to produce either an FFI bridge or the equivalent Rust code.
Can you link the script? This sounds vaguely like something that would be no skin off my back, so I'd be quite happy to help with this.
Yes it's https://github.com/qemu/qemu/tree/master/scripts/tracetool.
I also found https://github.com/cuviper/probe-rs/tree/master/src/platform which seems interesting.
I've said this before on here and I'll say it again. The QEMU code base is a nightmare. The amount of fake C++ is mind-numbing. Every time I come across a variable or struct declaration or method with the word "class" in it, I'm reminded of how much easier the whole thing would've been with C++. You can't even compile C++ into QEMU because of how the headers use keywords. That's not even touching their macro-abused "template functions". You know what has templates and classes? C++. And constructors. There's just so much.
All this to say: if Rust can make it in, that's great, because I'm tired of dealing with C's simplicity for more complicated tasks. I'm not a rust user but if it lets me use classes and templates, I'll switch over
> I'm not a rust user but if it lets me use classes and templates, I'll switch over
Yer not switching any time soon then. Rust does have methods but not classes (QOM's inheritance is specifically called out as an issue in TFA), and it uses Haskell-style generics rather than C++-style templates.
I mean it has polymorphism via vtables and composition via traits; that's enough object-orientation for me. Inheritance is a core principle of OOP, but most C++ class hierarchies are relatively flat (there are exceptions like Qt, which I think uses inheritance in a good way), and in practice most of that can be mimicked with embedding and composition well enough to work. Not sure if you want to call that object-oriented, but operating on polymorphic structures that encapsulate their data is pretty close to object orientation (the good parts of OOP, at least). And I'm not a big expert on Rust, but I think you can do most of the things we do in C++ with templates using macros.
> Inheritance is a core principle of OOP
That's a principle of Simula's (and descendants') proto-OOP objects, not of OOP as it was envisioned by the one who coined the term.
http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...
> I didn't like the way Simula I or Simula 67 did inheritance (though I thought Nygaard and Dahl were just tremendous thinkers and designers). So I decided to leave out inheritance as a built-in feature until I understood it better.
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP.
Fun that this was mentioned too:
>> it uses Haskell-style generics
> polymorphism via v-tables and composition via traits, that's enough object-orientation for me
For which Alan Kay had this to say:
> My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful. The term "polymorphism" was imposed much later (I think by Peter Wegner) and it isn't quite valid, since it really comes from the nomenclature of functions, and I wanted quite a bit more than functions. I made up a term "genericity" for dealing with generic behaviors in a quasi-algebraic form.
In a way it's fascinating to see how C++ has shaped (dare I say warped) the collective vision of how to do OOP.
For example, as a Rubyist (Ruby was heavily influenced by Smalltalk), it is fascinating how hard it can be to explain how fundamentally different message vs call are, and what's the whole point of modules, mostly because the other viewpoint is so warped as to cognitively reject that it can be any different from the C++ model.
The one lagging thing that isn't easy with Rust's generics is const expressions, and that looks to be getting kicked down the road indefinitely. You can't have Foo<N> * Foo<M> -> Foo<N+M> or similar. That is a useful thing for statically sized math libraries and for performance-oriented metaprogramming. The latter can be clumsily handled with macros, but not the former. It is also blocking portable SIMD from stabilizing, apparently now indefinitely. I still wouldn't pick any other language for a new production product, but I find the stalling out on const generic expressions frustrating, as SIMD and static math libraries are things I use daily.
> You can't have Foo<N> * Foo<M> -> Foo<N+M> or similar.
You can though, unless I totally misunderstand your syntax
You misunderstood. N, M are supposed to be integers (const generics); in your example code you've made them types. Also, your `type Output = Foo<<N as Add<M>>::Output>;` just means "multiplication has the same return type as addition". But desired is that multiplying a Foo<4> with a Foo<3> results in a Foo<7>.
Rust decided that it's important to not have instantiation-time compiler errors, but this makes computation with const generics complicated: you're not allowed to write Foo<{N+M}> because N+M might overflow, even if it never actually overflows for the generic arguments used in the program.
aiui this isn't inherently limited by instantiation-time error messages and is available on nightly today with the generic_const_exprs feature. It's in the pipeline.
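For reference, here is roughly what that looks like on nightly under the feature gate (Foo is a stand-in for, say, a statically sized vector):

    #![feature(generic_const_exprs)]
    use std::ops::Mul;

    struct Foo<const N: usize>;

    impl<const N: usize, const M: usize> Mul<Foo<M>> for Foo<N>
    where
        [(); N + M]:, // nightly's way of requiring N + M to be evaluable
    {
        type Output = Foo<{ N + M }>;
        fn mul(self, _rhs: Foo<M>) -> Foo<{ N + M }> {
            Foo
        }
    }

    // Foo::<3> * Foo::<4> now has the type Foo<7>.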
I believe this kind of stuff is being worked on.
Instantiation time errors unlock so much metaprogramming potential that Rust is going to be forced to allow them sooner or later.
I could go on and on about the limitations of min_const_generics, missing TAIT (type alias impl trait), unstable coroutines/generators, etc. but none of that stuff erases how much of a miracle Rust is as a language. The industry really needed this. It's practically the poster child of memory safety and security-critical industries are picking it up and Ferrocene is pushing it in safety-critical environments as well and it's just. Good. Please. Finally a serious contender against the terrible reign of C and C++.
You can do it with macros; the problems are A) documentation and B) macro abuse.
It's so much harder to debug things when function calls are secretly macros, and if I didn't have the VS Code C++ language server for goto-definition, I'd be completely lost. I'd wager that only 5% of the #defines aren't auto-generated by occasionally recursive macros. Maybe hyperbole. Makes it really hard to figure out how existing code works.
I don't think he literally means templates and classes. Rust has equivalents that do the things you want templates and classes for (generics and structs/traits respectively).
I completely agree with his point about reimplementing C++ badly in C. GNOME does this too in their libraries. He will be much happier with Rust.
On the other hand, it seems like GObject was good for GNOME, as it made it easier to bind GTK to other languages.
Hmmm I think I actually really want classes. Something that combines data and methods together without more function pointers. Template vs generic I don't care about
Rust structs do bind data and methods together.
Rust structs are in some ways similar to C++ classes. You can combine data and methods together with them without worrying about things like function pointers.
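A tiny sketch:

    struct Counter {
        n: u64,
    }

    impl Counter {
        // Methods live in an impl block and are called as
        // counter.bump() - no function pointers stored in the struct.
        fn bump(&mut self) {
            self.n += 1;
        }
    }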
:( the QOM inheritance is where I've had my worst bugs. Recently I merged from master (upgrading me from version 8.something to 9.2.?) and they dramatically changed the resets. I tried the new way and had segfaults in bad places that went away when I reordered code that shouldn't have ordering requirements. That was too scary, so I switched to their legacy reset function and it was all fine again. All this blind casting of opaques makes me nervous.
The legacy reset way is fine, and yeah this is where being more strict and doing more stuff within the language should help.
C++23 is a godawful mess, especially the functional paradigms (which look beautiful in e.g. OCaml) that got shoehorned kicking and screaming into the morass that C++ had already become.
If you read function specs in the OCaml docs, they're understandable and the syntax is clean; the same concepts bolted onto C++ look like line noise, both in actual syntax and in the (allegedly) English-language description on en.cppreference.com.
Reading the C++ standard itself is an exercise in futility, very much unlike the C standard. (Although, latest developments in C land are repulsive too, IMO.)
The C++ committee's tantrums and scandals (physical violence at meetings!) put the worst that has been seen in open-source communities to shame. <https://izzys.casa/2024/11/on-safe-cxx/> has been posted to reddit and HN recently.
C++ compilation still takes absolutely forever, and C++ template error messages are as incomprehensible as ever.
One important problem with C++ (repeated ad nauseam, so this is nothing new) is the unforeseen and unintended harmful interactions between such language features that were supposed to be orthogonal. The only remedy for that is to stop adding stuff to the language; but nooo, it just keeps growing.
Another important problem is that, the more the compiler does for you implicitly, the less you see (and the less you can debug) what happens in the code. This is not a problem in OCaml, which is a managed language with a safe runtime, but it's a huge problem in C++, which remains a fundamentally unsafe language. (And yes, once you start extending OCaml with C, i.e., mixing implicit and explicit, OCaml too becomes unsafe, and a lot of head-scratching can occur.) A home-grown object system in C is at least explicit, so it's all there for the developer to read, instrument, step through, and so on.
When your "core guidelines" <https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines> could fill a book, there's a problem with your language.
(I'm not here to defend QEMU's object model indiscriminately; I'm here to bash C++.)
I can't believe I just read that entire izzys.casa post. Wow. I couldn't possibly assess it all for accuracy, but if that's reasonably correct, just wow.
What are the advantages of trying to use fake c++ instead of actual c++ for their use case? I'm sure there are / were smart people working on the project. Do they keep decision records? Have you had a conversation with the relevant people about it?
> What are the advantages of trying to use fake c++ instead of actual c++ for their use case?
There are exactly none, but there was a time during the mid- to late '90s when it was hip to implement OOP frameworks on top of C (C++ only really became a realistic option once C++98 was widely supported, which took a couple of years; also, there was such extreme OOP hype in the '90s that it is hard to believe today - EVERYTHING had to be OOP, whether it made sense or not).
QEMU seems to be from around the end of that era (first release apparently in 2003), and looking at the code it looks exactly as I imagined, with meta-class pointers, constructor- and destructor-functions etc... it would almost certainly be better to use a simple non-OOP C-style, but that was an 'outdated' point of view at that time (since everything had to use that new shiny OOP paradigm).
The object model in QEMU is from 2007-2011. Internally there isn't a lot of inheritance (it's pretty shallow), but yeah I guess "is-a" is what was in vogue at the time.
However there is a reason to have the object model, and it was added because some things were getting out of hand. You had to write three parsers for everything you added: one for command line, one for the text-based command interface and one for the JSON API. And you had to do it once for each kind of "object" (device, network backend, disk backend, character device backend etc.). The object model is designed to let you write code just once for all three and to reuse the interface across object types.
That 90s OOP hype really mirrors the contemporary FP hype (and the 70s–80s structured programming hype). Programming is definitely subject to fads.
C++, especially before C++11, was a total mess. Even today, it's super easy to shoot yourself in the foot and you literally can't learn the entirety of the language due to how massive it is. This still doesn't justify doing the absurdity of macro magic and garbage I've seen people pumping out in C over the years though.
IMHO if you deliberately use C it's because you want to keep things simple. Sometimes C++ will inevitably lead to overcomplicated designs nobody needs; there's a reason why 90's C++ feels massively outdated, while 90's C is still (mostly) accessible. C is a great way to curb the urge people have to shovel in OOP even when it doesn't really make sense. When I see some bullshit C++ code at work, I often think, "if people had to write this in C, they wouldn't have overthought it this much"...
My 2 cents is that there was a Java craze back in the '90s that basically infected all code designed in that time period, and that's how we got some very weird nonsense that in hindsight was poorly thought out. Just look at stuff like GTK+ and GObject...
The GObject system, for all its faults, serves a purpose. Similar to COM on Windows, it allows mapping of higher-level languages to libraries. Without it there wouldn't be all the bindings we have to Python, JavaScript, Vala and Rust today. I wouldn't say it was poorly thought out so much as mismatched with its users' typical uses and expectations.
Yeah, but it made literally zero sense to write GObject in C. They've reimplemented inheritance and the like with macro BS while the GNU project literally had both a C++ and an Objective-C compiler - Objective-C, warts and all, was a perfect fit for GObject.
I have been guilty of this, if only in school. A class assignment could be completed in Java or C; I knew C++ and C, and my teammates knew C. I really wanted to do OOP because I was ignorant, but none of us knew Java, so we did OO in C. It was horrible. I like the simplicity of C. I wish there was a Zig that was immutable by default and had a borrow checker, but I think that road inevitably leads to a GC'd, JITted language or to something as complicated as Rust. Well, almost as complicated; at least macros would still be written in the same syntax as normal runtime code.
The only advantage I know of is that it means you stay within the C ABI. It can simplify linking a little in some cases, or FFI. I guess compilation is faster too.
But yeah broadly speaking it's a terrible idea.
I worked for a large infra project that was mostly written in C with the kind of trickery that is being here ascribed to QEMU code (I trust the parent, but I haven't seen the code myself).
It's a common thing to do in projects like this in C. While C++ solves some of the problems for larger projects, it brings so many more, it's usually not worth it. Projects of this kind, essentially, invent their own language, with their own conventions. C is just a convenient tool (given the alternatives...) to do that, as it's treated as a kind of a transpilation target. Think about languages like Haskell which essentially work like that, but went even further.
So, what typically happens is that memory allocation, I/O and concurrency need to be done differently from how the language runtime wants to do that. So, at this point, you basically throw away most of the existing language runtime. From my recent encounters with different bugs in QEMU, I imagine that they really, really need a lot of customization in how memory is allocated (my particular case was that QEMU was segfaulting when running ldconfig in aarch64 emulation, which turned out to be due to ldconfig having some weird custom-made memory allocation pattern which QEMU cannot account for.) It's easier to fight this battle with C than it is with C++.
I was going to post the same thing. Others on this thread may not have had experience with very large C codebases and hence haven't seen this play out. To a large degree C++ is just capturing what was already widespread practices in C, and indeed assembly before that.
You can define your own "language", the way you describe it, in C++ more easily than you can in C. C++ gives you more powerful and expressive tools for doing so. A program is not improved by foregoing these tools. There's nothing in C++ forcing you to do anything that makes anything "harder".
C++ makes everything harder by tempting you with all the unnecessary tools. And some of the mental load only goes away if you can be 100% sure that nobody in the project uses the feature.
I wouldn't be so quick to put "easy" and "C++" in the same sentence... Also, C and C++ language tools suck when it comes to making a language, compared to anything in the Lisp family, for example, or any language that has Lisp-like macros. C wasn't chosen for its ability to make other languages. It was an easy target. The benefits are the simpler and more malleable runtime that can be made to do all sorts of weird things where C++ would be much harder to deal with.
In other words: if you don't want the language runtime - if you don't want C++ templates, smart pointers, classes, lambdas, exceptions, etc. - but you do want to be able to manage memory for, e.g., a realtime kind of system... simply removing all the unwanted stuff from C++ is harder than it is with C.
And, if you did want some / all of those features of C++, there are saner languages to have that (and you'd pay for it by having to drag in their runtime and conventions). Before Rust was a thing, I've seen people choose D instead, for example, as a kind of middle ground between the asceticism of C and unnecessary excess of C++.
Lisp is a non sequitur. C++ can do the same "weird things" C can. There is literally no advantage whatsoever of choosing to wear the C hairshirt instead of using C++. Not a single one.
Sure, Rust is better than C++ in some ways. You can have a legitimate debate about C++ versus Rust. There can be no debate about C versus C++: the latter is better in every single way.
> unnecessary excess of C++.
Is the unnecessary excess of C++ in the room with us right now?
Again, you don't have to use any part of C++ you don't like. A few minor spelling differences aside, it remains a superset of C. Using C++ not C will not hurt you in any project whatsoever.
Yeah, even integrating any C++ into it and calling any QEMU functions via extern "C" declarations when including the headers causes many issues, because many variables are named "new" or have type cast issues.
It's a full time job on its own to fix all of the compilation errors. Could probably fix it with coccinelle scripts to make some parts easier, but still, validating the codebase with different compilers to make sure there's no resulting subtle breakage either still requires a lot of effort.
Any ideas why people are relying on distro-packaged Rust for development instead of rustup? For Rust it feels weird making development choices around several-year-old versions of the language.
Rustup downloads toolchains from third-party (to the distro) repositories; distros do not want to be in a position where they can no longer build packages because of an external service going down.
So, if you are developing something you want to see packaged in distros, it needs to be buildable with the tool versions in the distro's repositories.
(Not just rustup - Debian requires repackaging Cargo dependencies so that the build can be conducted offline entirely from source packages.)
You’re answering a slightly different question but to me that’s a Debian packaging problem to solve. It’s weird to me that QEMU devs take this problem seriously enough to be putting in all sorts of workarounds to support old versions of the toolchain in the tip of tree just to privilege Debian support.
This feels more like a CI thing for the QEMU project and I’m sure solvable by using rustup or a trusted deb repo that makes the latest tool chain available on older Debian platforms.
As for Debian itself, for toolchains it really should do a better job backporting more recent versions of toolchains (not just Rust), or at least making them available to be installed. The current policy Debian is holding is really difficult to work with and causes downstream projects to do all sorts of workarounds to make Debian builds work (not just for Rust, by the way - this applies to C++ as well). And it's not like this is something it's unfamiliar with - you can install multiple JVM versions in parallel and choose a different default.
It's not about our own CI -- we could easily use rustup as part of setting up the CI environment, and I think we might actually be doing exactly that at the moment.
Lots of QEMU users use it through their downstream distros. We even recommend that if you're using QEMU in a way that you care about its security then you should use a distro QEMU, because the distros will provide you timely security fix updates. Sure, we could throw all that cooperation away and say "tough, you need to use up-to-the-minute rust, if that's a problem for distro packagers we don't care". But we want to be a good citizen in the traditional distro packaging world, as we have been up til now. Not every open source project will want or need to cater to that, but I think for us it matters.
That doesn't mean that we always do the thing that is simplest for distros (that would probably be "don't use Rust at all"); but it does mean that we take distro pain into account as a factor when we're weighing up tradeoffs about what we do.
To be clear. I’m not criticizing the position the QEMU project is in. I recognize you have to work with Debian here. I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
Out of curiosity though, have you explored having your own deb repo instead? I would trust QEMU-delivered security fixes on mainline far more than the Debian maintainers to backport patches.
I think that trust would be somewhat misplaced -- QEMU has historically not made particularly timely security fixes either on mainline or on branches. To the extent that our stable-branch situation is better today than it was some years ago, that is entirely because the person who does the downstream Debian packaging stepped up to do a lot more backporting work and stable-branch maintenance and releases. (I'm very grateful for that effort -- I think it's good for the project to have those stable branch releases but I certainly don't have time myself to do that work.)
As an upstream project, we really don't want to be in the business of making, providing and supporting binary releases. We just don't have the volunteer effort available and willing to do that work. It's much easier for us to stick to making source releases, and delegate the job of providing binaries to our downstreams.
These two statements to me seem contradictory:
> QEMU has historically not made particularly timely security fixes either on mainline or on branches
> It's much easier for us to stick to making source releases, and delegate the job of providing binaries to our downstreams
Am I correct that this is essentially saying "we're going to do a snapshot of the software periodically but end users are responsible for applying patches that are maintained by other users as part of building"? Where do these security patches come from and how do non-Debian distros pick them up? Are Arch maintainers in constant contact with Debian maintainers for security issues to know to apply those patches & rebuild?
Security patches are usually developed by upstream devs and get applied to mainline fairly promptly[1], but you don't want to run head-of-git in production. If you run a distro QEMU then the distro maintainers backport security fixes to whatever QEMU they're currently shipping and produce new packages. None of this is particularly QEMU specific. There's a whole infrastructure of security mailing lists and disclosure policies for people to tell distros about security bugs and patches, so if you're a distro you're going to be in contact with that and can get a headsup before public disclosure.
[1] and also to stable branches, but not day-of-cve-announcement level of urgency.
Sure, but then why does the mainline branch need to worry about supporting the Rust that's bundled with the last stable Debian release? By definition that's not going into a distro (or the distro is building mainline with Rust's latest release anyway).
Is it a precautionary concern that backporting patches gets more complicated if the vuln is in Rust code?
But then again Rust code isn't even compiled by default, so I guess I'm not sure why you're bothering to support old versions of the toolchain in mainline, at least this early in the development process. Certainly not a two-year-old toolchain.
We already make an exception in that we don't support Debian bullseye (which is supported by the rest of QEMU until the April 2025 release), but not supporting Debian stable at all seemed too much.
That said we will probably switch to Debian rustc-web soon, and bump the lower limit to 1.75 or so.
> I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
What Debian has is not a "stranglehold" but an ideology, and Debian continues to matter to (some) upstream projects because lots of users identify with Debian's hyperconservative, noncommercial ideology.
Your complaint is basically, "it's too bad that the userbase not sharing my values is large enough to matter".
> What Debian has is not a "stranglehold" but an ideology, and Debian continues to matter to (some) upstream projects because lots of users identify with Debian's hyperconservative, noncommercial ideology.
> Your complaint is basically, "it's too bad that the userbase not sharing my values is large enough to matter".
Arch and rolling releases have about the same market share as Debian. Indeed, ironically, Debian's widespread adoption is seen primarily in the enterprise space, where its free-as-in-beer nature and peer adoption are a signal that it's a suitable free (as in beer) alternative to RedHat. Without Ubuntu's popularity a while back making Debian not so crazy an idea, I think the "Debian" philosophy would not have anywhere near the adoption we see in commercial environments.
> I’m more frustrated that Debian has such a stranglehold on packaging decisions and it effectively refuses to experiment or innovate on that in any way.
Does Debian have a stranglehold? AFAIK every other distro does the same thing, and all of them for good reasons.
Ubuntu and Arch are at about equal market share penetration for the desktop from what I researched, with "other" and Steam Deck being the main dominant categories. So ignoring corp fleet deployments, I'd say Arch and NixOS have stolen quite a bit of market share from Debian-based systems in terms of end-user preference. But yes, Debian does still have a stranglehold because corp $ are behind Debian-style deployments.
> As for Debian itself, for toolchains it really should do a better job backporting more recent versions of toolchains (not just Rust), or at least making them available to be installed.
Most toolchains don't have as much churn as Rust.
Disagree. No stable release of Debian today supports C++23, which is coming up on two years old at this point (this corresponds to a major Rust edition, not the minor stuff they publish on an ongoing basis every month).
Java in Bookworm installs JDK 17, which is three years old at this point. Java itself is on 23, with 21 being an LTS release.
This means that upstream users intentionally maintain an old toolchain just to support Debian packaging or maintain their own deb repo with newer tools.
You're confusing cause and effect. People aren't migrating to newer toolchains because Debian packaging lags so badly, not because there aren't improvements projects would love to otherwise use.
> Most toolchains don't have as much churn as Rust.
What churn? A release every 6 months? Unlike many other toolchains (I count Node.js and co. here), Rust only needs one toolchain, because the latest rustc is always able to compile older Rust code.
Now compare rust releases to this: https://gcc.gnu.org/releases.html
Rust’s release cadence is 6 weeks not 6 months.
The goal of non-rolling release distros is to have a predefined set of dependencies, which the distro maintains and fixes if necessary.
If Rust decides that it no longer supports compiling on x86, or if it starts depending on a version of LLVM that no longer runs on a supported architecture, Debian must fix that for their users. That leaves the curl2bash installers that are popular with fast-moving tools and languages useless for long-term stability. The same goes for crates, which can be pulled at any time for almost any reason and break compiles.
Then there are other setups distros can choose to support, like having update servers/package repositories for updating servers that aren't allowed to connect to the internet or are even air gapped. You can't save rustup to a DVD, but you can take a copy of the Debian repository and update air gapped servers in a way that leaves a permanent trace (as long as you archive the disks).
Not all distros have this problem. In theory the Rust people could set up an apt/repository Debian users can add to get the best features of a package manager and the latest version, and distros like Arch/Gentoo don't bother with the stability guarantees most distros have.
Qemu can ignore these requirements and opt into not being packaged into distros like Debian or Ubuntu or Fedora or RHEL or Oracle Linux. That'd cost them a lot of contributions from those platforms, though, and may cause a risk of the project being forked (or even worse, end up in the ffmpeg/avconv situation).
> If Rust decides that it no longer supports compiling on x86, or if it starts depending on a version of LLVM that no longer runs on a supported architecture
Then you stop publishing new versions of the toolchain for x86 releases of the distro? I fail to see the problem. Nothing prevents you from not making a newer version of the toolchain available. Indeed, that's the default.
> The same goes for crates, which can be pulled at any time for almost any reason and break compiles.
I never said you have to extend this by making all versions of crates available. You can freeze the crates using the same mechanism to redirect Cargo as they do today.
I'm going to ignore the rustup stuff, as I wasn't proposing rustup be used for building the base Debian image itself - that was a comment more about the environment QEMU itself can recommend to people building it, leaving it to the distros to solve their own packaging problems.
Well, at least a couple of years ago, one surprising thing I discovered with rustup was that unlike distros that have a clean install/uninstall, rustup was rm -rf * ing everything in the place it was installed. There was an open bug on it and multiple complaints. I, who needed it for various system builds, had innocently rustup'd to /usr/local. On uninstall, it wiped all of /usr/local. I had backups on that machine, but it was still an unpleasant experience and I did lose a few not-too-important scripts.
I don't know if others have run into the same thing, but it's why I'd trust Gentoo's packaging more.
Because QEMU wants their thing to be packaged in various distros. Those distros don't allow packages to bring their own tool chain (for good reasons).
My point is there should be a discussion about how to make newer toolchains available to end users. This has nothing to do with packaging within the distro, and it's a non-issue, since whatever "tip of tree" is for QEMU is what would get frozen as the "blessed" version, and that could use whatever tip of the toolchain it needed, since that's likely the same version that Debian would freeze.
I find it funny and sad at the same time that an installer for a "safe" programming language teaches people to download a shell script from a website and run it. What a farce.
With cert pinning, CT, and other advancements in transport security, I don't see a huge fundamental difference between this and adding a random apt repository & doing an apt install.
You should also not add random apt repositories from the internet. But there is still a major difference in terms of the implications for user education.
[flagged]
By "the Rust community" here, you mean the author of the mail, one of the biggest long-standing developers of QEMU since before Rust was a thing? You're complaining that the developers of QEMU themselves are interested in adopting Rust because they think it'll work well for them?
This kind of thing reminds me of the backlash against systemd, where detractors felt like it was being imposed on the distributions that were for the most part adopting it because they liked what it offered.
systemd has tried to alter the kernel in breaking ways for its own ends plenty of times. It's easy enough to not use, and plenty of distributions do so with some effort, but it's not as if systemd has had zero consequences for the rest of the ecosystem.
> one of the biggest long-standing developers
How many developers does the project have? What is your metric for "biggest?" Just total number of contributions?
> You're complaining that the developers of QEMU themselves
Yep. Do any of them work for any commercial companies? Or are we ignoring that to make an argument of appeals? If they all left, would QEMU have zero developers?
> because they think it'll work well for them?
Looking at the set of challenges presented at the bottom of the email and their wiki I'm not sure this idea is well founded. There are more wacky compromises listed than there are good ideas.
If you're just going to push hacks upstream and maintain vendor forks downstream, then why wouldn't you just start a new project and bring in whatever bits of C code you need until you can replace them?
I've yet to see these efforts actually reduce baggage or show any hope of returns on investment inside of a decade of insane effort, which always seems to hamper any further feature or bugfix development, out of fear of interrupting other agonizing "works in progress."
Some of us think this is clearly goofy and can't help but comment on it.
> There are more wacky compromises listed than there are good ideas.
I am happy to learn about others' opinions, and especially contrary opinions; otherwise I would not have made a public post. I am myself not 100% sure that the idea will be successful - it would be stupid to think it certainly will - but the good thing is that the existing C code will be unaffected (or improved; for example https://lore.kernel.org/qemu-devel/20241129180326.722436-1-p... was found while looking at the code to write the corresponding Rust bindings).
Sometimes you have to try crazy things. One perhaps controversial change I headed in QEMU was to introduce the Meson build system. You may wonder what is wrong with me. I am happy to say that since then I have almost never had to ask a question about the build system, which was a very common occurrence before, and it has actually simplified the implementation of features such as custom device configuration, improved module loading, entitlement support for macOS, relocatable installs, autogenerated parsing of the configure command line, and improvements to cross-compilation - while slightly reducing the number of lines of code compared to before. So I disagree that exploratory work is "a decade of insane effort".
So I am curious, what are the wacky compromises?
What leg do you think you have to stand on when it comes to Rust in Linux? Linus Torvalds is fine with the Kernel experimenting with Rust.
What leg do you think you have to stand on to debate in such a manner? If you want to challenge the ideas, here on _hacker news_, then that would obviously be welcome, if all you want to do is appeal to authority in an effort to declare the conversation as invalid then you've added nothing and behaved disrespectfully.
Should call this site "embarrassed hacker propaganda." At least then you'd "have a leg to stand on." What a gross tactic you've relied on here.
just wait until you see the real conspiracy
[flagged]
No need; Linux will extinguish by itself when the leadership that created it in the first place is long gone.
It will certainly continue to exist in dozens of forks, although none that will continue their vision.
[flagged]
Even if you took the whole safety aspect away, why should I start a new project in Rust as opposed to C or C++?
Rust has modern tooling, great IDE support with a language server, nice dependency management via Cargo, and I could go on. Writing Rust makes it IMO much easier to structure your code and project as well.
I just recently had to build a medium-sized C project. The Makefile alone was at least 700 lines long. In my opinion, I also just don't see younger people and newcomers putting up with that when there's a shiny new alternative. And if you want a project to sustain itself, you have to attract new people and potential maintainers.
Other than that, I agree that the rewriting is mostly not warranted. The thing is, it is about people and not really about code or programming language. If your entire team knows C well and will be able to maintain that project for years to come, it'd be weird to rewrite it all.
Read his comment. This guy literally writes exploits for C/C++ software. Of course he wants you to keep using memory-unsafe languages, otherwise his business dries up!
LMAO. Alright you made me laugh you got me there. My secret plot fam…
> why should I start a new project
You should not. Simple as that. Odds are very good there already is a project that does what you want, and you should join or buy that. If you start a new project, you have a lot of effort ahead just to be as good as the previous one, and those projects are not standing still.
Okay, maybe it makes sense to start a new game. However, you should use an existing game engine, not build your own. A few other things like that exist.
C++'s metaprogramming is still better. Rust needs specialization, const expressions, and a bunch of other things before it can be a full replacement for C++.
You’re right, but most people who are still using C++ over rust aren’t doing so because of better metaprogramming. It’s a relatively minor advantage, and on the other side, the advantages of Rust over C++ are massive.
Say you have to develop an embedded project. Try dealing with Rust and its dependency hell, as everything you do requires a million different packages... Or, develop me a driver that needs DMA... How about a kernel allocator? Want to do that? Sure just wrap everything in "Unsafe"... So what is the point? Furthermore, Rust programs ironically link to libc.
You're definitely not going to be linking to libc in a no_std environment, which is going to cover much (though not all) of embedded development. That will also remove some of the most bloated library dependencies from even being available to you in the first place, and of the rest, it will debloat a lot of them.
Even if your embedded development is targeting a hosted platform with libstd available, whether you link libc will depend on whether you use libc. Thanks to its stable syscall ABI, you don't have to link anything on Linux. But most other similarly situated operating systems require you to link something, whether it be USER32.DLL/KERNEL32.DLL on Windows, libSystem*.dylib on macOS, or yes, libc*.so on (most of?) the BSDs and other UNIX systems. And when libc is not required by the platform, but you do need some of its functionality (either out of convenience or cross-platform compatibility), you can statically link it (including MSVCRT on Windows).
All of that HAS been done in Rust. It's not that bad. It's actually quite good in many ways. #![no_std] is fairly unique to Rust.
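A minimal freestanding sketch (assuming a bare-metal target whose linker script points the reset vector at _start):

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // no_std code must supply its own panic handler.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        // Device init and the main loop would go here.
        loop {}
    }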
Yes, it has been and can be done with Rust. But usually by writing "unsafe" code, so what the hell is the point if the majority of the code isn't gonna have the Rust memory guarantees it would normally have? Furthermore, not all MCUs have a Rust compiler or toolchain to use. C is the de facto standard. Not saying it cannot be done, just that I am not sure it's worth the extra effort and overhead.
Entire kernels have been written in Rust with less than 10% unsafe code. The entire line of argument that "the majority of code" needs to be unsafe in such contexts is BS.
Especially when the "Rust culture" suggests building safe abstractions on top of unsafe building blocks, which tends to keep the unsafe code pretty well-contained to a small section of the codebase. There's plenty of Rust frameworks for microcontroller programming (embassy, etc.) that don't involve much if any unsafe Rust at all.
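The pattern in miniature, with a made-up register address:

    // Hypothetical MMIO status register on some imaginary MCU.
    const STATUS_REG: *mut u32 = 0x4000_0000 as *mut u32;

    /// Safe wrapper: callers get an aligned, volatile access without
    /// writing unsafe themselves, so the unsafety stays in one audited spot.
    fn read_status() -> u32 {
        // SAFETY: STATUS_REG is a valid, aligned MMIO address on this
        // hypothetical device and is readable at any time.
        unsafe { STATUS_REG.read_volatile() }
    }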
Can you link me to a viable/real world kernel rewritten in Rust? One that supports modern hardware? I am not talking about implementing a toy kernel here.
This is closer to an RTOS, but since we're also talking about embedded, we wrote https://hubris.oxide.computer/ for use in our products.
Kernel is about 3% unsafe code last I checked, which was just a few months ago.
https://www.redox-os.org/
I'm currently working on an embedded project in Rust. The most annoying part is when the project is shared with a non-embedded part, but otherwise it's a breeze to work on.
Had no issues using DMA or anything else.
> "Sure just wrap everything in "Unsafe"... So what is the point?"
https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html : "You can take five actions in unsafe Rust that you can’t in safe Rust .. it’s important to understand that unsafe doesn’t turn off the borrow checker or disable any other of Rust’s safety checks: if you use a reference in unsafe code, it will still be checked .. by requiring these five unsafe operations to be inside blocks annotated with unsafe you’ll know that any errors related to memory safety must be within an unsafe block. Keep unsafe blocks small; you’ll be thankful later when you investigate memory bugs."
You are confusing best practice with what is done all too often. Unsafe should be small blocks, but I've seen people put unsafe on everything even though it isn't needed, making it hard to find where it actually is needed. I'm not a Rust programmer, but I'm led to believe that those people then do things that need unsafe even when a safe option not only exists but would have been easier to write.
For very low-level stuff (e.g. embedded) you might need a lot of unsafe. For the vast majority of software it’s extremely rare. I worked full time for five years on a Rust project (https://github.com/MaterializeInc/materialize) and anecdotally, unsafe code was much less than 1% of the codebase.
Clangd just works, and quite nicely, for me in Emacs; I didn't even need to install any external package besides clang-tools from my distro's repository. I don't get this argument at all. The dependency-management advantage is probably cool until you realize you can basically only trust distro maintainers to provide long-term support for dependencies, and the crates ecosystem makes that much harder for them, which makes programs less likely to remain secure in the long run. Who's going to fix a heap of Rust abandonware when an exploit is found in its transitive dependencies?
The fact that rust makes it harder to write memory corruption bugs is only one of its many advantages. It’s genuinely a lot nicer and easier to use than comparable languages like C++, and I’d prefer it to that even if I didn’t care about memory corruption at all.
Also, what’s wrong with the syntax? I hear a lot that people find the syntax ugly, but I never understood what’s so fundamentally different about it compared to any other mainstream Algol-derived language.
What are the other advantages exactly, besides the toolchain? A lot of people are using Rust where they could have just used Go or another language that is memory safe and actually productive. Maybe I am just being a hater because I grew up on C/C++, but I know for a fact "Rustaceans" are getting out of control. Approaching zealot territory for sure. The time people spend fighting the Rust compiler for a project, they could have just written secure C... That is just my personal opinion. I am not saying Rust shouldn't exist. I am just saying it isn't some big universal answer to all security and systems programming issues.
> What are the other advantages exactly, besides the toolchain?
It's a very productive language once you're experienced.
I'm one of those people who (initially) didn't care about memory safety at all and just wanted a more productive C++, and for me it delivers on that perfectly. Nowadays I even use Rust for scripting, because with the right libraries it's just as productive for me as Ruby and Python can be while being orders of magnitude faster.
I always find it funny when I see the "fighting with the borrow checker" meme, or as you say, "people spend fighting the Rust compiler for a project", where people complain about how extremely unproductive Rust is. This is very much true if you're a beginner. It's a language with a very high skill ceiling (similar to C++).
"as productive as Ruby or Python"
So you basically never have to worry about lifetimes or memory management or the borrow checker? Because that would be a prerequisite for it to be as productive as Python.
I'd love to see a seasoned Python developer and a seasoned Rust developer comparing the time they spend to solve e.g. Advent of Code. I bet the Python dev would solve it at least ten times faster (developer time, not execution time).
> So you basically never have to worry about lifetimes or memory management or the borrow checker?
Yes. Once you're experienced enough you naturally start writing code which satisfies the borrow checker and you never really have to think about it. At least that's how it is for me.
> Because that would be a prerequisite for it to be as productive as Python.
It's not that hard to be more productive than Python for a lot of tasks, simply because Python isn't actually the most productive language in a lot of cases, it's just the most well known/most popular. (:
I do a lot of data processing in my scripts, and for many years my default was to use Ruby. The nice thing about Ruby is that things which take 3~4 lines of Python usually only take 1 line of Ruby and are significantly more convenient to do (e.g. it has proper map/filter/etc., nice multiline lambdas, regex matching is integrated into the language, shelling out to other processes is convenient and easy, etc.), which translates into significant productivity savings when you just want to whip up a script as fast as possible.
So some time ago I started writing my scripts in Rust instead of Ruby (because I often deal with multi-gigabyte files, so the slowness of Ruby started to be a problem; otherwise I would have kept using Ruby). And I've made myself a small custom library that essentially allows me to use Ruby-like APIs in Rust, and it's remarkable how well that actually worked. I can essentially write Ruby-flavored Rust, with Ruby-like productivity, but get Rust-like performance.
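To give a flavor of the idea, a made-up miniature version (not my actual library, just the shape of it) could be a blanket extension trait:

// Invented names, purely for illustration.
trait RubyIsh: Iterator + Sized {
    // Ruby's `map` returns an array directly; mimic that by collecting.
    fn map_v<B, F: FnMut(Self::Item) -> B>(self, f: F) -> Vec<B> {
        self.map(f).collect()
    }
}

impl<I: Iterator> RubyIsh for I {}

fn main() {
    // One-liner, Ruby style: (1..5).map { |n| n * 2 }
    let doubled = (1..=5).map_v(|n| n * 2);
    println!("{doubled:?}"); // [2, 4, 6, 8, 10]
}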
In C++ I have learned the patterns, so I rarely need to worry about lifetimes: everything is either on the stack or in a unique_ptr. Even when I need to take a raw pointer, I know I don't own it, and my project has clear lifetime rules, so I normally won't run into issues.
The above is not perfect. I do sometimes mess up, but it is rare, and since this is C++, I don't get tools or the language helping me.
> So you basically never have to worry about lifetimes or memory management or the borrow checker? Because that would be a prerequisite for it to be as productive as Python.
To be more productive than Python? I almost never have to worry about lifetimes or the borrow checker. And even when I do, I'm still more productive.
I wrote a comment a while back on this topic; someone asked for a comparison between a little Python script and Rust. You can see both versions linked here: https://news.ycombinator.com/item?id=40089906
> The time people spend fighting the Rust compiler for a project
To be clear, this isn't time you've spent yourself. You're saying this based on hearsay that other people are struggling with this.
Not that I'd convince you, but this is mostly an issue that beginners face. Folks who can get past that hump enjoy a plateau of productivity.
And really it's not so much fighting the compiler as it is understanding your code better, in ways that help you even when writing C.
The times you do fight the compiler, it will usually suggest the right fix (add a clone, add &, add *, import a type or trait).
That's been my experience. The troubles I've had with Rust were when I was trying to do things the way I would've in C, and either rustc or cargo clippy told me that my idea is bad and there's a better way. I feel like I've learned a lot about better coding in general from it.
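For instance, a typical case where the compiler points straight at the fix (illustrative; exact diagnostic wording varies by rustc version):

fn takes_str(s: &str) {
    println!("{s}");
}

fn main() {
    let owned = String::from("hi");
    // takes_str(owned);
    // ^ error[E0308]: mismatched types; expected `&str`, found `String`,
    //   with the hint: consider borrowing here: `&owned`
    takes_str(&owned); // applying the suggested fix compiles
}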
> The time people spend fighting the Rust compiler for a project, they could have just written secure C
I simply don't believe this anymore, based on the number of buffer-overflow and memory-corruption CVEs coming out of even mature C codebases every year.
CVEs are being assigned recklessly these days. Most of them aren't even security bugs, or cannot be exploited. Be wary of CVEs and their practical utility.
Also this is a good read from AFL creator lcamtuf: https://lcamtuf.substack.com/p/a-reactionary-take-on-memory-...
The US government, Linux, Google, Mozilla and many more see value in memory safe languages, yet I see so many people, like you, saying "No, they're all wrong. Get good.". I just don't get it.
What insights do you have that makes you more qualified than all these organizations combined? What are they all missing?
I have been doing government work for a while and understand why they respond that way. Nation states can in fact develop very powerful kill chains. My entire startup was created and sold based on a vehicle kill chain, but that took us a year and a half to build… Meanwhile, some kid in his mom's basement uses leaked credentials and sprays a network, or phishes, and boom: major breach…
Go just feels a lot less productive to me than Rust. It feels very low level and boilerplate-intensive like C even though it has a garbage collector.
A team can learn and start using Golang in a week. That is in fact what is powering a lot of companies right now. Golang has even better memory safety guarantees than Rust, and you don't really need to worry about memory management at all... Furthermore, it's compiled statically and more suitable for distribution and horizontal scaling. I am not sure how quickly a team can become productive in Rust, but I am willing to bet it would take way, way longer to get off the ground. That being said, using an LLM might help with that, but then you would still have code people don't understand, and that becomes technical debt... Maybe I am just old and grumpy... I am learning Rust myself and I just don't understand why it's being pushed so hard. I think Zig should be pushed for systems programming if anything.
> Golang has even better memory safety guarantees than Rust
I don't think that's true at all. For one, Go has data races, which lead to undefined behavior and memory corruption. For example, appending to the same slice from multiple threads will corrupt its metadata and can lead to out-of-bounds reads and writes.
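For contrast, the closest Rust translation of that slice race is rejected at compile time; this sketch deliberately does not compile:

use std::thread;

fn main() {
    let mut v = vec![1, 2, 3];
    let handle = thread::spawn(|| {
        // Rejected: the closure would borrow `v` across threads and may
        // outlive `main`. Adding `move` only shifts the error to the
        // push below ("borrow of moved value").
        v.push(4);
    });
    v.push(5);
    handle.join().unwrap();
}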
> A team can learn and start using Golang in a week
True, but why should we be optimizing for the first week experience? Your career lasts 40 years.
> Golang has even better memory safety guarantees than Rust
That is not true. What specific example did you have in mind? An example of something Rust can enforce that Go can't: not mutating something that's shared between threads without acquiring the proper lock.
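A minimal sketch of what that enforcement looks like in practice; remove the Mutex here and the program stops compiling:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The only way to reach the shared value is through the lock.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // prints 4
}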
> Furthermore, it's compiled statically and more suitable for distribution and horizontal scaling.
Rust can be statically linked just like Go can. Not sure what else you think makes it less suitable for distribution and horizontal scaling. There are certainly lots of companies distributing Rust programs and horizontally scaling them so this seems empirically false.
> I am learning Rust myself and I just don’t understand why it’s being pushed so hard.
Because it has a lot of nice features that make a lot of people like it: memory safety without GC, prevention of data races, algebraic data types, etc. No other mainstream compiled language has this set of features. There’s no conspiracy to “push” Rust. The push is organic. People just like it.
There is Zig and a couple of others. Rust borrows those features from functional languages; they aren’t new innovations by any means.
> "I know for a fact "Rustaceans" are getting out of control"
That isn't a fact, that's an opinion. A pearl-clutching, panicky, fact-free opinion framed in terms of "control", which raises questions about who you think should be "controlling" those uppity people who are doing things you don't like.
Seriously - an explosion can be out of control, but other people aren't supposed to be in your control in the first place, right? That's basic freedoms and so on. How is your position any different from any other entrenched social/power structure attempting to control people who want things to change?
You’re right, that is an opinion. I said it that way to emphasize the opinion…
> Maybe I am just being a hater because I grew up on C/C++, but I know for a fact "Rustaceans" are getting out of control. Approaching zealot territory for sure.
This is a perfectly level-headed submission about trying out Rust in some corner of the Linux Kernel. Which you then take as an opportunity to go on this rant... I’ll let people read my conclusion between the lines.
> The time people spend fighting the Rust compiler for a project, they could have just written secure C... That is just my personal opinion.
Empirical evidence from the largest software firms in the world, which research objective metrics on software defects, shows that there is no such thing as secure C and that Rust is slightly more productive.
> Binary exploitation is my field of expertise.
> Creating an entire language to prevent such bugs is excessive, especially given the current state of software security.
I'm having a hard time reconciling those two statements. I can imagine any one person holding one or the other of those opinions, but not both at the same time.
> Meanwhile, the vast majority of security issues (around 98%) come from basic human errors, such as reusing passwords or falling for phishing, not sophisticated zero-days exploiting memory corruption.
We're not limited to doing just one thing, you know. Some people are working on improving the authentication space. Others are working on making it easier for developers to write safe code. Both of those can progress at the same time.
lcamtuf, the creator of AFL, shares the same opinion… I linked to his Substack in my original comment. Give that a read. My main point is that overemphasis on memory safety doesn't seem to pay off when there is lower-hanging fruit on most large networks. Obviously I want memory safety, but that's irrelevant when easier attack vectors are still open.
I read it. He's not saying what you seem to think he's saying.
That aside, any solution that involves tooling people have to opt into is doomed to fail. We've seen this a thousand times throughout history: if you require people to go out of their way to choose the safer option, they won't do it. It doesn't matter how great the tools are. How cheap they are. How easy they are. If they're not part of the standard pipeline everyone gets unless they deliberately modify it, they won't be used.
You've said a few times that C/C++ have tools that get you most of the same controls as Rust. First, they don't. But even if they did, those tools aren't used by default. Rust makes everyone write code that satisfies the borrow checker whether they want to or not. There's no `-Wborrow-mistakes` flag we have to set. There's not even a `-Wno-borrow-mistakes` flag we can turn off if we get tired of the error message. If you write Rust, borrows are safe, and we don't get a say in it.
You can write C code that satisfies Coverity (where I used to work) and `-Wall` and Valgrind and so on, but the person next to you might not. And as long as it's easier to write unchecked C than to run it through a checker before validation, it won't be as safe as Rust at the things Rust checks. It can't be. Anything that depends on humans to make the safe choice is dead before it starts.
I agree the "rewrite everything in Rust" meme is overblown, and the Rust evangelists can be insufferable. I also agree that, at a glance, Rust syntax is (in my opinion) quite ugly. Once you start using the language it makes more sense why things are the way they are, but it was off-putting to me at first and I still think it's ugly now.
However I don't think adding Rust support, rewriting old, critical components, using Rust for a new project, etc. are bad things. If someone volunteers to rewrite critical components in a way that eliminates (or significantly reduces) one of the most exploited and damaging classes of bugs, while also adding support for said way, then I don't see the problem. If someone opens a GitHub issue on your C++ project telling you to rewrite it in Rust, just close it and move on.
I understand that. It is just that they end up writing Rust code, and then you go and see it's linked to libc or filled with "unsafe". It isn't too difficult to write correct, safe C++. You can enable compiler settings that are similar to what Rust does.
On the critical bugs issue... Exploiting a lot of these memory bugs takes a whole new level of effort. We often need to chain together three or more bugs and land the exploit reliably. This was much easier to do circa 2001-2016, but all the mitigations in place have really raised the bar. From my experience providing exploits to red teams that pen-test Fortune 500 companies (my previous job at iDefense/FusionX), we rarely needed to use any advanced "sexy kill chains" (like a Chrome or IE exploit) because phishing and other means of network entry were far more reliable and much easier; things have gotten far, far more difficult. My point is that the study Google released showed the bugs are dangerous and disastrous, yes, but they aren't the most prevalent risk in the real world. I am rambling a bit; sorry for the poor grammar, I'm typing on a phone.
> "The current trend of rewriting everything in Rust is out of control and misguided... EDIT: lcamtuf does a great job explaining my perspective on Rust push https://lcamtuf.substack.com/p/a-reactionary-take-on-memory-... "
The current trend is people rewriting Unix command-line utilities in Rust, as a hobby. Nobody is rewriting Adobe Photoshop or Oracle in Rust.
Can you name some projects which you class as "out of control"?
[I don't use Rust, so this isn't "you should RiiR". Adobe isn't rewriting Photoshop in Rust because of some Twitter/tech-news hype. And if Adobe does rewrite Photoshop in Rust over the coming decades because of government guidance, is that really "out of control"?]
Alright, so I exaggerated a bit for sure. But I am seeing way too much “Rust is the silver bullet” nonsense…
> Exploiting mature software like Adobe Reader is already incredibly challenging due to its hardened defenses.
Is this a troll? Isn’t Adobe Reader one of the easiest pieces of software to exploit because it enables risky features by default and lacks proper sandboxing? Just searching for “adobe reader security vulnerability” brings up a critical software update for CVE-2023-26369 as a top hit which is:
> Acrobat Reader versions 23.003.20284 (and earlier), 20.005.30516 (and earlier) and 20.005.30514 (and earlier) are affected by an out-of-bounds write vulnerability that could result in arbitrary code execution in the context of the current user. Exploitation of this issue requires user interaction in that a victim must open a malicious file.
> The claim, often cited from a Google study, that memory corruption bugs pose the largest risk feels exaggerated in this context
Actually, this comes from analysis of NIST data, which tries to track vulnerabilities; memory safety resulting in arbitrary code execution consistently shows up as the number one issue for C/C++, despite being a non-issue for most other languages. Indeed, here are the vulnerabilities for Adobe Reader, and they are dominated by memory safety issues: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Adobe+reade...
> Don't even get me started on Rust syntax... I am convinced the syntax was intentionally developed to troll us all with the ugliest syntax on the planet... I'll stop now before I begin ranting forever.
Rust shows what it takes for a language to achieve compile-time memory safety (and the same mechanisms also provide thread safety) with the performance profile of C/C++. The syntax actually isn't that hard to get used to and learn, and I'd argue it's easier for beginners, since incorrect code will throw an error at compile time and explain what's going wrong instead of crashing at run time. On the static-vs-dynamic verification spectrum, it leans further toward static verification. Indeed, from that perspective, why are you even coding in C++? Use assembly if C++ is too high level and you don't like syntax getting in your way, or use JavaScript so that you can write code with almost no static checking. In practice, Rust is way nicer to write in, with a much more mature and reliable build system that works the same everywhere, and a rich ecosystem of tooling that is trivially accessible. The standard library is high performance, with a lot of things available that C++ could only dream of, while C++ is still arguing in the standards body about how to make breaking changes to the language, a problem that Rust developers don't even think about (e.g. Rust hashmaps are faster and higher quality than C++ maps).
Developing a full kill chain for Adobe usually requires chaining together several bugs. "CVEs" are getting ridiculous. Prove to me it is easy and go win Pwn2Own, or you can do what I do: sell it to government contractors for a hefty price...
It doesn’t matter whether it's easy or hard. It's possible, and then we're just talking about the $ required to purchase it on the black market (government contractors don't really pay well, unless I'm misinformed). And a vulnerability remains a vulnerability until your victims patch it.
Said another way: based on CVEs, most of attackers' focus is on memory safety vulnerabilities, which means that regardless of the price they fetch, these are still the cheapest and most valuable exploits to uncover in terms of exploit power per dollar spent.
> I am convinced the syntax was intentionally developed to troll us all with the ugliest syntax on the planet...
Not a troll. Just a bit too concessionary to C++ developers. :)