It's great that bounds checking in C++ finally happened (mostly) by default.
The only thing that's less great is that this got so many fewer upvotes than all the Safe-C++ languages that never really had a chance of getting into production in old codebases.
Interesting how C++ is still improving; it seems like changes of this kind may rival at least some of the Rust use cases. Time will tell.
See also the "lite assertions" mode @ https://gcc.gnu.org/wiki/LibstdcxxDebugMode for libstdc++; however, these are less well documented, and it's less clear what performance impact these measures are expected to have.
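To make the distinction concrete, here is a portable sketch of what "checked access" means (the helper name `checked_access_throws` is mine, for illustration). `at()` has always been the checked accessor; the hardening/assertion modes extend a (terminating) check to `operator[]` as well:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// at() is the always-checked access: out-of-range throws std::out_of_range.
// Under -D_GLIBCXX_ASSERTIONS (libstdc++) or a libc++ hardening mode,
// an out-of-range v[i] also traps (aborts) instead of being undefined.
bool checked_access_throws(const std::vector<int>& v, std::size_t i) {
    try {
        (void)v.at(i);   // bounds-checked by specification
        return false;
    } catch (const std::out_of_range&) {
        return true;
    }
}
```

The difference in practice: `at()` gives a catchable exception, while the hardened `operator[]` check terminates the process, which is the "crash early" behavior the article argues for.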
> those that lead to undefined behavior but aren't security-critical.
Once again C++ people imagining into existence Undefined Behaviour which isn't Security Critical as if somehow that's a thing.
Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the hardening work itself, which wasn't "at scale" in any special way.
Of course there is undefined behavior that isn't security critical. Hell, most bugs aren't security critical. In fact, most software isn't security critical at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people aren't.
The author of TFA actually makes another related assumption:
> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.
Not at all? Most memory-safety issues will never even show up on the radar, while with "hardening" you've converted all of them into crashes that certainly will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article is failing to criticize.
>Not at all? Most memory-safety issues will never even show up on the radar
Citation needed? There are all sorts of problems that don't "show up" but are bad. Obvious historical examples would be Heartbleed and Cloudbleed, or this ancient GTA bug [1].
1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
Most people around here are too busy evangelizing Rust or some web framework.
Most people around here don’t have any reason to have strong opinions about safety-critical code.
Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.
Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.
Most people don’t work at a company that either needs or can afford a safety-critical toolchain sufficient for formal, certified verification.
The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely. This subtle point seems to have been lost a long time ago with “*end” developers trying to sell ads, or whatever.
I appreciate your insights about formal verification but they are irrelevant. Notice that GP was talking about security-critical and you substituted it for safety-critical. Your average web app can have security-critical issues but they probably won’t have safety-critical issues. Let’s say through a memory safety vulnerability your web app allowed anyone to run shell commands on your server; that’s a security-critical issue. But the compromise of your server won’t result in anyone being in danger, so it’s not a safety-critical issue.
Safety-critical systems aren’t sitting on a network where you can ping them. I didn’t move the goalposts.
nooooo you don't understand, safety is the most important thing ever for every application, and everything else should be deprioritized compared to that!!!
std::optional is unsafe in idiomatic use cases? I'd like to challenge that.
Seems like the daily anti-C++ post.
Two of the authors are libc++ maintainers and members of the committee; it would be pretty odd if they were anti-C++.
I’m very much pro-C++, but anti C++’s direction.
> optional is unsafe in idiomatic use cases? I’d like to challenge that.
Optional is by default unsafe: the above code is UB. But using the deref op is deliberately unsafe, and it is never used without a check in practice. This would pass neither a review nor static analysis.
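The "check in practice" idiom the comment describes can be sketched like this (the helper names `parse_flag` and `read_or_default` are hypothetical, for illustration):

```cpp
#include <optional>

// Hypothetical producer that may or may not yield a value.
std::optional<int> parse_flag(bool ok) {
    if (ok) return 42;
    return std::nullopt;
}

// The checked idiom: never dereference without testing engagement.
// Dereferencing a disengaged optional with *o is undefined behaviour;
// guarding with the boolean conversion makes the access well-defined.
int read_or_default(const std::optional<int>& o, int fallback) {
    if (o) return *o;       // safe: dominated by the engagement check
    return fallback;        // equivalent to o.value_or(fallback)
}
```

`value_or()` folds the check and the fallback into one call, which is often the tidiest form of the same pattern.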
GP picked the less useful of the two examples. The other one is a use-after-move, which static analysis won't catch beyond trivial cases where the relevant code is inside function scope.
I also agree with them: I am pro-C++ too, but the current standard is a fucking mess. Go and look at modules if you haven't, for example (don't).
> never used without a check in practice
Ho ho ho good one.
> This would neither pass a review, nor static analysis
I beg to differ. Humans are fallible. Static analysis of C++ cannot catch all cases and humans will often accept a change that passes the analyses.
> Static analysis of C++ cannot catch all cases
You're ignoring how static analysis can be made to err on the side of safety rather than permissiveness.
Specifically, for optional dereferencing, static analysis can be made to disallow it unless it can prove the safety.
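For what it's worth, clang-tidy ships a check, `bugprone-unchecked-optional-access`, that implements roughly this conservative policy: flag any dereference whose safety it cannot prove. A sketch of what it accepts and rejects (function names are mine, for illustration):

```cpp
#include <optional>

// What clang-tidy's bugprone-unchecked-optional-access flags vs. accepts.
int risky(std::optional<int> o) {
    return *o;              // flagged: engagement not proven at this point
}

int guarded(std::optional<int> o) {
    if (o.has_value())
        return *o;          // accepted: dominated by a has_value() check
    return 0;
}
```

The flow analysis is intra-procedural, which is exactly the limitation raised upthread: a check performed in a caller, or before a move in another function, is invisible to it.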
That is actually memory safe, as null will always trigger an access violation.
Anyway, safety-checked modes are sufficient for many programs; this article claims otherwise but then contradicts itself by showing that they caught most issues using... safety-checked modes.
It is undefined behavior. You cannot make a claim about what it will always do.
>null will always trigger access violation..
No, it won't. https://gcc.godbolt.org/z/Mz8sqKvad
Oh, my bad, I read that as nullptr. I use a custom optional that does not support such a silly mode as "disengaged".
How is that an optional then?
The problem is not nullopt, but that the client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, like the other guy mentioned above, is that you cannot make any claims about what will happen when you do so because the standard just says "UB". Other languages like Haskell also have things like fromJust, but at least the behaviour is well-defined when the value is Nothing.
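C++ does have a defined-behaviour accessor playing the role of Haskell's well-specified failure: `value()` is required to throw `std::bad_optional_access` on a disengaged optional, where `operator*` is simply UB. A minimal sketch (the helper name is mine):

```cpp
#include <optional>

// value() on a disengaged optional has specified behaviour: it throws
// std::bad_optional_access. Contrast with *empty, which would be UB.
bool value_throws_when_empty() {
    std::optional<int> empty;
    try {
        (void)empty.value();
    } catch (const std::bad_optional_access&) {
        return true;        // this path is guaranteed by the standard
    }
    return false;
}
```

The complaint upthread stands, though: nothing forces callers to pick `value()` over `*`, so the well-defined path is opt-in.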
You didn't read this, did you? https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
It's not a pointer.
They linked directly to https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/ which did exactly what I'd guessed as its example:
> The following code for example, simply returns an uninitialized value:
But that is not idiomatic at all. Idiomatic would be to use .value().
Just a cursory search on GitHub should put this idea to rest. You can do a code search for std::optional and .value() and see that only about 20% of uses of std::optional make use of .value(). The overwhelming majority of uses of std::optional use * to access the value.
Not only is this a silly No True Scotsman argument, but it's also absolute nonsense. It's perfectly idiomatic to use `*some_optional`.
It is discussed in the linked post: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
tl;dr: use-after-move, or dereferencing null.