Good article, but one (very minor) nit I have is with the PizzaOrder example.
The problem they want to address is partial equality when you want to compare orders while ignoring the ordered_at timestamp. To me, the problem is throwing too many unrelated concerns into one struct. Ideally, instead of using destructuring to compare only the specific fields you care about, you'd decompose this into two structs.

I get that this is a toy example meant to illustrate the point; there are certainly more complex cases where there's no clean boundary to split your struct across. But this should be the first tool you reach for.

You have a good point there, that is better. But it is still, well, honestly, wrong. Two orders ordered at different times are just not the same order, and using a typeclass approach to say that they most definitely are is going to come back to bite you.
PartialEq and Eq for PizzaDetails is good. If there is a business function that computes whether or not someone orders the same thing, then that should start by projecting the details.
I do agree that implementing PartialEq on orders in this way is a bad fit. But it is a synthetic example to make a point, so I tried to keep it in the spirit of the original article (while ironically picking nits in the same vein myself).
Yeah, I immediately twitched when I saw the PartialEq implementation. Somebody is going to write code which finds the "correct" order and ends up allowing someone to order the same pizza but get yours, while you have to wait for it to be made and cooked again.
It's not difficult to write a same_details_as() predicate. Then it's obvious to reviewers that this is what we meant, and it discourages weird ad-hoc code that might stop working when PizzaDetails is redefined.
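A minimal sketch of the decomposition and predicate being discussed; the field names here are invented for illustration, not taken from the article:

#[derive(PartialEq, Eq, Clone, Debug)]
struct PizzaDetails {
    size: u8,
    toppings: Vec<String>,
}

#[derive(Clone, Debug)]
struct PizzaOrder {
    details: PizzaDetails,
    ordered_at: std::time::SystemTime,
}

impl PizzaOrder {
    // Equality on the details only, with a name that makes the intent obvious.
    fn same_details_as(&self, other: &PizzaOrder) -> bool {
        self.details == other.details
    }
}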
You can solve this in the general case by implementing the typeclass for the coarser equality relation over an ad-hoc wrapper newtype.
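In Rust terms, that could look something like this, assuming the PizzaOrder/PizzaDetails split sketched above: a borrowed newtype that carries the coarser equality.

struct ByDetails<'a>(&'a PizzaOrder);

impl PartialEq for ByDetails<'_> {
    fn eq(&self, other: &Self) -> bool {
        // Compare only the details, ignoring ordered_at.
        self.0.details == other.0.details
    }
}

// Usage: ByDetails(&a) == ByDetails(&b)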
Well it isn't a good call. This is the kind of code that OOP makes people write.
Decomposing things just to have different equality notions doesn't generalize.
How would you decompose a character string so that you could have a case-insensitive versus sensitive comparison?
:)
> How would you decompose a character string
With a capitalization bit mask of course!
And you can speed up full equality comparisons with a quick cap equality check first.
(That is the how. The when is probably "never". :)
Don't forget to store the locale used for capitalization, too.
Indexing into arrays and vectors is really wise to avoid.
The same day Cloudflare had its unwrap fiasco, I found a bug in my code because of a slice that in certain cases went past the end of a vector. Switched it to use iterators and will definitely be more careful with slices and array indexes in the future.
> Cloudflare had its unwrap fiasco,
Was it a fiasco? Really? The Rust unwrap call is the equivalent of C code like this:

int result = foo(…);
assert(result >= 0);

If that assert tripped, would you blame the assert? Of course not. Or blame C? No. If that assert tripped, it's doing its job by telling you there's a problem in the call to foo().

You can write buggy code in Rust just like you can in any other language.
I think it's because unwrap() seems so unassuming at a glance. If it were called or_panic() instead, I think people would intuit that it's extremely dangerous. I understand that we're not dealing with newbies here, but everyone is still human, and everything we do to reduce mistakes is a good thing.
I'm sure lots of bystanders are surprised to learn what .unwrap() does. But reading the post, I didn't get the impression that anyone at Cloudflare was confused by unwrap's behaviour.
If you read the postmortem, they talk in depth about what the issue really was - which from memory is that their software statically allocated room for 20 rules or something, and their database query unexpectedly returned more than 20 items. I can see the argument for renaming unwrap to unwrap_or_panic. But no alternate spelling of .unwrap() would have saved Cloudflare from their buggy database code.
.or_panic() would be a genuinely better name for .unwrap(). .unwrap() sounds a lot like .unbox(), whatcoulddagowrong?
The point is Rust provides more safety guarantees than C. But unwrap is an escape hatch, one that can blow up in your face. If they had taken the Haskell route and not provided unwrap at all, this wouldn't have happened.
https://hackage.haskell.org/package/base/docs/Data-Maybe.htm...
"The fromJust function extracts the element out of a Just and throws an error if its argument is Nothing."
Haskell’s fromJust, and similar partial functions like head, are as dangerous as Rust’s unwrap. The difference is only in the failure mode. Rust panics, whereas Haskell throws a runtime exception.
You might think that the Haskell behavior is “safer” in some sense, but there’s a huge gotcha: exceptions in pure code are the mortal enemy of lazy evaluation. Lazy evaluation means that an exception can occur after the catch block that surrounded the code in question has exited, so the exception isn’t guaranteed to get caught.
Exceptions can be ok in a monad like IO, which is what they’re intended for - the monad enforces an evaluation order. But if you use a partial function like fromJust in pure code, you have to be very careful about forcing evaluation if you want to be able to catch the exception it might generate. That’s antithetical to the goal of using exceptions - now you have to write the code carefully to make sure exceptions are catchable.
The bottom line is that for reliable code, you need to avoid fromJust and friends in Haskell as much as you do in Rust.
The solution in both languages is to use a linter to warn about the use of partial functions: HLint for Haskell, Clippy for Rust. If Cloudflare had done that - and paid attention to the warning! - they would have caught that unwrap error of theirs at linting time. This is basically a learning curve issue.
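For reference, the relevant Clippy lints are restriction lints you have to opt into; a small sketch of doing that at the crate root and handling the Option instead (the function and names here are made up):

// In the crate root (lib.rs or main.rs): flag every use of unwrap/expect.
#![warn(clippy::unwrap_used, clippy::expect_used)]

// Return the Option instead of calling rules.first().unwrap().
fn first_rule(rules: &[String]) -> Option<&String> {
    rules.first()
}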
This is the equivalent of force-unwrap in Swift, which is strongly discouraged. Swift format will reject this anti-pattern. The code running the internet probably should not force unwrap either.
Funny, it's really the same reason Rust people say we should abandon C. Meanwhile in C, it is also common to hand out handles instead of indices precisely because of this problem.
It's pretty similar, but writing `for item in container { item.do_it() }` is a lot less error prone than the C equivalent. The ha-ha-but-serious take is that once you get that snippet to compile, there's almost nothing you could ever do to break it without also making the compiler scream at you.
In Rust, handing out indexes isn’t that common. It’s generally bad practice because your program will end up with extra, unnecessary bounds checks. Usually we program Rust just the same as in C - get a reference (pointer) to an item inside the array and pass that around. The Rust compiler ensures the array isn’t modified or freed while the pointer is held. (Which is helpful, but very inconvenient at times!)
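A small sketch of what that looks like in practice (names are made up): the function hands back a reference, and the borrow checker keeps the slice alive and unmodified while the reference is held.

fn pick<'a>(items: &'a [String], wanted: &str) -> Option<&'a String> {
    // No index escapes this function, so no bounds check at the call site.
    items.iter().find(|s| s.as_str() == wanted)
}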
In the first example, the match feels like extreme overkill. Vec::first() exposes the correct semantics (as does .iter().nth(0) for the more general case), returning an Option.
Note it's possible to write the example more succinctly (while having the same behavior) with:
https://docs.rs/itertools/latest/itertools/trait.Itertools.h...
That's fine but the matching exposes that you should also handle the "has more than one element" case. And in general it's pointing to the idea of "try to not separate checks from code that depends on those checks"
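Slice patterns make that extra case explicit; a small made-up example:

fn describe(values: &[i32]) -> &'static str {
    match values {
        [] => "empty",
        [_one] => "exactly one element",
        [_, _, ..] => "more than one element", // the case first() silently ignores
    }
}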
This made me wonder: why aren't there usually teams whose job is to keep an eye on the coding patterns used in the various codebases? Similar to how you have an SOC team monitoring traffic patterns, an Operations Support team monitoring health probes, KPIs, and logs, or a QA team writing tests against new code, maybe there would be value in keeping track of what coding patterns develop into over the lifetime of a codebase?
Like whenever I read posts like this, they're always fairly anecdotal. Sometimes there will even be posts about how large refactor x unlocked new capability y. But the rationale always reads somewhat retconned (or again, anecdotal*). It seems to me that maybe such continuous meta-analysis of one's own codebases would have great potential utility?
I'd imagine automated code smell checking tools can only cover so much at least.
* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, when you're planning work, if something just sounds like a lot of work, that's already going to be impactful, even if that judgement is incorrect (since the misjudgment may never come to light).
I work on one of these teams! At my company (~300 engineers), we have tech debt teams for both frontend and backend. I’m on the backend team.
We do the work that’s too large in scope for other teams to handle, and clearly documenting and enforcing best practices is one component of that. Part of that is maintaining a comprehensive linting suite, and the other part is writing documentation and educating developers. We also maintain core libraries and APIs, so if we notice many teams are doing the same thing in different ways, we’ll sit down and figure out what we can build that’ll accommodate most use cases.
There are. All the big tech companies have them. It’s just difficult to accomplish when you have millions of lines of code.
Is there an industry standard name for these teams that I somehow missed then?
Not exactly the question you asked, but you may want to read the chapter on “Large-Scale Changes” in the “Software Engineering at Google” book: https://abseil.io/resources/swe-book/html/ch22.html
You may wish to search for "readability at Google". Here is one article:
https://www.moderndescartes.com/essays/readability/
(I have not read this article closely, but it is about the right concept, so I provide it as a starting point since "readability" writ large can be an ambiguous term.)
See https://abseil.io/tips/ for some idea of the kinds of guidance these kinds of teams work to provide, at least at Google. I worked on the “C++ library team” at Google for a number of years.
These roles don’t really have standard titles in the industry, as far as I’m aware. At Google we were part of the larger language/library/toolchain infrastructure org.
Much of what we did was quasi-political … basically coaxing and convincing people to adopt best practices, after first deciding what those practices are. Half of the tips above were probably written by interested people from the engineering org at large and we provided the platform and helped them get it published.
Speaking to the original question, no, there were no teams just manually reading code and looking for mistakes. If buggy code could be detected in an automated way, then we’d do that and attempt to fix it everywhere. Otherwise we’d attempt to educate and get everyone to level up their code review skills.
This is a really cool insight, thank you!
> Half of the tips above were probably written by interested people from the engineering org at large and we provided the platform and helped them get it published.
Are you aware how those engineers established their recommendations? Did they maybe perform case studies? Or was it more just a distillation of lived experience type of deal?
Aside from just being immensely satisfying, these patterns of defensive programming may be a big part of how we get to quality GAI-written code at scale. Clippy (and the Rust compiler proper) can provide so much of the concrete iterative feedback an agent needs to stay on track and not gloss over mistakes.
Wow that’s amazing. The partial equality implementation is really surprising.
One question about avoiding boolean parameters, I’ve just been using structs wrapping bools. But you can’t treat them like bools… you have to index into them like wrapper.0.
Is there a way to treat the enum-style replacement for bools like normal bools, or is it just done with matches! or match statements?
It’s probably not too important but if we could treat them like normal bools it’d feel nicer.
I almost always prefer enums and matches! vs bool parameters.

Another way is to implement a trait that you find useful that encapsulates the logic. And don't forget you can do impl <Enum> {} blocks to add useful functions that execute regardless of which member of the enum you got.
Or if you hate all that, there's always if let, which is probably the closest to a regular bool.
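A small sketch of those options together, with made-up names: an enum replacing the bool parameter, an impl block helper built on matches!, and the if let form at the call site.

enum Retry {
    Enabled,
    Disabled,
}

impl Retry {
    // Helper so call sites read almost like a bool without exposing one.
    fn is_enabled(&self) -> bool {
        matches!(self, Retry::Enabled)
    }
}

fn send(message: &str, retry: Retry) {
    if retry.is_enabled() {
        println!("sending {message} with retries");
    }
    // `if let Retry::Enabled = retry { ... }` reads much the same.
}

fn main() {
    send("pizza order", Retry::Enabled);
    send("pizza order", Retry::Disabled);
}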
For ints you can implement the Deref trait on structs, so you can treat YourType(u64) as a u64 without destructuring. I couldn’t figure out a way to do that with YourType(bool).
The very useful TryFrom trait landed only in 1.34, so hopefully the code using unwrap_or_else() in a From impl predates that...
Actually the From trait documentation is now extremely clear about when to implement it (https://doc.rust-lang.org/std/convert/trait.From.html#when-t...)
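Roughly the shape the docs point at: a fallible conversion goes in TryFrom so the caller sees the Result, instead of a From impl that reaches for unwrap_or_else or panics. The type here is invented for illustration.

use std::convert::TryFrom;

// Hypothetical domain type: a percentage that must be 0..=100.
struct Percent(u8);

impl TryFrom<u8> for Percent {
    type Error = String;

    fn try_from(value: u8) -> Result<Self, Self::Error> {
        if value <= 100 {
            Ok(Percent(value))
        } else {
            Err(format!("{value} is not a valid percentage"))
        }
    }
}

fn main() {
    match Percent::try_from(250) {
        Ok(p) => println!("{}%", p.0),
        Err(e) => println!("{e}"),
    }
}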
As someone unfamiliar with Rust (yet! it's on my ever growing list of things I'd like to absorb into my brain), unwrap_or_else() sounds like part of the "What You See Is What I Threatened the Computer To Do" paradigm.
> INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite.
Oh wow! That's amazing! "I came to learn Computer Science, but I left with good bedside manners".
Immediately thought of INTERCAL :)
There are also the equally threatening and useful `map_or_else` (on Result and Option) and `ok_or_else` (on Option and experimentally on bool)
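A small made-up example of those lazy variants side by side:

fn lookup(id: u32) -> Option<String> {
    (id == 1).then(|| "margherita".to_string())
}

fn main() {
    // ok_or_else: turn an Option into a Result, computing the error lazily.
    let found: Result<String, String> =
        lookup(2).ok_or_else(|| "no such order".to_string());

    // map_or_else: handle both branches without unwrapping.
    let label = found.map_or_else(|err| format!("error: {err}"), |name| name);

    // unwrap_or_else: fall back to a computed default instead of panicking.
    let name = lookup(1).unwrap_or_else(|| "unknown".to_string());

    println!("{label} / {name}");
}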
What's really nice is where you don't need defensive programming in Rust.
If your function gets ownership of, or an exclusive reference to an object, then you know for sure that this reference, for as long as it exists, is the only one in the entire program that can access this object (across all threads, 3rd party libraries, recursion, async, whatever).
References can't be null. Smart pointers can't be null. Not merely "can't" meaning not allowed and may throw or have a dummy value, but just can't. Wherever such type exists, it's already checked (often by construction) that it's valid and can't be null.
If your object's getter lends an immutable reference to its field, then you know the field won't be mutated by the caller (unless you've intentionally allowed mutable "holes" in specific places by explicitly wrapping them in a type that grants such access in a controlled way).
If your object's getter lends a reference, then you know the caller won't keep the reference for longer than the object's lifetime. If the type is not copyable/cloneable, then you know it won't even get copied.
If you make a method that takes ownership of `self`, then you know for sure that the caller won't be able to call any more methods on this object (e.g. `connection.close(); connection.send()` won't compile, `future.then(next)` only needs to support one listener, not an arbitrary number).
If you have a type marked as non-thread safe, then its instances won't be allowed in any thread-spawning functions, and won't be possible to send through channels that cross threads, etc. This is verified globally, across all code including 3rd party libraries and dynamic callbacks, at compile time.
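The `connection.close(); connection.send()` point in miniature (types are made up): a method that takes `self` by value turns the stale call into a compile error rather than a runtime one.

struct Connection;

impl Connection {
    fn send(&self, message: &str) {
        println!("sent: {message}");
    }

    // Takes ownership of `self`, so the connection can't be used afterwards.
    fn close(self) {}
}

fn main() {
    let conn = Connection;
    conn.send("hello");
    conn.close();
    // conn.send("again"); // error[E0382]: borrow of moved value: `conn`
}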
I'm not reading a solid argument here for not using ..Default::default() - just that doing so suggests you may introduce a bug and therefore should be explicit about EVERYTHING instead? Ugh. Hard disagree.
Care to say why you disagree?
Using ..Default::default() means “whatever additional fields are added later, I don’t care”. Which is great until someone needs to add a field to the struct, and they rely on the compiler to tell them all the places that don’t have a value for the field (so they can pass the right value depending on the situation.) Then the callers with Default are missed, and bugs can result.
Any time you say “I don’t care what happens in the future here”, you better have a good reason for that to be the case, IMO.
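A sketch of the trade-off with invented field names: when a field is added later, only the fully explicit initializer becomes a compile error; the one using ..Default::default() keeps compiling and silently picks up the default.

#[derive(Default)]
struct RetryConfig {
    max_attempts: u32,
    backoff_ms: u64,
    // Imagine `jitter_ms: u64` gets added here later.
}

fn main() {
    // Breaks loudly (missing field) if the struct grows - often what you want.
    let explicit = RetryConfig { max_attempts: 3, backoff_ms: 100 };

    // Keeps compiling and silently uses the default for any new field.
    let lenient = RetryConfig { max_attempts: 3, ..Default::default() };

    println!("{} {}", explicit.backoff_ms, lenient.backoff_ms);
}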
In Pattern: Defensively Handle Constructors, it recommends using a nested inner module with a private Seal type. But the Seal type is unnecessary. The nested inner module is all you need, a private field `_private: ()` is only settable from within that inner module, so you don't need the extra private type.
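A sketch of that lighter variant (names invented): the zero-sized private field keeps outside code from building the struct literally, so every instance has to go through the checked constructor.

mod validated {
    pub struct Quantity {
        pub value: u32,
        // Only code inside this module can provide this field,
        // so struct literals outside it won't compile.
        _private: (),
    }

    impl Quantity {
        pub fn new(value: u32) -> Option<Quantity> {
            (value > 0).then(|| Quantity { value, _private: () })
        }
    }
}

fn main() {
    if let Some(q) = validated::Quantity::new(2) {
        println!("{}", q.value);
    }
    // validated::Quantity { value: 0, _private: () } // error: field `_private` is private
}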
Some of the advice is applicable to C++ as well. Non-exhaustive enum matches and similar things all have warnings, and setting warnings as errors helps.
Loved the article, such a nice read. I am still slowly ramping up my proficiency in Rust and this gave me a lot of things to think through. I particularly enjoyed the temporary mutability pattern, very cool and didn't think about it before!
> I particularly enjoyed the temporary mutability pattern, very cool and didn't think about it before!
It's not too uncommon in other languages (sometimes under the name "immediately invoked function expression"), though depending on the language you may see lambdas involved. For example, here's one of the examples from the article ported to C++:

auto data = []() {
    auto data = get_vector();
    auto temp = compute_something();
    data.insert_range(data.end(), temp);
    std::ranges::sort(data);
    return data;
}();
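And the Rust shape of the same idea, reusing the helper names from the snippet above (stub bodies are invented so it stands alone):

fn get_vector() -> Vec<i32> { vec![3, 1, 2] }
fn compute_something() -> Vec<i32> { vec![5, 4] }

fn main() {
    // `data` is only mutable inside the block; the final binding is immutable.
    let data = {
        let mut data = get_vector();
        data.extend(compute_something());
        data.sort();
        data
    };
    println!("{data:?}");
}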
This is a great article. Anyone know more content like this?
The tech industry is full of brash but lightly-seasoned people resurrecting discredited ideas for contrarianism cred and making the rest of us put down monsters we thought we'd slain a long time ago.
"Defensive programming" has multiple meanings. To the extent it means "avoid using _ as a catch-all pattern so that the compiler nags you if someone adds an enum arm you need to care about", "defensive" programming is good.
That said, I wouldn't use the word "defensive" to describe it. The term lacks precision. The above good practice ends up getting mixed up with the bad "defensive" practices of converting contract violations to runtime errors or just ignoring them entirely --- the infamous pattern in Java codebases of scrawling the following line of graffiti all over the clean lines of your codebase:
if (someArgument == null) {
    throw new NullPointerException("someArgument cannot be null");
}

That's just noise. If someArgument can't be null, let the program crash.

Needed file not found? Just return ""; instead.
Negative number where input must be contractually not negative? Clamp to zero.
Program crashing because a method doesn't exist? if not hasattr(self, "blah"): return None
People use the term "defensive" to refer to code like the above. These are programs that "defend" against crashes by misbehaving. Such programs end up being flakier and harder to debug than programs that are "defensive" in the sense that they continually validate their assumptions and crash if they detect a situation that should be impossible.
The term "defensive programming" has been buzzing around social media the past few weeks and it's essential that we be precise that
1) constraint verification (preferably at compile time) is good; and
2) avoidance of crashes at runtime at all costs after an error has occurred is harmful.
For a second I thought you were advocating for something of those, and I had a rant primed up.
Yes. Defensively handle all the failure modes you know how to handle, but nothing else. If you're writing a service daemon and the user passes in a config filename that doesn't exist, crash and say why. Don't try to guess, or offer up a default config, or otherwise try to paper over the idea that the user asked you to do something impossible. Pretty much anything you try other than just crashing is guaranteed to be wrong.
And for the love of Knuth, don't freaking clamp to zero or otherwise convert inputs into a semantically different value than specified. (Like, it's fine to load a string representation of a float into an IEEE 754 datatype if you're not working with money or other exact values. But don't parse 256 as 255 and call it good enough. It isn't.)
There's probably a "Pay it forwards" lesson from Rust's diagnostics too.
So much end user software tries to be "friendly" by just saying "An error occurred" regardless of what's wrong or whether you can do anything about it. Rust does better and it's a reminder that you can too.
I think the "if something bad happens throw an exception" thing does have some value, namely that you can make it very explicit in the code that this is a use case that you can't handle, and not that you merely forgot something or wrote a bug.
In PHP a pattern I often employ is to give the match statement a default arm that calls a little unreachable() helper.
Where unreachable is literally just:

static function unreachable() {
    throw new Exception('Unreachable');
}

Now, we don't actually need the default match arm. If we just leave it off entirely, and someone passes in something we can't match, it'll throw a PHP error about unmatched cases.

But what I've found is that if I do that, then other programmers go in later and just add in the case to the match statement so it runs. Which, of course, breaks other stuff downstream, because it's not a valid value we can actually use. Or worse: they add a default match arm that doesn't work! Just so the PHP interpreter doesn't complain.
But with this, now the reader knows: "the person who wrote this considered what happens when something bad is passed in, and decided we can't handle it. There's probably a good reason for that." So they don't touch it.
Now, PHP has unique challenges because it's so dynamic. If someone passes in the wrong thing we might end up coercing null to zero and messing up calculations, or we might end up truncating a float or something. Ideally we prevent this with enums, but enums are a pain in the ass to write because of autoloading semantics (I don't want to write a whole new file for just a few cases)
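For comparison, Rust spells the same "documented dead end" with the unreachable! macro; the values here are made up:

fn discount_percent(code: u8) -> u32 {
    match code {
        0 => 0,
        1 => 10,
        2 => 25,
        // Tells the reader this case was considered and cannot be handled,
        // rather than inviting someone to "fix" it with a bogus default arm.
        _ => unreachable!("invalid discount code {code}"),
    }
}

fn main() {
    println!("{}", discount_percent(2));
}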
Nice article. The problem of multiple booleans is just one instance of a more general problem: when a function takes multiple arguments of the same type (i32, String, etc.). The newtype pattern allows you to create distinct types in such cases and enforce correctness at compile time.
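The classic illustration of that, with made-up types: two u64 parameters that can no longer be swapped at the call site.

// Newtypes make the two u64 arguments impossible to mix up.
struct UserId(u64);
struct OrderId(u64);

fn assign_order(user: UserId, order: OrderId) {
    println!("user {} gets order {}", user.0, order.0);
}

fn main() {
    assign_order(UserId(42), OrderId(7));
    // assign_order(OrderId(7), UserId(42)); // error[E0308]: mismatched types
}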
This is one of the best Rust articles I've ever read. It's obviously from experience and covers a lot of _business logic_ foot guns that Rust doesn't typically protect you against without a little bit of careful coding that allows the compiler to help you.
So many rust articles are focused on people doing dark sorcery with "unsafe", and this is just normal every day api design, which is far more practical for most people.
This was posted with a (mostly) healthy discussion on lobste.rs, here's the link https://lobste.rs/s/ouy4dq/patterns_for_defensive_programmin...
>Using _ as a placeholder for unused variables can lead to confusion
I would have guessed linters would have complained about what's being suggested there. Is there something special about the var: _ thing that avoids it?
Defensive programming is a widely known antipattern : https://wiki.c2.com/?DefensiveProgramming
The 'defensive' nature refers to the mindset of the programmer (like when guilty people get defensive when asked a simple question): they aren't sure of anything in the code at any point, so they need to constantly check every invariant.
Enterprise code is full of it, and it can quickly lead to the program becoming like 50% error handling by volume, many of the errors being impossible to trigger because the app logic is validating a condition already checked in the validation layer.
Its presence usually betrays a lack of understanding of the code structure, or even worse, a faulty or often bypassed validation layer, which makes error checking in multiple places actually necessary.
One example is validating every parameter in every call layer, as if the act of passing things around has the ability to degrade information.
The kind of defensive programming you're talking about has almost nothing to do with the contents of this article. This article is mostly just about structuring code so that bugs appear at compile time rather than runtime.