I want to raise a point that so far I have not seen in the top N comments: often simplicity and cleverness are not opposites, because you can only find the simplest way to express something when you are clever about it. Your solution doesn't have to "smell clever"; it can be deceptively simple, making the casual reader think "of course it works this way". But if that casual reader had to come up with such a solution themselves, they might invent something unnecessarily convoluted, because their mind has not cut through the problem like a knife through butter. Finding the simplest abstraction that can still represent everything you need to represent, without unpleasant consequences like being very slow or super difficult to understand, is the actual cleverness. Writing code no one can understand is not actually that clever. It might be clever in terms of one person solving a problem at a specific time, but it is not clever in terms of that code being maintainable, adaptable, readable, and improvable in the mid and long term.
We don't write everything in lambda calculus or SKI calculus or something like that, even though those systems are made up of very simple parts; complexity would emerge from composing them. They are not the right choice for most tasks. How do you make code readable? Can you write something that is very clear and performant at the same time, while preventing yourself from programming yourself into a corner? That's where you need to get really clever.
Yeah, I don't know about other fields, but the way "simple", "complex", and "clever" are used in software is so backwards and inconsistent. These terms mean nothing in tech and we should excise them.
Much of what MacKinnon is referring to here would be seen as simple, complex, or clever by many people. He advocates for static languages but when he does so he talks about "useful" and "tradeoffs", which are radically better terms if you actually want to discuss these topics.
One case he talks about is an abstraction over Mongo, but then the queries were designed for Mongo. Is that an issue of simple or complex? I have no idea; I'd say neither. The issue was that you abstracted away essential properties of your system.
TBH, despite the title, what he says is really much more about tradeoffs and actual concrete ways to write software. I didn't hear him talk that much about "simple" or "complex"; instead he gave reasonable cases where things went right or wrong, and more nuanced opinions on why certain techniques and technologies lead to better outcomes. Ultimately any productive conversation ends up that way: if you find yourself saying "simple" or "complex" a lot in a conversation, you're probably doing it wrong.
Legibility and domain modeling get mixed up with business models and incentives, so good-simple just isn't the same thing to the people paying for simple as it is to the ones fascinated by simple.
What Mongo solved was user adoption and being legible to a specific type of person making a business decision with very high LTV (a web developer handling JSON and needing a scalable database they could dump it into). It's aligned with the purchaser's needs and incentives, and honestly isn't awful even if it does end up becoming enormously expensive later, because you might just really need a database your team understands and can use ASAP.
Mongo churns because what's legible to a purchaser and what captures the essential qualities of some kind of application domain or abstraction are different things. Once immediate growth is covered and you can afford someone who knows a lot about databases to work on yours, they're like "what the hell", because it turns out that it didn't do what you'd typically consider the bare minimum of a database: https://news.ycombinator.com/item?id=40901573
But what is consistency, and how nitpicky are you about it if your problems seem to be solved well enough that something’s no longer a problem? I like software because there is a certain mystique behind understanding things like “public key cryptography would be useful in any universe with numbers” or “you really couldn’t improve on TCP for what it does”, because these tie in to real world problems but are also universally applicable, so distilling them to their essential form reveals a kind of API of the universe. But you don’t need that to get value from software and computers, you just need to solve your problem, that being, you need a database now or you lose a sale. What’s “simple” or even what counts as a database isn’t legible between the two.
Tech has a lot of money wrapped up in it, so there are a lot of attempts at capturing mindshare and defining things, because the payoff is enormous. It just doesn't sound fun to me to build up a house of cards selling a database that isn't actually a database and deciding to figure out that whole database thing later, when churn, CAC, LTV, and market-adoption metrics tell me to. But also, customers really did need to shove JSON into databases and weren't too picky about the particulars. So fuck it, this is a database.
Simplicity and complexity are not opposites. Things become complex when we attend to multiple simple things at the same time.
For example, we have an algorithm that requires a key-value store with typical semantics. For the purposes of our algorithm we could simulate that store using an array and straightforward search and insert routines that just loop through the array without trying to be smart. Then we could attend to the details of that key-value store and use a more efficient approach, this time without thinking about our original algorithm, or perhaps with a clear understanding of its access pattern.
In both cases the task at hand won't be more complex than necessary. But if we try to do both at the same time, it will be way more complex.
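To make that first step concrete, here is a minimal sketch in Go, assuming string keys and integer values (all names here are mine and purely illustrative). The store is deliberately dumb, but its correctness is obvious, and it can later be swapped for a hash map or tree without touching the algorithm that uses it:

    package main

    import "fmt"

    type entry struct {
        key   string
        value int
    }

    // naiveStore simulates a key-value store with a plain slice,
    // without trying to be smart.
    type naiveStore struct {
        entries []entry
    }

    // Get scans the slice linearly: O(n), but trivially correct.
    func (s *naiveStore) Get(key string) (int, bool) {
        for _, e := range s.entries {
            if e.key == key {
                return e.value, true
            }
        }
        return 0, false
    }

    // Put overwrites an existing key or appends a new entry.
    func (s *naiveStore) Put(key string, value int) {
        for i := range s.entries {
            if s.entries[i].key == key {
                s.entries[i].value = value
                return
            }
        }
        s.entries = append(s.entries, entry{key, value})
    }

    func main() {
        var s naiveStore
        s.Put("a", 1)
        s.Put("a", 2)
        fmt.Println(s.Get("a")) // 2 true
    }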
Here the separation is clear, but in real programming it is not, and discovering these lines of separation is basically the essence of building a system. I think Brad Cox was occupied with exactly that in his Software-IC concept, and I share his view that this has yet to happen. The things we build are not as composable as they should be, as they are in other industries.
An example: there is a text "shaping" library that takes a font and an input string and produces a sequence of glyphs to typeset that string. Modern fonts and certain scripts are very complex, and this task is not trivial. Now, this particular library takes a UTF-8 string. Which means it has a UTF-8 decoder inside.
But a text shaping library does not need a UTF-8 decoder. The product it is used in will almost certainly have one already; or, if it works in UTF-16 or, like Python, uses a flexible internal encoding, it may not need one at all, and would have to add a UTF-8 encoding step only to communicate with that library. A simpler design would be to remove the UTF-8 decoder and make the library accept Unicode code points as integers. If we have UTF-8, it is trivial to decode the string and feed the resulting code points into the shaper; if we don't, it is equally trivial to use the library with any other encoding.
(I guess I ended up with a slightly different example than I intended.) Anyway, removing the UTF-8 decoder here would result in a simpler and more universal design, although (and this is an unexpected development) it may superficially look more complex to many people who have the "standard" UTF-8 string and just need to get the job done.
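A minimal sketch of the decoupled design in Go, where the shaper consumes code points and the caller owns decoding. The types and the one-glyph-per-code-point "shaping" are placeholders of mine, not how any real shaping library works:

    package main

    import "fmt"

    // Illustrative stand-ins; a real shaper is far more involved.
    type Font struct{ name string }
    type Glyph struct{ id rune }

    // shape consumes Unicode code points directly, so no particular
    // byte encoding is privileged by the library.
    func shape(f Font, codepoints []rune) []Glyph {
        glyphs := make([]Glyph, 0, len(codepoints))
        for _, cp := range codepoints {
            glyphs = append(glyphs, Glyph{id: cp}) // placeholder "shaping"
        }
        return glyphs
    }

    func main() {
        f := Font{name: "demo"}
        // A UTF-8 caller decodes first; in Go, []rune(s) does exactly that.
        glyphs := shape(f, []rune("héllo"))
        fmt.Println(len(glyphs)) // 5 code points, though "héllo" is 6 UTF-8 bytes
    }

A UTF-16 or other caller would convert from its own encoding instead, with no forced round trip through UTF-8.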
If this makes the library harder to use because most people will have UTF-8 strings, I’m not sure that’s a win.
In other words, circling back to Brad Cox's Software ICs, we're all using devboards and Arduinos instead, because those look simple to newbies and save a little glue work here and there.
In the hardware world, it's fine to use devboards and Arduinos to prototype things, but then you're supposed to stop being a newbie, stop using breadboards, and actually design circuits using the relevant ICs directly, with a minimal amount of glue in between. Unfortunately, in software, manufacturing costs are too cheap to meter, so we're fine with using bench-top prototypes in production, because we're not the ones paying the costs for the waste anyway; our users are.
(Our users, and hardware developers too, as they get the blame for "low battery life" of products running garbage software.)
> Unfortunately, in software, manufacturing costs are too cheap to meter, so we're fine with using bench-top prototypes in production, because we're not the ones paying the costs for the waste anyway; our users are.
I’m not sure what you’re trying to say - UTF-8 is the standard text encoding by a mile. It’s not a prototype.
UTF-8 here is like having a devboard with a USB controller chip, complete with power circuitry and a USB port. It could all be high-quality components, and it's super useful to have on the board for prototyping, but in the actual product you aren't going to ship three devboards wired together by SPI, each carrying some combination of USB, Ethernet, Wi-Fi, and Bluetooth controllers and other stuff, all of it disabled or unused, just because you needed three ICs and found it easier to order devboards. You're going to use the ICs you care about, supply the USB controller, port, and necessary wiring yourself, and otherwise use the minimum number of extra components necessary.
So, in the context of Mikhail_Edoshin's example, I'm saying that this "text shaping library" they mention is basically a devboard: full of components not necessary for its core functions. Most software libraries are like that, so applications using them are basically like a device built by wiring up a bunch of devboards.
The reason this is so is that there is no way to say "the library accepts UTF-32; for other encodings use the standard decoder", because there is no such decoder. "For want of a nail." So it circles back to the idea of easily composable software, which is not yet there. Everybody brings their own nails, and there is no way to move nails between projects.
What do you mean that there is no standard decoder? For what, UTF-8?
I agree on the composability. Accepting Unicode code points is more generic. I guess it depends on your environment. If every caller will combine it with a UTF-8 decoder, you might want to include it.
I meant something like a standard component that everybody uses (or a selection of components that vary here and there but are interchangeable in general). A catalog of such components; a - well, I'll say it - a software factory that builds them: a fast one, a slower one that is very detailed about errors, one that accepts single bytes, one that takes a pointer to a string, and so on, all different, yet all working the same way as far as the main task is concerned.
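As a hedged illustration of what such interchangeable components could look like in Go (the names are hypothetical, not from any real catalog): one interface, two implementations that differ in error detail but are drop-in replacements for each other as far as the main task is concerned:

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    // Decoder turns encoded bytes into Unicode code points.
    type Decoder interface {
        Decode(b []byte) ([]rune, error)
    }

    // fastDecoder uses Go's built-in conversion; invalid bytes
    // silently become U+FFFD, and it never reports an error.
    type fastDecoder struct{}

    func (fastDecoder) Decode(b []byte) ([]rune, error) {
        return []rune(string(b)), nil
    }

    // strictDecoder is slower but reports exactly where decoding failed.
    type strictDecoder struct{}

    func (strictDecoder) Decode(b []byte) ([]rune, error) {
        var out []rune
        for i := 0; i < len(b); {
            r, size := utf8.DecodeRune(b[i:])
            if r == utf8.RuneError && size == 1 {
                return nil, fmt.Errorf("invalid UTF-8 at byte offset %d", i)
            }
            out = append(out, r)
            i += size
        }
        return out, nil
    }

    func main() {
        for _, d := range []Decoder{fastDecoder{}, strictDecoder{}} {
            runes, err := d.Decode([]byte("héllo"))
            fmt.Println(len(runes), err) // both print: 5 <nil>
        }
    }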
While I agree with MacKinnon in principle, we must acknowledge that simplicity is a luxury enabled by the surplus of compute power. There are domains where 'cleverness' isn't just a vanity project but a hard requirement. Take the fast inverse square root from Quake III, or modern zero-copy networking in high-throughput systems like Kafka. If we prioritized 'simple and readable' code in those contexts, we would be leaving orders of magnitude of performance on the table.
Sometimes, 'clever' code is simply code that refuses to ignore the underlying reality of the hardware. The danger isn't cleverness itself, but unnecessary cleverness applied to problems where the bottleneck is human understanding rather than machine execution.
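For reference, the Quake III trick mentioned above (the original is C; this is a straightforward transliteration to Go): a bit-level initial guess plus one Newton-Raphson step, trading a little accuracy for speed on 1990s hardware. On modern CPUs a hardware rsqrt instruction is typically faster, but it illustrates the genre:

    package main

    import (
        "fmt"
        "math"
    )

    // fastInvSqrt approximates 1/sqrt(x) using the famous magic constant
    // and a single Newton-Raphson refinement step.
    func fastInvSqrt(x float32) float32 {
        i := math.Float32bits(x)   // reinterpret the float's bits as an integer
        i = 0x5f3759df - (i >> 1)  // bit-level initial guess
        y := math.Float32frombits(i)
        return y * (1.5 - 0.5*x*y*y) // one Newton-Raphson step
    }

    func main() {
        fmt.Println(fastInvSqrt(4)) // ~0.499
        fmt.Println(1 / math.Sqrt(4)) // exact: 0.5
    }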
Also reminds me of the series of humorous posts by Grug saying essentially the same thing - https://grugbrain.dev/
"grug brain developer not so smart, but grug brain developer program many long year and learn some things although mostly still confused"
"grug brain developer try collect learns into small, easily digestible and funny page, not only for you, the young grug, but also for him because as grug brain developer get older he forget important things, like what had for breakfast or if put pants on"
I see it this way: simpler code can be smaller, say half the size. It takes half the time to write (at most), half the time to read, and half the time to compile and execute. Those three factors of two compound (2 × 2 × 2 = 8), so that already gives it an eight-fold advantage.
You'd better have a good reason for spending the time and money to do more than the simple solution. Engineering is all about money spent for results. Not cleverness, except indirectly.
"Clear is better than clever" is often touted as a key part of the Go philosophy: https://www.youtube.com/watch?v=PAAkCSZUG1c&t=875s
You can see the impacts of this in the language design. I find Go code to be boring and fairly easy to understand.
What I truly enjoy is software that has:
- One tiny piece of extremely clever abstraction.
- A huge amount of simple pieces that would be more complex without that tiny piece.
In other words, the clever abstraction can be justified if it enables lots of simplicity. It has to do it right now, not in the future.
If your kernel is complicated but writing drivers is simple, people won't even notice the abstractions. They will think of the system as "simple", without realizing there is some clever stuff making that "simple" possible.
I think this is where software's true value lies. But the challenge is delivering "clever" without "complex", and without leaks in the abstraction.
If there's a leaky, tangled messy piece of incredibly complex software, but it's small and enables lots of other pieces to be simpler, then it's great.
That's where typical ideas about complexity fail: they select too narrow a scope. It's easy to point at a specific part of the code and call it complicated, without realizing it enables other parts to be simpler.
I've seen my fair share of refactorings that ended up simplifying the core logic but making whole sections that depend on it worse.
I completely agree. I don’t think it’s bad for code to be complex if it’s essential complexity that has to go somewhere. To me it’s a red flag when someone cares more about making the code look simple than solving the problem.
I guess what I mean is that you can't just assume you can get clever to work if it doesn't capture some inherent quality of the problem, or if it relies on unrealistic assumptions. In other words, you often can't just define the abstraction and requirements and iterate/vibe your way from there. Things like the CAP theorem, or the discrete number of cells in a small blood sample, make certain abstractions literally impossible to deliver.
Oh, I see.
You're reinforcing the "It has to do it right now, not in the future." part of what I said.
This simple requirement (it has to enable simplicity right now) is often enough of a reality check on designing abstractions.
In his "Power of Simplicity"[1] talk, Alan Kay had a great illustration of this specific phenomenon using astronomy:
Before Johannes Kepler had the insight of describing the orbits of the planets with ellipses, people were using (conceptually simpler) circles, which didn't completely match the observed movement of celestial bodies such as Mars, resulting in complicated circles-within-circles orbits to try to model reality. By introducing a more complex basic shape (the ellipse instead of the circle) that happened to match the underlying reality better, the overall description of orbits got greatly simplified.
It's a phenomenon I've seen a few times in my career so far: while often there's complex code because there are actually complex edge cases to handle (essential complexity), sometimes it's really because the data structure used to model the thing you're handling slightly misses the mark, making things fit almost-but-not-quite, and many operations done around the data can be greatly simplified (if not avoided altogether) by changing the underlying data structure.
(Also, Alan Kay apparently did another talk called "Is it really complex, or did we just make it complicated"[2] that seems pertinent to the thread, though I haven't watched it yet.)
[1] https://www.youtube.com/watch?v=NdSD07U5uBs [2] https://www.youtube.com/watch?v=ubaX1Smg6pY
This is the correct way. Make it unnecessary to look at and into the clever code until it's absolutely necessary to look at and into the clever code.
The vast majority of those who are affected by what you're doing should be asking themselves why you never seem to be doing anything difficult.
Really solid discussion on maintainability. I liked the recurring theme that clarity beats cleverness, especially the points on over-abstraction, RFCs as a cultural tool, and documentation focusing on why instead of restating code. The examples from consulting and legacy systems made it feel very grounded and practical.
I try to put clever solutions in their own file so that I can later replace them with something boring.
Obvious is good. Optimization can come later. Cleverness is for when you are out of options.
The programming landscape 30+ years ago, with its severely constrained resources, strongly biased our idea of "good software" in favor of cleverness. I think we can say we know better now, having been responsible for picking up someone else's clever code myself.
> severely constrained resources
Energy is a resource. Mobile computing devices demonstrate this constraint already. I predict that what is old will become new again.
Do we? I feel the layers of abstraction are quite extensive now. They are anything but simple.
(Good) Abstraction is there to hide complexity. I don't think it's controversial to say that software has become extremely complex. You need to support more spoken languages, more backends, more complex devices, etc.
The most complex thing to support is people's resumes. If carpenters were incentivized like software devs are, we'd quickly start seeing multi-story garden sheds in reinforced concrete, because every carpenter's dream job at Bunkers Inc. pays 10x more.
People talk about "complexity" and "simplicity" in code without defining them. I've arrived at a way of looking at it that I think makes it reasonably unambiguous. I prefer "opaque" to "complicated" for this concept, but I think it's really the same thing people mean when they talk about code being over-complicated.
Opaque code is code that requires you to form an unnecessarily large, detailed mental model of it in order to answer a particular question you may have about it.
People rarely read code in its entirety, like a novel. There is almost always a specific question they want to answer. It might be "how will it behave in this use case?", "how will this change affect its behaviour?" or "what change should I make it to achieve this new behaviour?". Alternatively, it might be something more high level, but still specific, like "how does this fit together?" (i.e. there's a desire to understand the overall organisational principles of the code, rather than a specific detail).
Opaque code typically:
* Requires you to read and understand large volumes of what should be irrelevant code in order to answer your question, often across multiple codebases.
* Requires you to do difficult detective work in order to identify what code needs to be read and understood to answer the question with confidence.
* Only provides an answer to your question with caveats/assumptions about human behaviour, such as "well unless someone has done X somewhere, but I doubt anyone would do that and would have to read the entire codebase to be sure".
Of course, this doesn't yield some number as to how "opaque" the code is, and importantly it depends on the question you're asking. A codebase might be quite transparent to some questions and opaque to others. It can be a very useful exercise to think about what questions people are likely to seek answers for from a given codebase.
When you think about things this way, you come to realise a lot of supposedly good practices actually exacerbate code opacity, often for the sake of "reusability" of things that will never be reused. Dependency injection containers are a bête noire of mine for this reason. There's nothing wrong with dependency injection itself (giving things their dependencies rather than having them create them), but DI containers tend to end up being dependency obfuscators, and the worst ones import a huge amount of quirky, often poorly-documented behaviour into your system. They are probably the single biggest cause of having to spend an entire afternoon trawling through code, often including that of the blasted container itself (and runtime config!), to answer what should be a very simple and quick question about a codebase.
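To illustrate the distinction (a minimal sketch in Go; all names are hypothetical): plain dependency injection with no container. The dependency is visible in one constructor signature, so "what does this need, and where did it come from?" is answered by reading the call site rather than tracing container configuration:

    package main

    import "fmt"

    // Mailer is the dependency; anything that can send mail satisfies it.
    type Mailer interface {
        Send(to, msg string) error
    }

    // logMailer is a stand-in implementation that just prints.
    type logMailer struct{}

    func (logMailer) Send(to, msg string) error {
        fmt.Printf("mail to %s: %s\n", to, msg)
        return nil
    }

    type Signup struct{ mailer Mailer }

    // NewSignup receives its dependency instead of creating or locating it.
    func NewSignup(m Mailer) *Signup { return &Signup{mailer: m} }

    func (s *Signup) Register(email string) error {
        return s.mailer.Send(email, "welcome")
    }

    func main() {
        s := NewSignup(logMailer{}) // all wiring is visible at the call site
        _ = s.Register("a@example.com")
    }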
"Clever" is a different thing to "complicated" or "opaque", and it's not always a negative. People can certainly make code much more opaque by doing "clever" things, but sometimes (maybe rather too rarely) they can do the opposite. A small, well thought out bit of "clever" code can often greatly reduce the opacity of a much larger amount of code that uses it. Thinking about what a particular "clever" idea will do to the opacity (as defined above) of the codebase can be a good way to figure out whether it is worth doing.