What I love about this is how it demonstrates the impact the Clean Code advocates and movement have had on software. The post author notes that 200-line functions now feel questionable. When Clean Code was first introduced, functions that ran thousands of lines were a regular occurrence, and there was no established norm that we should strive for brevity.
Similarly, unit tests have become accepted practice and have improved overall project quality, even if TDD remains uncommon.
Clean Code is, IMO, too extreme in many respects, but it shifted the conversation and conventions in a much-needed direction.
The final point he makes is a major missing piece: using polymorphism and type systems well is difficult and requires up-front thought and design work that many programmers are both unaccustomed to and uninterested in. So conditionals remain the norm. That is unfortunate, but also predictable.
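To make that contrast concrete, here is a minimal TypeScript sketch (the shape types and area logic are invented for illustration). The conditional version is the path of least resistance; the polymorphic version demands that the interface be designed before any concrete case exists, which is exactly the up-front work most people skip.

```typescript
// The familiar conditional approach: easy to write, and it grows a
// new branch every time a case is added.
type ShapeKind = "circle" | "rectangle";

interface ShapeData {
  kind: ShapeKind;
  radius?: number;
  width?: number;
  height?: number;
}

function area(shape: ShapeData): number {
  if (shape.kind === "circle") {
    return Math.PI * (shape.radius ?? 0) ** 2;
  } else {
    return (shape.width ?? 0) * (shape.height ?? 0);
  }
}

// The polymorphic approach: each case owns its behaviour, but the
// interface has to be designed up front.
interface Shape {
  area(): number;
}

class Circle implements Shape {
  constructor(private radius: number) {}
  area(): number {
    return Math.PI * this.radius ** 2;
  }
}

class Rectangle implements Shape {
  constructor(private width: number, private height: number) {}
  area(): number {
    return this.width * this.height;
  }
}

// Callers no longer branch on kind at all.
const shapes: Shape[] = [new Circle(2), new Rectangle(3, 4)];
const total = shapes.reduce((sum, s) => sum + s.area(), 0);
```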
Good post. Saved to cite the next time someone in code review asks me to break up a large function that is a series of linear steps that won't be reused.
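For what it's worth, the kind of function I have in mind looks something like this hypothetical sketch: every step runs exactly once, in order, and section comments do the work that a pile of single-use helper functions would otherwise do.

```typescript
// Hypothetical report-generation pipeline. None of the steps is
// reused anywhere else, so extracting them would only scatter the
// linear flow across the file.
interface Row {
  id: number;
  amount: number;
  valid: boolean;
}

function buildReport(rows: Row[]): string {
  // -- 1. Validate input -------------------------------------------
  const usable = rows.filter((r) => r.valid && r.amount >= 0);

  // -- 2. Aggregate --------------------------------------------------
  const total = usable.reduce((sum, r) => sum + r.amount, 0);
  const mean = usable.length > 0 ? total / usable.length : 0;

  // -- 3. Format output ----------------------------------------------
  const lines = usable.map((r) => `row ${r.id}: ${r.amount.toFixed(2)}`);
  lines.push(`total: ${total.toFixed(2)} (mean ${mean.toFixed(2)})`);
  return lines.join("\n");
}
```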
I think Clean Code is one of those things that's a reasonably good place to start, but as a text it's a little infantilising and overly prescriptive for people with significant experience. It ignores the trade-offs its recommendations make in favour of presenting something that appears self-consistently opinionated. At a certain level of critical thinking it becomes clear that the recommendations in Clean Code are opinions built on broadly subjective criteria - it is something of an influencer tome more than a book about engineering or developer psychology. I would prefer a methodology more rooted in objectivity, with clear, honest evaluations of the trade-offs involved. For instance, Clean Code broadly shies away from the negative aspects of atomising your code base, which Codin' Dirty touches on.
In this sense I think there are criteria we could discuss more concretely: code density and code sparsity, particularly where coherent functionality ends up split across many files and over many more hundreds of lines than it would occupy in a few larger methods. This is not to say that what Codin' Dirty describes is right in an absolute sense - I would not be so prescriptive - but when organisations think about how many developers might work on a project, these metrics should probably be taken into consideration. Small teams are, generally speaking, going to work better on denser code bases with larger methods, while larger teams may work better where the code base is more spread out, so they are less likely to tread on each other's toes.
In general I get a sense of dishonesty in the world of Clean Code and the like, where we pretend that "clean" code is not only more maintainable but also more performant and just better - but these claims are, in general, not true. The most optimised code is pretty much the complete antithesis of the Clean Code recommendations: it usually has as little indirection as possible and requires a high level of expertise to understand.
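As a toy illustration of what I mean (entirely invented example, not from the post): the optimised style tends to flatten objects into contiguous arrays and keep the hot loop branch-light, trading a polymorphic design for cache-friendly data at the cost of readability.

```typescript
// Parallel typed arrays instead of per-shape objects: one contiguous
// block of memory per field, no heap object per shape.
const CIRCLE = 0;
const RECT = 1;

const kinds = new Int32Array([CIRCLE, RECT, CIRCLE]);
const dimA = new Float64Array([2, 3, 1]); // radius, or width for rects
const dimB = new Float64Array([0, 4, 0]); // unused, or height for rects

function totalArea(): number {
  let sum = 0;
  for (let i = 0; i < kinds.length; i++) {
    // A single predictable branch in a tight loop; no virtual
    // dispatch, no pointer chasing.
    sum +=
      kinds[i] === CIRCLE
        ? Math.PI * dimA[i] * dimA[i]
        : dimA[i] * dimB[i];
  }
  return sum;
}
```

It computes the same totals as a polymorphic design would, but nothing about it resembles the small-method, high-indirection style Clean Code recommends.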
PS: I heard a podcast interview with a dude calling himself "Uncle Bob" being interviewed about "clean code" by an Indian interviewer with a thick Hindi accent (https://youtu.be/SNdzafT0wbs). It was truly one of the most hilarious things I have ever listened to, and I grew up listening to Dr. Demento. (At 19:49 I know what "Uncle Bob" is thinking - all his peers do. It's a slow build throughout the hour, and he finally loses it when the dev-el-op-er presses him with his brutally honest but completely F'd-up view of the art of software development.)
https://youtu.be/8Z3N3KquVJY
That's me, every time I visit this website ROFL
Let’s call it “Gross Coding”