The attribution to TigerBeetle should be at the top of the page with a link to the original tigerstyle, not buried at the bottom. Right now it reads like official TigerBeetle content until you scroll down, which isn't fair to either you or the original team.
The author graciously gifted the domain to us and we’re literally days away from launching original TigerStyle here.
I was enjoying what I was reading until the "Limit function length" part which made me jolt out of my chair.
This is a common misconception.
> Limit function length: Keep functions concise, ideally under 70 lines. Shorter functions are easier to understand, test, and debug. They promote single responsibility, where each function does one thing well, leading to a more modular and maintainable codebase.

Say, you have a process that is single threaded and does a lot of stuff that has to happen step by step.

New dev comes in and starts splitting everything it does into 12 functions, because _a function should do one thing!_ Even better, they start putting stuff in various files because the files are getting too long.
Now you have 12 functions, scattered over multiple packages, and the order of things is all confused, you have to debug through to see where it goes. They're used exactly once, and they're only used as part of a long process. You've just increased the cognitive load of dealing with your product by a factor of 12. It's downright malignant.
Code should be split so that state is isolated and business processes (intellectual property) are also self-contained and testable. But don't buy into this "70 lines" rule. It makes no sense. 70 lines of Python isn't the same as 70 lines of C, for starters. If code is sequential, always running in that order, and reads like a long script, that's because it is!
Focus on separating pure code from stateful code, that's the key to large maintainable software! And choose composability over inheritance. These things weren't clear to me in my first 10 years, but after 30 years, those are the conclusions I've reached. I hope other old-timers can chime in on this.
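A minimal sketch of that separation in Python (all names hypothetical): the pricing rule is a pure function you can unit test with plain asserts, while a thin stateful shell owns the I/O.

    # Pure core: no I/O, no globals; same inputs always give the same output.
    def apply_discount(subtotal: float, loyalty_years: int) -> float:
        """Return the discounted total. Pure, so trivially unit-testable."""
        rate = min(0.05 * loyalty_years, 0.25)  # cap the discount at 25%
        return round(subtotal * (1.0 - rate), 2)

    # Stateful shell: reads and writes state, delegates the actual logic.
    def charge_customer(db, customer_id: str) -> None:
        customer = db.load_customer(customer_id)           # side effect: read
        total = apply_discount(customer.subtotal, customer.loyalty_years)
        db.record_charge(customer_id, total)               # side effect: write

    # The pure part needs no mocks or fixtures to test:
    assert apply_discount(100.0, 2) == 90.0
    assert apply_discount(100.0, 10) == 75.0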
The length of functions in terms of line count has absolutely nothing to do with "a more modular and maintainable codebase", as explained in the manifesto.
Just like "I committed 3,000 lines of code yesterday" has nothing to do with productivity. And a red car doesn't go faster.
“Ideally under 70 lines” is not “always under 70 lines under pain of death”.
It’s a guideline. There are exceptions. Most randomly-selected 100-line functions in the wild would probably benefit from being four 25-line functions. But many wouldn’t. Maturity is knowing when the guideline doesn’t apply. But if you find yourself constantly writing a lot of long functions, it’s a good signal something is off.
Sure, language matters. Domain matters too. Pick a number other than 70 if you’re using a verbose language like golang. Pick a smaller number if you’re using something more concise.
People need to stop freaking out over reasonable, well-intentioned guidelines as if they’re inviolable rules. 150 is way too many for almost all functions in mainstream languages. 20 would need to be violated way too often to be a useful rule of thumb.
As with all guidelines, some people will turn it into a hard rule, and build a linter to enforce it. Then they will split long functions into shorter ones, but with a lot of arguments. And then their other linter that limits argument count will kick in.
And someone else will use the idea that this is a misconception to justify putting hundreds of lines in one function.
Some junior dev will then come along with his specific user story and, not wanting to tear the whole thing up, will insert his couple of lines. Repeat over a 10 year lifecycle and it becomes completely unmanageable.
I remember trying to tease apart functions hundreds of lines long in a language which didn't have the IDE support we have these days. It was always painful.
Even the most script-like functions I've ever worked with benefit from comments like:

    # Validate that the payment was successfully sent.
These subheadings can just as easily become functions. If you limit the context in each function it becomes a genuine improvement.
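For instance, a minimal before/after sketch in Python (hypothetical names): the subheading comment becomes the name of a small function with its own narrow context.

    # Before: one long script-like function with subheading comments.
    def process_order(order):
        # ... build the payment request ...
        # Validate that the payment was successfully sent.
        if order.payment is None or order.payment.status != "sent":
            raise ValueError("payment was not sent")
        # ... ship the order ...

    # After: the subheading is now a named, independently testable step.
    def validate_payment_sent(payment):
        if payment is None or payment.status != "sent":
            raise ValueError("payment was not sent")

    def process_order(order):
        # ... build the payment request ...
        validate_payment_sent(order.payment)
        # ... ship the order ...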
Sounds like an opportunity to guide that person.

Good luck arguing with stubborn devs who know it all.
> It’s a guideline ... Pick a number
It's using a parameter of secondary importance as a primary one, so it's wrong with any number. The comment even has a helpful analogy to LOC counts. People need to stop freaking out over reasonable, well-intentioned criticism of guidelines as if it were arguing something else, like a misunderstanding of how strict the rules are.
This is a balancing act between conflicting requirements. It is understandable that you don't want to jump back and forth between countless small subfunctions in order to meticulously trace a computation. But conceptually, the overall process still breaks down into subprocesses. Wouldn't it make sense to move these sub-processes into separate functions and name them accordingly? I have a colleague who has produced code blocks that are 6000 lines long. It is then almost impossible to get a quick overview of what the code actually does. So why not increase high-level readability by making the conceptual structure visible in this way?
A ReverseList function, for example, is useful not only because it can be used in many different places, but also because the same code would be more disruptive than helpful for understanding the overall process if it were inline. Of course, I understand that code does not always break down into such neat semantic building blocks.
> Focus on separating pure code from stateful code, that's the key to large maintainable software! And choose composability over inheritance.
100%!
> Say, you have a process that is single threaded and does a lot of stuff that has to happen step by step.

> New dev comes in and starts splitting everything it does into 12 functions, because _a function should do one thing!_
I would almost certainly split it up, not because "a function should only do one thing" but because invariably you get a run of several steps that can be chunked into one logical operation, and replacing those steps with the descriptive name reduces the cognitive load of reading and maintaining the original function.
Just chiming in here to say, absolutely you should keep functions small and doing one thing. Any junior reading this should go and read the pragmatic programmer.
Of course a function can be refactored in a wrongheaded way as you’ve suggested, but that’s true of any coding - there is taste.
The ideal of refactoring such a function you describe would be to make it more readable, not less. The whole point of modules is so you don’t have to hold in your head the detail they contain.
Long functions are in general a very bad idea. They don’t fit on a single screen, so to understand them you end up scrolling up and down. It’s hard to follow the state, because more things happen and there is more state as the function needs more parameters and intermediate variables. They’re far more likely to lead to complecting (see Rich Hickey) and intertwining different processes. Most importantly, for an inexperienced dev it increases the chance of a big ball of mud, eg a huge switch statement with inline code rather than a series of higher level abstractions that can be considered in isolation.
I don’t think years worked is an indicator of anything, but I’ve been coding for nearly 40 years FWIW.
> They don’t fit on a single screen, so to understand them you end up scrolling up and down.
Split them up into multiple functions and there is more scrolling, and now also jumping around because the functions could be anywhere.
> It’s hard to follow the state, because more things happen
It's easier to follow state, because state is encapsulated in one function, not constantly passed around to multiple.
> a huge switch statement with inline code
Huge switch statements are a common reason for large functions. Python has this architecture and largely won because the interpreter is surprisingly easy to understand and extend.
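As a sketch of that shape (a toy stack-machine loop in Python, loosely echoing CPython's dispatch loop in ceval.c): the function is long because the opcode switch is long, but each case is trivial and adding a new opcode is a local, mechanical change.

    def run(program):
        """Toy bytecode interpreter: one long dispatch loop over opcodes."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack[-1])
            else:
                raise ValueError(f"unknown opcode {op!r}")
            # ...a real interpreter continues like this for dozens of opcodes.

    run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
         ("PUSH", 4), ("MUL", None), ("PRINT", None)])  # prints 20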
What would be a good example of the kinds of things a 100 line function would be doing?
I don't see that in my world, so I'm naively trying to inline functions in codebases I'm familiar with, and not really valuing the result I can dream up.

For one, my tests would be quite annoying: large, and with too much setup for my taste. But I don't think I'd like to have to scroll a function, especially if I had to make changes to the start and end of the function in one commit.

I'm curious about the kinds of "long script" flavoured procedures: what are they doing, typically?

I ask because some of the other stuff you mentioned I really strongly agree with, like "Focus on separating pure code from stateful code" - this is such an undervalued concept, and it's an absolute game changer for building robust software. Can I extract a pure function for this and separately have a function to coordinate side effects? But that's incompatible with overly long functions; those side-effectful functions would be so hard to test.
Containing the giant switch statement in a byte code interpreter or a tokeniser.
I think that you are describing an ideal scenario that does not reflect what I see in reality. In the "enterprise applications" that I work on, long functions evolve poorly. Meaning, even if a long function follows the ideal of "single thread, step by step" when it's first written, when devs add new code, they will typically add their next 5 lines to the same function because it's already there. Then after 5 years you have a monster.
Agree that "splitting for splittings' sake" (only to stay below an arbitrary line count) does indeed not make sense.
On the other hand I often see functions like you describe - something has to be executed step-by-step (and the functionality is only used there) - where I _whish_ it was split up into separate functions, so we could have meaningful tests for each step, not only for the "whole thing".
I have been programming professionally for 17 years and I think this guideline is fine. I have difficulty imagining a function of 70 lines that would not be better off being split into multiple functions. It is true that longer functions can be allowed when a function is just a list of stuff than when it does multiple different things, but 70 lines is really pushing that.
And here's John Carmack on the subject of 1000s of lines of code in a single function: http://number-none.com/blow/blog/programming/2014/09/26/carm...
Funny, this is the assertion in the list I would most agree with as a general design principle to apply as thoroughly as possible, with tightened variable scope in equal position. Though no general principle should be followed blindly, of course.

It's not the function length per se. A function that is 1000 lines of mere basic assignments, or one holding a single giant switch, can sometimes be an apt option, with careful consideration of the tradeoffs behind the design. The number of lines doesn't tell you much about a function's complexity, or about the cognitive load it will take to grasp what it does, though it can be a first proxy metric.

But most of the time, giant functions found in the wild grew organically, with 5 levels of intertwined control flow moving down and up, accumulating mutable variables instead of consts without consideration of scope span. In that case, every time a change is needed, the cognitive load to grasp everything that has to be considered is extremely high. All the more so as such a giant function most likely won't have a companion test suite, because good engineering practices tend to be followed, or neglected, together.
> They're used exactly once
To me, that's key here. That things are scattered over multiple files is a minor issue. Any competent IDE can more or less hide that and smoothen the experience. But if you have factored some code into a function, suddenly other places may call it. You have inadvertently created an API, and any change you make needs to double-check that other callers either don't exist or don't have their assumptions suddenly violated. That's no issue if the code is right there. No other users, and the API is the direct context of the lines of code right around it. (Yes, you can limit visibility to other modules etc., but that doesn't fully solve the issue of higher cognitive load.)
That is an important consideration which you should weigh against the clarity gained by breaking up your code in units of logic.
That is precisely the reason why, for a function, the scope of its container should be constrained. A class with a Single Responsibility (SRP) would not suffer from several private methods. Breaking up your methods in a God Class brings both a decrease and an increase in mental load.
Also, in functional programming languages one can nest functions, and OOP languages oftentimes can nest classes.
Haskell does function hiding the most elegantly imo, by means of `where`:

    foo x =
      if odd x
        then short x
        else reallyLongFunc x
      where
        short y = y * 2
        reallyLongFunc z = z + 2
What has always baffled me is how CS uses the word "safety" where all other industries use "robustness".
A robust design is one that is not only correct, but also ensures functionality even when boundary conditions deviate from the ideal. It's a mix of stability, predictability and fault tolerance. Probably "reliable" can be used as a synonym.
At the same time, in all industries except CS "safety" has a specific meaning of not causing injuries to the user.
In the design of a drill, for example, if the motor is guaranteed to spin at the intended rpm independently of orientation, temperature and state of charge of the battery, that's a robust design. You'll hear the word "safe" only if it has two triggers to ensure both hands are on the handles during operation.
Without any context safety _can_ mean a lot of things, but it's usually used as a property of a system and used alongside liveness.
Basically, safety is "bad things won't happen" and liveness is "good things eventually happen".
This is almost always the way safety is used in a CS context.
You use safety more in relation to the correctness aspects of algorithms. Some safety properties you can actually prove. Robustness is more about dealing with things sometimes being incorrect regardless of the safety mechanisms. So a try/catch around something that isn't actually expected to fail makes you robust against the scenario where it does fail. But you'd use e.g. a type system to prevent classes of failures as a safety mechanism.
It's a very soft distinction, I agree. And possibly one that translates less well to the physical world where wear and tear are a big factor for robustness. You can't prove an engine to be safe after thousands of hours. But you can make it robust against a lot of expected stuff it will encounter over those hours. Safety features tend to be more about protecting people than the equipment.
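To make that concrete, a toy sketch in Python (hypothetical example): the validating parser is the safety mechanism, ruling a class of bad inputs out up front, while the try/except in the caller is the robustness mechanism, keeping things running when a failure happens anyway.

    def parse_amount(raw: str) -> int:
        """Safety: reject a whole class of bad inputs before they propagate."""
        value = int(raw)  # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("amount must be non-negative")
        return value

    def handle_request(raw: str) -> int:
        """Robustness: the caller survives even when parsing fails."""
        try:
            return parse_amount(raw)
        except ValueError:
            return 0  # degrade gracefully instead of crashing the service

    print(handle_request("42"))    # 42
    print(handle_request("-7"))    # 0
    print(handle_request("oops"))  # 0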
> What has always baffled me is how CS uses the word "safety" where all other industries use "robustness".
FWIW "safety factors" are an important part of all kinds of engineering. The term is overloaded and more elusive in CS because of the protean qualities of algorithmic constructs, but that's another discussion.
> At the same time, in all industries except CS "safety" has a specific meaning of not causing injuries to the user.
That makes sense, as CS is transversal to industries.

The same practices that can contribute to literally saving lives in one domain will just avoid a minor irritating feeling in another.
I think in things like philosophy/debate/etc it's common to talk about "safe assumptions". I always figured that's the metaphor being leaned on here.
I agree it's confusing when you step outside of CS world though.
> Limit line lengths: Keep lines within a reasonable length (e.g., 100 characters) to ensure readability. This prevents horizontal scrolling and helps maintain an accessible code layout.
Do you not use word wrap? The downside of this rule is that vertical scrolling is increased (yes, vertical scrolling is easier, but with word wrap you can make that decision locally) and accessibility is reduced (monitors are wide, not tall). This is especially an issue when the style is applied to comments, so you can't see all the code on a single screen due to multiple lines of comments in that long, formal, grammatically correct style.
Similarly,

> Limit function length: Keep functions concise, ideally under 70 lines.
> and move non-branching logic to helper functions.
This breaks the accessibility of the logic: instead of linearly reading what's going on, you have to jump around (though popups can help a bit). Whereas you can use block collapse to hide those helper blocks without losing their locality, and then expand only the one you need.
Pretty good list, but a hidden assumption is that the reader works in an imperative style. For instance, recursion is the bread and butter of functional and logical programming and is just fine.
The most important advice one can give to programmers is to:

1. Know your problem domain.
2. Think excessively deeply about a conceptual model that captures the relevant aspects of your problem domain.
3. Be anal about naming your concepts. Thinking about naming oftentimes feeds back to (1) and (2), forming a loop.
4. Use a language and type system that is powerful enough to implement the previous points.
Zero technical debt certainly is... ambitious. Sure, if we knew _what_ to build the first time around this would be possible. From my experience, the majority of technical debt is sourced from product requirement changes coupled with tight deadlines. I think even the most ardent follower of Tiger Style is going to find this nigh impossible.
I would even say that from a project management perspective, zero technical debt is undesirable. It means you have invested resources into perfecting something that, almost by definition, could have waited a while, instead of improving some more important metric such as user experience. (I do understand tech debt makes it harder to work with the codebase, impacting all metrics, I just don’t think zero tech debt is a good target.)
> perfecting something that, almost by definition, could have waited a while
"No technical debt" is not the same thing as "perfection". Good enough doesn't mean perfect.
Would it be ok to submit an essay with only 90% of the underlined spelling mistakes fixed? Do you paint your outdoor table but leave the underside for later?
Do it once, do it right. That doesn’t mean perfect, it means not cutting corners.
Would you keep fixing the underlined spelling mistakes on your “watch out for holes in the pavement” sign while people are already walking there?
There are contexts where quick and dirty and (hopefully) come back later are warranted. Far more often it is just an excuse for shoddy work. You used the word “perfection” as the contrast to “technical debt”. Granted, technical debt is not a well defined term, but I am simply highlighting that “free from technical debt” in no way implies anything like perfect. It just implies well made.
The original piece is a much better read.
https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
> Do it right the first time: Take the time to design and implement solutions correctly from the start.
Doing good design is of course important, but on the other hand software design is often iterative, because of unknown unknowns. Sometimes it can be better to create quick prototype(s) to see which direction is best before actually "doing it right", instead of spending effort designing something that in the end won't be built.
> Avoid recursion if possible to keep execution bounded and predictable, preventing stack overflows and uncontrolled resource use.
In languages with TCO (e.g. Haskell, Scheme, OCaml, etc.) the compiler can rewrite to a loop.
Some algorithms are conceptually recursive, and even though you can rewrite them, the iterative version would be unreadable: backtracking solvers, parsing trees, quicksort partition and subproblems, divide-and-conquer, tree manipulation, compilers, etc.
Presumably why it says "Avoid recursion if possible".
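For illustration, a small Python sketch of that rewrite (Python notably does not perform TCO, so the conversion must be done by hand): the recursive version grows the stack with the input, while the loop runs in constant stack space, bounded and predictable.

    # Conceptually recursive: clean, but each element adds a stack frame.
    def total_recursive(values):
        if not values:
            return 0
        return values[0] + total_recursive(values[1:])

    # Same computation as a loop: constant stack depth, bounded execution.
    def total_iterative(values):
        acc = 0
        for v in values:
            acc += v
        return acc

    big = list(range(100_000))
    print(total_iterative(big))  # 4999950000
    # total_recursive(big) would hit Python's default recursion limit (~1000)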
Can you have a coding philosophy that ignores the time or cost taken to design and write code? Or a coding philosophy that doesn't factor in uncertainty and change?
If you're risking money and time, can you really justify this?
- 'writing code that works in all situations'
- 'commitment to zero technical debt'
- 'design for performance early'
As a whole, this is not just idealist, it's privileged.
Their product is a high-throughput database for financial transactions, so they might have different design requirements than the things you work on.
I would argue that these:
- 'commitment to zero technical debt'
- 'design for performance early'
Will save you time and cost in designing, even in the relatively near term of a few months when you have to add new features etc.
There are obviously extremes of "get something out the door fast and broken, then maybe neaten it up later" vs "refactor the entire codebase any time you think something could be better", but I've seen more projects hit a wall from leaning too far toward the first than the second.

Either way, I definitely wouldn't call it "privileged", as if it weren't a practical engineering choice. That seems to just frame things in a way where you're already assuming early design and a commitment to refactoring is a bad idea.
You forgot “get it right first time”, which goes against the basic startup mode of being early to the market or die.

For some companies, trying to get it right the first time may make sense, but that can easily lead to never shipping anything.
I would not hire a monk of TigerStyle. We'd get nothing done! This amount of coding perfection is best for hobby projects without deadlines.
I don't really see anything in it that's particularly difficult or counter-productive. Or, to be honest, anything that isn't just plain good coding practice. All suitably given as guidelines, not hard and fast rules.

The real joy of having coding standards is that they set a good baseline when training junior programmers. These are the minimum things you need to know about good coding practice before we start training you up to be a real programmer.
If you are anything other than a junior programmer, and have a problem with it, I would not hire you.
This seems pretty knee-jerk. I do most of this and have delivered a hell of a lot of software in my life. Many projects are still running, unmodified, in production, at companies I’ve long since left.
You can get a surprising amount done when you aren’t spending 90% of your time fighting fires and playing whack-a-mole with bugs.
Well, I'm sure you're well aware of the perils of premature optimization and know how to deliver a product within a reasonable timeframe. TigerStyle seems to me not to have been developed through the lens of producing value for a company via software, but rather of having a nice time as a developer (see: the third axiom).
I'm not saying the principles themselves are poor, but I don't think they're suitable for a commercial environment.
TigerStyle is developed in the context of TigerBeetle, who seem to be a successful company getting a lot done.
I had the same association, but interestingly this version appears to be a "remix" of TigerBeetle's style guide by an unrelated individual. At a glance, there is a lot of crossover, but some changes as well.
I think the point is well made though. When you're building something like a transactions database, the margin for error is rather low.
Then I'm curious what their principles on deadlines are. I don't see how they align with their coding style guide. Taking TigerStyle at face value does not encourage delivery. They're practically saying "take the time you need to polish your thing to perfection, zero technical debt".
But ofc, I understand styleguides are... well.. guides. Not law.
Why do you think developer enjoyment is orthogonal to productivity and delivery?
Which ones, specifically?
Which parts of it seem objectionable?
Related: https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
This is the kind of development that one needs for safety critical applications. E.g., nuclear power plants or airplane control software. I don't think it is economically feasible for less critical software. It presumes a great degree of stability in requirements which is necessary for such applications.
> Allocate all necessary memory during startup and avoid dynamic memory allocation after initialization.
Why?
I think they are hinting at the idea that a long running process on a dedicated server should allocate all necessary memory during initialization. That way you never accidentally run out of memory during high loads, a time when you really don't want your service to go down.
You would need so much more physical memory with this approach. For example, you may want to allocate a lot of memory for the browser, but then close the tab. 2 seconds later you want to use it for a compiler. And then you switch to a code editor.
You can run out of memory and trigger a crash.
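A minimal sketch of the up-front-allocation idea in Python (hypothetical pool class; TigerBeetle itself does this with static allocation in Zig): every buffer is created at startup, so a load spike surfaces as explicit backpressure rather than a mid-flight allocation failure.

    class RequestPool:
        """All slots are allocated at startup; none are created afterwards."""
        def __init__(self, capacity: int, slot_size: int):
            self._free = [bytearray(slot_size) for _ in range(capacity)]

        def acquire(self) -> bytearray:
            if not self._free:
                # Explicit backpressure instead of an OOM kill under load.
                raise RuntimeError("pool exhausted: shed load or retry later")
            return self._free.pop()

        def release(self, slot: bytearray) -> None:
            self._free.append(slot)

    pool = RequestPool(capacity=1024, slot_size=4096)  # done once, at startup
    buf = pool.acquire()
    # ... handle the request using buf ...
    pool.release(buf)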
Things an engineer early in his career would write.
Or things a senior engineer would write late in their career to make it easier to train junior programmers. <shrugs>
Limit function length: Keep functions concise, ideally under 70 lines. Shorter functions are easier to understand, test, and debug.
The usual BS... yes, shorter functions are easier to understand by themselves but what matters, especially when debugging, is how the whole system works.
Edit: care to refute? Several decades of experience has shown me what happens. I'm surprised this crap is still being peddled.
This one, like some others in this style guide, can also be found in Clean Code. Not sure why you feel the need to call it "BS". Nobody is saying that your 75 line function is bad.
It's a reasonable guideline. Juniors won't do this automagically.
The "Clean Code" book is famous for making bad suggestions and producing unreadable code with too many functions just for the sake of following its rules.
Yes and it was BS in Clean Code too, not a fan.
Don't get me wrong, I often apply it myself and refactor code into smaller functions. But readability, understandability and ease of maintenance come first.
Especially juniors will apply such rules overzealously, bending over backwards to stay within an arbitrary limit while at the same time not really understanding why.
Frustratingly, their lack of experience makes it impossible to discuss such dogmatic rules in any kind of nuanced way, while they energetically push for stricter linter rules etc.
I've tried, and one even argued there's never ever a reason to go over N lines because their book is "best-practice" and said so. And you should not deviate from "best-practice" so that's it. I'm not making this up!
That said, I'm always open to discuss the pros and cons for specific cases, and I do agree the default should be to lean towards smaller functions in 90% of cases.
It’s a lot easier to understand entire systems when they’re composed of small, pure, straightforward parts.
Yeah, you can’t build a building unless you’re certain that every brick, rebar, and batch of cement is solid and meets a quality bar. Of course you need architectural schematics to make sense of the broader picture, but you need quality control on the pieces being put in place too.
The low level stuff is also important.
I like this; to me it reads like a collection of fairly obvious best practices (even if there might be practical reasons to deviate when shipping fast etc), so I'm surprised to see so many enraged comments.
Any recommendations for other coding philosophies or "first principle" guides? I know of "extreme programming" but not much else.
A Philosophy of Software Design by John Ousterhout is often recommended, and is very good.
There was an interesting debate between John and Uncle Bob on their differences in style recently[1], with a related HN discussion[2].
[1] https://github.com/johnousterhout/aposd-vs-clean-code/blob/m...
[2] https://news.ycombinator.com/item?id=43166362
this smells like Clean Code BS. no real substance either, I could have made ChatGPT generate this article