I agree with the general message of this article but I want to rant a bit on what exactly is "boring architecture". I haven't quite managed to put this into succinct words, but in my experience building the innards of both backends and frontends for a couple of decades, I found that there's a type of "boring architecture" or "conventional wisdom" that can actually make you build a worse product, slower.
There's nothing odd about having a horizontally scaling (stateless) API server that synchronizes with a database. And then when you need longer running processes, maybe you add in a task queue. But then you want status updates so maybe you set up a pub/sub solution. And then of course your client codebase has to merge all this information into current state. I won't say this is a bad architecture. But if you're a startup with less than 1,000 active users and a handful of engineers splitting their time between working on these systems and your product, I would say to you: how about a single fat server and a transactional database? Maybe I'm old school, but it removes so many modes of failure while increasing performance (except in the rare case that a region goes down) and yet I think people don't tend to look at this option early on if they've been exposed to too much cloudification literature.
And don't get me started on microservices.
* Disclaimer: Every business sooner or later will need to create more distributed systems as they serve more regions, promise more availability, and just generally speaking more users mean more chances for things to go wrong in the infrastructure. But I've found myself several times helping startups revert prematurely distributed architectures to get their current product failing less and running faster, and I would argue it'd be nicer to start stupid simple and expand later.
I've been building systems for two decades as well and two lessons:
1) Never act like you're smarter than other developers. This is an industry with a lot of incredibly smart people and with a thousand different ways to solve every problem. There is no right way to do anything. Just the best effort given the cards you are dealt.
2) Anyone who has been involved in re-platforming knows how traumatic, destructive and error-prone it is. Statistically most are a failure. That is why so many developers don't gravitate towards your approach of building a basic architecture and then building something scalable later. They try to do it all in one go. And especially for startups where everyone goes into it assuming they will be building the next Google or Netflix.
Good points, and I agree. I think what bothers me is that a lot of the advice you might read out there is biased towards what you should build if you already have hundreds of employees and millions of users. Sprinkle in a bit of incentives from cloud providers, and you end up with a world where it only seems reasonable to start out with a Kubernetes cluster or multiple distributed communication systems from the get-go. I do think a lot of developers would intuitively start out with a much simpler system, but it almost looks wrong if you do. But it's not wrong: you can't predict the future, so if you try to set up your infrastructure for future success "just in case", you'll still have to re-platform it, as you say.
I should add that some form of re-platforming is pretty much inevitable at some point in the lifetime of a business no matter which approach you take. Because infrastructure is informed by the structure of your organization as much as the needs of your product. If your number of engineers increases by a couple of orders of magnitude, it's likely any infrastructure you set up will need to reshape to account for that.
>Every business sooner or later will need to create more distributed systems
In reality, most businesses won't survive that long, and even those that do can go a long way with a single deployment of a higher-latency service. A well-designed monolith will take you to tens of millions of DAU from across the world. Worst case, you'll add 200-300ms of latency to each call, but if you look around, the most popular and successful enterprise software has more latency than that and people are "happy" using it.
Premature optimization is the root of all evil
> As engineers, we are, by nature, attracted to novel solutions.
I'm not, I'm attracted to proven battle-hardened solutions that have stood the test of time.
I won't dismiss new frameworks and languages and I might try them out in my spare time, but I approach them with skepticism until enough time has passed.
And I'm sure I'm not the only one.
I'm attracted to doing more with less. I find that people are really bad at factoring in all costs, though, and, in particular, the maintenance cost. Battle-hardened solutions are generally easier to maintain and easier to find people who can maintain them. Shiny new frameworks might reduce some immediate costs, but it might not be a good overall solution.
This failure to factor in costs is everywhere. Like people will talk about how fast their car is. Yes, but how much energy is it using? Going twice as fast but using four times as much energy isn't impressive, nor are the enormous environmental and social costs. As a society we are constantly being conditioned to do this, of course. It's basically the whole point of marketing.
Except when doing personal projects for fun! Indeed, this is imho the number one reason (tied with a couple of others e.g. mental health) to do personal coding projects in the first place: it is a safe space to give those shiny new tech toys a go and see how they really perform (sometimes it is: very well! In those cases you know you can then use them for business).
Dan McKinley's original, epic presentation on this subject (now nearly 10 years old?) can be found at https://boringtechnology.club.
It's up there with https://xyproblem.info in terms of URLs I know off by heart.
Depends on what you're trying to achieve.
For Systems of Innovation, well, the whole point is to demonstrate how new technologies may positively impact your application portfolio. The purpose is to achieve organizational experience with the technology. These technologies aren't picked willy-nilly - they're serious contenders for developing applications moving forward.
For Systems of Distinction, the point is that the features being developed are unique among your competitors and/or provide you a key competitive advantage in your market. Failure represents a high business risk. You'd favor existing, well-known architectures, but you may believe newer technologies could yield much better results. The typical strategy is to take the riskiest part of the implementation and develop it using the newer technology. If that goes well, then proceed with the newer technology. Otherwise stick with the tried-and-true.
For all other projects outside of the above, stick with your current technology stack. The strategy is to get as much delivered with the least amount of implementation risk and having the fewest operational surprises and business disruptions.
Bottom line - technological change needs to be carefully managed in an organization.
>> As engineers, we are, by nature, attracted to novel solutions
If the context is professional work, then no, I'm afraid. I'm always looking for the least surprising, well-understood way. After all, I'm working on a team.
But for personal work, well why not?
This topic is like catnip on here.
The hilarious part is that usually it's about not using Kubernetes, Microservices, NoSQL etc. even though they've all been around for over a decade now. They are boring, stable and well understood by any definition.
The general thought is right, but I've read this a hundred times last year alone and usually in much better presentation.