At least in my experience, when I've heard that the "mainframe is going to die", it's specifically referencing the IBM ecosystem - COBOL and friends.
To me it looks like the author is grasping at straws when saying "well, a server rack is just like a mainframe, so the mainframe is not dying!".
To me it's the opposite: yes, a server rack is just like a mainframe. That's why the mainframe is dying - a bunch of servers can do the same work much more cheaply.
A mainframe is also a rack server now: newer IBM mainframes are available in standard 19" rack form factors.
It seems to be more about the data tables, data models and use cases, rather than the hardware.
Maybe we can build a CSV file that only fits into a 19-inch rack. Someone would buy it just to own their own vendor lock-in story…
Well, in a rather simplistic world view the argument boils down to the following, which does seem to happen:
(A) There are two dominating uses for large computers:
(Use 1) HPC, a.k.a. floating-point numbers: Fortran, LLMs, CERN, NASA, GPGPU, numerical analysis, etc. These examples all fall into the same bucket at this (coarse) level of granularity.
(Use 2) is accounting. Yes, accounting: append-only logs, git, Merkle trees, PostgreSQL MVCC (vacuum is necessary because MVCC boils down to an append-only log that is cleaned up after the fact; see also Reddit's two giant key-value tables), CRDTs, credit cards, insurance, and accounting ledgers.
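As a minimal sketch of the structure that git, Merkle trees, and accounting ledgers all share (the function names here are illustrative, not any real system's API): each entry commits to the hash of its predecessor, so history can only be appended to, never silently rewritten.

```python
import hashlib

def append_entry(log, payload):
    """Append a payload to a hash-chained log; each entry commits to its predecessor."""
    prev_hash = log[-1][0] if log else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append((entry_hash, payload))
    return log

def verify(log):
    """Recompute the chain; any retroactive edit breaks every later hash."""
    prev_hash = "0" * 64
    for entry_hash, payload in log:
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry_hash != expected:
            return False
        prev_hash = entry_hash
    return True

ledger = []
append_entry(ledger, "alice pays bob 10")
append_entry(ledger, "bob pays carol 3")
assert verify(ledger)

# Tampering with history is detectable, which is the whole point of a ledger.
ledger[0] = (ledger[0][0], "alice pays bob 1000")
assert not verify(ledger)
```

Git commits, Merkle tree nodes, and blockchain blocks are all elaborations of this one idea: the identity of the newest entry pins down the entire history behind it.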
(Use 2) is dominated by the CAP theorem and by favoring consistency over availability during network partitions, because such a system has to provide and enforce a central, coherent view of the world. Even Bitcoin cannot fork too hard, or account balances stop being meaningful. (Philosophical nitpicking: how does this relate to general relativity and differential geometry? Can such a view only ever be "local" in the sense of general relativity?)
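The consistency-over-availability choice can be sketched in a few lines (a toy quorum check, not any real database's API): when a partition leaves a node unable to reach a majority of replicas, a CP system rejects the write rather than risk two divergent views of an account balance.

```python
def quorum_write(replicas_reachable, total_replicas, value, store):
    """A CP system accepts a write only with a majority quorum,
    trading availability for one coherent view of the world (CAP theorem)."""
    quorum = total_replicas // 2 + 1
    if replicas_reachable < quorum:
        raise RuntimeError("partition: no quorum, rejecting write to stay consistent")
    store.append(value)
    return value

log = []
quorum_write(3, 5, "debit alice 10", log)  # 3 of 5 replicas reachable: accepted
```

With only 2 of 5 replicas reachable, the same call raises instead of writing; the system goes unavailable on the minority side, which is exactly the trade-off the CAP theorem forces for accounting-style workloads.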
This is where mainframe-style hardware redundancy always enters the picture (or your system sucks). Examples: (i) RAIM (RAID for RAM), (ii) basically ZFS, and (iii) running VMs/Docker containers concurrently in two data centers in the same availability zone (old style: two mainframes within 50 miles and a "coupling facility").
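The redundancy pattern behind RAIM and ZFS can be sketched like so (a toy model, not the real on-disk format): every block is stored with a checksum and mirrored, so a read can detect silent corruption on one copy and fall back to a good one.

```python
import hashlib

def write_block(data):
    """Store data together with its checksum, mirrored on two 'devices' (ZFS-style)."""
    checksum = hashlib.sha256(data).hexdigest()
    return [{"data": data, "sum": checksum}, {"data": data, "sum": checksum}]

def read_block(mirrors):
    """Return the first copy whose checksum verifies; self-healing the bad copy is omitted."""
    for copy in mirrors:
        if hashlib.sha256(copy["data"]).hexdigest() == copy["sum"]:
            return copy["data"]
    raise IOError("all copies corrupt")

mirrors = write_block(b"ledger page 42")
mirrors[0]["data"] = b"bit rot!"              # silent corruption on one device
assert read_block(mirrors) == b"ledger page 42"
```

RAIM does the same for RAM with ECC across memory channels, and the two-data-center setup does it one level up, for whole machines.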
(B) All other uses, like playing Minecraft or Factorio, smartphones, game consoles, and running MS Excel locally, are a rounding error in the grand scheme of things.
Note: even Oxide Computer seems to be going down this route. IBM ended up there because everybody who can pay for the problem to be fixed makes you fix your hardware and processes. Period.
In the end, all processes and hardware end up locked down and redundant/HA/failure-resistant/anti-fragile. The result is a mainframe in all but name, isomorphic to one in every conceivable aspect. This is Hyrum's law applied to our physical and geometric environment. The other systems die out.
Even the Linux user-space ABI, the JVM, SQLite, cURL, and (partially) JavaScript are relentlessly focused on backwards compatibility. Everything else breaks and is abandoned. Effectively, every filesystem that isn't at least as good as ZFS is a waste of time.
(C) https://datademythed.com/posts/3-tier_data_solution/
(D) Look at what MongoDB promised in the beginning, what they actually ended up delivering, which problems they had to solve along the way, and how much work that turned out to be.
EDIT: Added points (C) and (D).
Did you stop reading somewhere in the middle? I cannot fathom how else you could miss the point so completely.
The point is not about technological similarities at all. It's about who controls the hardware and thus ultimately has the power over its use.
"The mainframe" which the author is talking about is not characterized by COBOL, but by having huge corporations control the hardware which everyone is using in their daily lives, giving them power over everyone.
Interesting article and makes a lot of sense to me.