I'm not sold that the evidence is there to show inheritance is a good idea - it basically says that constructors, data storage and interfaces need to be intertwined. That isn't a very powerful abstraction, because they don't need to be, and there's no obvious advantage over picking up those concepts separately as required. And inheritance naturally suggests grouping interfaces into a tree, which seems of little value because in practice a tree probably doesn't represent the fundamental truth of things. Weird edge cases like HTTP over non-TCP protocols or rendering without screens start throwing spanners into a tree of assumptions that never needed to be made, and pull the truth-in-code away from the truth-in-world.
All that makes a lot of sense if inheritance was introduced as a performance hack rather than a thoughtfully designed concept.
Yeah, yeah... everyone likes to go on and on about how inheritance is the root of all evil and how if you just don't use it, everything will be fine. Sorry, it won't be fine. Your software will still be a mess unless it is small and written three times by the same person who knows what they are doing.
The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves). People created AbstractFactoryFactoryBuilders not because they wanted to, but because "books" said to do stuff like that, and people were just signaling to the tribe.
So now we are all signaling to the new tribe that "inheritance is bad" even though we proudly created multiple AFFs in the past. Not very original in my opinion, since Go and Rust don't have inheritance. The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
> The bottom line is, no one ever really used inheritance that much anyway
If you think that, you have no idea how much horrible code is out there. Especially in enterprise land, where deadlines are set by people who get paid by the hour. I once worked on a Java project which had a method that called a method that called a method, and so on. Usually the calls went via some abstract interface with a single implementor, making it hard to figure out what was even being executed. But if you kept at it, there were 19 layers before the chain of methods did anything other than call the next one. There was a separate parallel path of methods, also 19 layers deep, for cleaning up. But if you followed it all the way down, it turned out the final method was empty. 19 methods + adjacent interface methods, all for a no-op.
> The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
Most people go with the crowd. But there's a reason the crowd is moving against inheritance. The reason is that inheritance is almost always a bad idea in practice. And more and more smart people talking about it are slowly moving sentiment.
Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better. Thank goodness - I've been shouting this stuff from the rooftops for 15+ years at this point.
I don't think Inheritance is always bad - sometimes it's a useful tool. But it was definitely overused, and composition and interfaces work much better for most problems.
Inheritance really shines when you want to encapsulate behaviour behind a common interface and also provide a standard implementation.
E.g.: I once wrote an RN app which talked to ~10 vacuum robots. All of these robots behaved mostly the same, but each was different in its own unique way.
E.g. 9 robots returned to their station when the command "STOP" was sent; one would just stop in place. Or some robots would rotate 90 degrees when a "LEFT" command was sent, others only 30 degrees.
We wrote a base class which exposed all the needed commands, and each robot had an inherited class which overrode the parts that needed adjustment (e.g. sending "LEFT" three times so it's also 90 degrees, or sending "MOVE TO STATION" instead of "STOP").
> I don't think Inheritance is always bad - sometimes it's a useful tool.
I can only think of one or two instances where I've really been convinced that inheritance is the right tool. The only one that springs to mind is a View hierarchy in UI libraries. But even then, I notice React (& friends) have all moved away from this approach. Modern web development usually models components as functions. (And yes, JavaScript supports many kinds of inheritance. Early versions of React even used inheritance for components. But it proved to be a worse approach.)
I've been writing a lot of Rust lately. Rust doesn't support inheritance, but it wouldn't be needed in your example. In Rust, you'd implement that by having a trait with functions (+ default behaviour), then have each robot type implement the trait. E.g.:

    trait Robot {
        fn stop(&mut self) { /* default behaviour */ }
    }

    struct BenderRobot;

    impl Robot for BenderRobot {
        // If this is missing, we fall back to the default Robot::stop above.
        fn stop(&mut self) { /* custom behaviour */ }
    }
Inheritance is not the only way to share behavior across different implementations — it's just the only way available in the traditional 1990s crop of static OOP languages like C++, Java and C#.
There are many other ways to share an implementation of a common feature:
1. Another comment already mentioned default method implementations in an interface (or a trait, since the example was in Rust). This technique is even available in Java (since Java 8), so it's as mainstream as it gets.
The main disadvantage is that you can have just one default implementation for the stop() method. With inheritance you could use hierarchies to create multiple shared implementations and choose which one your object should adopt by inheriting from it. You also cannot associate any member fields with the implementation. On the bright side, this technique still avoids all the issues with hierarchies and single and multiple inheritance.
2. Another technique is implementation delegation. This is basically just like using composition and manually forwarding all methods to the embedded implementer object, but the language has syntax sugar that does that for you. Kotlin is probably the most well-known language that supports this feature[1]. Object Pascal (at least in Delphi and Free Pascal) supports this feature as well[2].
This method is slightly more verbose than inheritance (you need to define a member and initialize it). But unlike inheritance, it doesn't require forwarding the class's constructors, so in many cases you might even end up with less boilerplate than with inheritance (e.g. if you have multiple overloaded constructors you'd otherwise need to forward).
The only real disadvantage of this method is that you need to be careful with hierarchies. For instance, if you have a Storage interface (with load() and store() methods) you can create an EncryptedStorage interface that wraps another Storage implementation and delegates to it, but not before encrypting everything it sends to the storage (and decrypting the content on load() calls). You can also create a LimitedStorage wrapper that enforces size quotas, and then combine both LimitedStorage and EncryptedStorage. Unlike traditional class hierarchies (where you'd have to implement LimitedStorage, EncryptedStorage and LimitedEncryptedStorage), you've got a lot more flexibility: you don't have to reimplement every combination of storage, and you can combine storages dynamically and freely.

But let's say you want to create ParanoidStorage, which stores two copies of every object, just to be safe. The easiest way to do that is to make ParanoidStorage.store() call wrapped.store() twice. The thing you have to keep in mind is that this doesn't work like inheritance: if you wrap your objects in the order EncryptedStorage(ParanoidStorage(LimitedStorage(mainStorage))), ParanoidStorage will call LimitedStorage.store(). This is unlike the inheritance chain EncryptedStorage <- ParanoidStorage <- LimitedStorage <- BaseStorage, where ParanoidStorage.store() would call EncryptedStorage.store(). In our case this is a good thing (we can avoid a stack overflow), but it's important to keep this difference in mind.
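To make the dispatch difference concrete, here's a minimal Rust sketch of the wrapping approach (the Storage and ParanoidStorage names come from the example above; the in-memory backing store and the key-suffix scheme are made up for illustration):

```rust
use std::collections::HashMap;

// The common interface that every storage layer implements.
trait Storage {
    fn store(&mut self, key: &str, value: Vec<u8>);
    fn load(&self, key: &str) -> Option<Vec<u8>>;
}

// A trivial backing store.
struct MemStorage(HashMap<String, Vec<u8>>);

impl Storage for MemStorage {
    fn store(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
    fn load(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

// ParanoidStorage stores two copies of everything. Crucially, its calls
// go to the layer it wraps, not to the outermost wrapper, so there is
// no risk of the recursive re-entry you'd get with virtual dispatch.
struct ParanoidStorage<S: Storage>(S);

impl<S: Storage> Storage for ParanoidStorage<S> {
    fn store(&mut self, key: &str, value: Vec<u8>) {
        self.0.store(&format!("{key}#1"), value.clone());
        self.0.store(&format!("{key}#2"), value);
    }
    fn load(&self, key: &str) -> Option<Vec<u8>> {
        self.0.load(&format!("{key}#1"))
    }
}

fn main() {
    let mut s = ParanoidStorage(MemStorage(HashMap::new()));
    s.store("config", b"data".to_vec());
    assert_eq!(s.load("config"), Some(b"data".to_vec()));
}
```

An EncryptedStorage or LimitedStorage wrapper would follow the same shape, and they compose in whatever order you nest them.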
3. Dynamic languages almost always have at least one mechanism that you can use to automatically implement delegation. For instance, Python developers can use metaclasses or __getattr__[3], while Ruby developers can use method_missing or Forwardable[4].
4. Some languages (most famously Ruby[5]) have the concept of mixins, which let you include code from other classes (or modules in Ruby) inside your classes without inheritance. Mixins are also supported in D (mixin templates). PHP has traits.
5. Rust supports (and actively promotes) implementing traits using procedural macros, especially derive macros[6]. This is by far the most complex but also the most powerful approach. You can use it to create a simple solution for generic delegation[7], but you can go far beyond that. Using derive macros to automatically implement traits like Debug, Eq and Ord is something you can find in every codebase, and some of the most popular crates like serde, clap and thiserror rely heavily on derive.
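As a tiny illustration of the derive approach (the Point struct here is made up):

```rust
// The derive macros generate the Debug, Clone and PartialEq impls;
// no base class or inheritance is involved.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let a = Point { x: 1, y: 2 };
    let b = a.clone();   // Clone, generated by the macro
    assert_eq!(a, b);    // PartialEq, generated by the macro
    println!("{:?}", a); // Debug, generated by the macro: Point { x: 1, y: 2 }
}
```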
To me (as a Java programmer) inheritance is very useful for reusing code and avoiding copy-paste. There are many cases in which decorators or template methods are very useful, and in general I find it "natural" in the sense that the concepts of abstraction and specialization can be found in plenty of real-world examples (animals, plants, vehicles, etc.).
As usual there is no silver bullet, so it's just a tool and like any other tool you need to use it wisely, when it makes sense.
Yeah, there can be a ton of derivative and convenience methods that would either have to be duplicated in all implementations or, even worse, duplicated at call sites.
Call them interfaces with default implementations or super classes, they are the same thing and very useful.
> The reason is that inheritance is almost always a bad idea in practice.
It's just slightly too strong of a statement.
I'm working in a very large Spring codebase right now, with a lot of horrible inheritance abuse (seriously, every component extended a common hierarchy of classes that pulled in a ton of behavior). I suspect part of the reason is that the Spring context got out of control, and the easiest way to reliably "inject" behavior is by subclassing. Terrible.
On the other hand, inheritance is sometimes the most elegant solution to a problem. I've done this at multiple companies:
    Payment
      + PayPalPayment
      + StripePayment
Sometimes you have data (not just behavior!) that genuinely follows an IS-A relationship, and you want more than just interface polymorphism. Yes, you can model this with composition, but the result ends up more complex and uglier.
It doesn't have to be all one or the other. But I agree, it should be mostly composition.
There used to be times when language-level composition did not exist, so inheritance was practically all you had. There used to be ugly hacks to implement mix-ins, for example, in PHP (first versions of Symfony used them and did their best to make them not ugly, but they had to devote a whole chapter on how to do them right anyway). I suspect a lot of contention comes from those times — and from the fact that even when you can do better, many folks still have the muscle memory wired to "if inheritance is the only tool you have, everything looks like a subclass".
I like languages where I can have both, and where the language authors are not trying to preach at me.
That is a great example! Abstraction is most useful when it captures the way several things are more-specific versions of a more general thing. At that point it's not just about the functionality: it communicates to the reader. Anyone coming in can now easily answer the question, "what kinds of payments exist?"
> But there's a reason the crowd is moving against inheritance.
I doubt it; the majority of code is in enterprise projects, and they do Java and C# in the idiomatic way, with inheritance.
I'm working on an Android project right now, and inheritance is everywhere!
So, sure, if you ignore all mobile development, and ignore almost all enterprise software, and almost all internal line-of-business software, and restrict yourself to what various "influencers" say, then sure THAT crowd is moving away from inheritance.
Java and C# are already a huge step up from what came before, since they at least introduce the concept of an interface as a distinct thing from a parent class. The fact that you don't notice that is proof that progress does happen, if only slowly.
I definitely agree that the crusade against inheritance is just a fad and not based on good reasoning. Every time people say "inheritance is garbage that people only use because they learned it in school" it pains me because it's like, really? You can't imagine that it's because those people have thought about the options and concluded that inheritance is the best way to model the problem they are facing?
Contrary to what the hype of the 90s said, I don't think OOP is the ultimate programming technique which will obsolete all others. But I think that it's equally inaccurate to make wild claims about how OOP is useless garbage that only makes software worse. Yes, you can make an unholy mess of class structures, but you can do that with every programming language. The prejudice some people have against OOP is really unfounded.
I’m surprised this is considered a controversial take.
You can write spaghetti in any language or paradigm. People will go overboard on DRY while ignoring that inheritance is more or less just a mechanism for achieving DRY for methods and fields.
FP wizards can easily turn your codebase into a complex organism that is just as “impenetrable” as OOP. But as you say, fads are fads are fads, and OOP was the previous fad so it behooves anyone who wants to look “up to date” to be performative about how they know better.
Personally I think it’s obvious that passing around structs that contain data, plus functions that act on that data, is the same concept as passing around objects. I expect you can even base a trait off of another trait in Rust.
But don’t dare call it what it actually is, because this industry really is as petulant as you describe.
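(For what it's worth, that last expectation holds: a Rust trait can require another trait, a "supertrait". A minimal sketch, with made-up Animal/Pet names:)

```rust
// A supertrait: anything implementing Pet must also implement Animal.
trait Animal {
    fn name(&self) -> String;
}

trait Pet: Animal {
    // A default method may call supertrait methods.
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name())
    }
}

struct Dog;

impl Animal for Dog {
    fn name(&self) -> String {
        "Rex".to_string()
    }
}

impl Pet for Dog {} // uses the default greet()

fn main() {
    assert_eq!(Dog.greet(), "Hello, Rex!");
}
```

Note that unlike class inheritance, Pet pulls in no state from Animal - it's purely an interface bound plus optional default methods.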
As they say about OOP, everything is somewhere else.
The only part of inheritance I’ve ever found useful is allowing objects to conform to a certain interface so that they can fulfill a role needed by a generic function. I’ve always preferred the protocol approach or Rust’s traits for that over classical inheritance, though.
> Usually, the calls were via some abstract interface with a single implementor
What's described here is over-generic code - the opposite of KISS, which means keeping an eye on extensibility instead of generalizing ahead of time. This can happen in any paradigm.
We're all coloured by our experience. You can for sure make a mess with flat C-style code that uses structs and global functions. But whenever I've seen a mess in C, it's a sort of "lego on the floor" type of mess. Code is everywhere, but all the pieces are uniquely named and mostly self-contained.
Classes - and class hierarchies - really let you go to town. I've seen codebases that seem totally impossible to get your head around. The best is when you have 18 classes which all implicitly or explicitly depend on each other. In that case, just starting the program up requires an insane, fragile dance where lots of objects need to be initialized in just the perfect order, otherwise something hits a null pointer exception in its initialization code. You reorder two lines in a constructor somewhere and something on the other side of your codebase breaks, and you have no idea why.
For some reason I've never seen anyone make that kind of mess just using composition. Maybe I just haven't been around long enough.
"But there's a reason the crowd is moving against inheritance"
Yep: it requires skills that aren't taught in schools or exercised in big companies organized around microservices. We've gone back to a world where most developers are code monkeys, converting high-level design documents into low-level design documents into code.
That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time. But that doesn't get you a promotion right now, so why would engineers value it?
> That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time.
Whoa that’s quite the claim. Most large projects built heavily on OO principles I’ve seen or worked on have become an absolute unmaintainable mess over time, with spider webs of classes referencing classes. To say nothing of DI, factoryfactories and all the rest.
I believe you might have had some good experiences here. But I’m jealous, and my career doesn’t paint the same rosy picture from the OO projects I’ve seen.
I believe most heavily OO projects could be written in about 1/3 as many lines if the developers used an imperative / dataflow-oriented design instead. And I’m not just saying that - I’ve seen ports and rewrites which have borne out around that ratio. (And yes, the result is plenty maintainable.)
> This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
Yeah; I agree with this. I think this is both the best and worst aspect of Go: Go is a language designed to force everyone's code to look vaguely the same, from beginners to experts. It's a tool to force even mediocre teams to program in an inoffensive, bland way that will be readable by anyone.
Yeah, I have seen things like you describe. But I have also seen the same code, copy-pasted a dozen times throughout a codebase and modified over years. That is a much worse situation; the links between the abstractions still exist without the inheritance, but now they are untraceable. At least with inheritance there are links between the methods and classes for you to follow. Without it, you've got to crawl the entire codebase to find these things. OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
I would even go so far as to argue that a small team of devs can learn an OOP hierarchy and work with it indefinitely, but a similar small team will drown in maintenance overhead without OOP and inheritance. This is highly relevant as we head into an age of decreased headcounts. This style of abandoning OOP will age poorly as teams decrease in size.
Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens.
> OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
Inheritance isn't the only way to avoid duplicating code. Composition works great - and it results in much more maintainable code. Rust, for example, doesn't have class-based inheritance at all, and the principle of DRY is maintained in everything I've made in it, and everything I've read by others. It's composition all the way down, and it works great. Go is just the same.
If anything, I think if you've got a weak team it makes even more sense to stick to composition over inheritance. The reason is that composition is easier to read and reason about. You don't get "spooky action from a distance" when you use composition, since a struct is made up of exactly the list of fields you list. Nothing more, nothing less. There's no overridden methods and inherited fields to worry about.
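A tiny sketch of what "exactly the fields you list" means in practice (Car and Engine are hypothetical names):

```rust
// With composition, everything Car can do is visible at its definition:
// its fields are listed, and any delegation is an explicit call.
struct Engine {
    rpm: u32,
}

impl Engine {
    fn start(&mut self) {
        self.rpm = 800; // hypothetical idle speed
    }
}

struct Car {
    engine: Engine, // no hidden inherited state
}

impl Car {
    fn start(&mut self) {
        self.engine.start(); // explicit forwarding, no override lookup
    }
}

fn main() {
    let mut car = Car { engine: Engine { rpm: 0 } };
    car.start();
    assert_eq!(car.engine.rpm, 800);
}
```

Compare that with inheritance, where start() might be defined several ancestors up and overridden anywhere in between.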
I think you have the consequences of AI exactly backwards. AI provides virtual headcount and will vastly increase the ability of small teams to manage sprawling codebases. LLM context lengths are already on the order of millions of tokens. It takes a human days of work to come to grips with a codebase an LLM can grok in two seconds.
The cost of working with code is much lower with LLMs than with humans and it's falling by an order of magnitude every year.
> The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves).
Inheritance is most definitely used in many popular C++ libraries, e.g., protobuf::Message [1] (which is the base class of all user message classes and itself has a base class of MessageLite), QWidget [2] (which sits in a large class hierarchy), or tinyxml2::XMLNode (base class of the other node types). These are honestly the first three libraries I thought of that have a non-trivial collection of classes in them. They're all stateful base classes, by the way, not pure interfaces. And remember, I'm not trying to justify whether these are good or bad designs, just to make the observation that inheritance certainly is well used in practice.
(The fourth library I thought of with a reasonably complex collection of classes is Boost ASIO [4] which actually doesn't use inheritance. Instead it uses common interfaces to allow some compile-time polymorphism. Ironically, this is the only library in the list that I've been so unsatisfied with that I've written my own wrapper more than once for a little part of it: allowing auto-(re)connecting outbound and accepting incoming connections with the same interface. Guess what: I used inheritance!)
>> People created AbstractFactoryFactoryBuilders not because they wanted to,
I don't think this is accurate. People created factories like this because they were limited by the interface bounds of the languages they were coding in and had to swap out behaviour at run or compile time for testing or configuration purposes.
> Your software will still be a mess unless it is small and written three times by the same person who knows what they are doing.
100% this! And I've recently been wondering whether this is the right workflow for AI-assisted development: use vibe-coding to build the one that you plan to throw away [0], use that to validate your assumptions and implement proper end-to-end tests, then recreate it again once or more with AI asked to try different approaches, and then eventually throw these away too and more manually create "the third one".
[0] "In most projects, the first system built is barely usable....Hence plan to throw one away; you will, anyhow." Fred Brooks, The Mythical Man-Month
The reason people don't have original opinions is because it isn't worth it. The stakes are extremely low. How one chooses to write code is ultimately a matter of personal preference.
The lower the stakes, the more dogmatic people become about their choices, because they know on some level it's a matter of taste and nothing more. Counterintuitively, it becomes even more tied to one's ego than the choices that actually have major consequences.
I believe you just summed up 90% of popular wisdom about software engineering.
With enough patience you will see many fads pass twice, like a tide rising and falling. OOP, runtime typing, schema-less databases and TDD are the first to come to mind.
I feel "self-describing" data formats and everything agile are fading already.
Very few ideas stick, but some do: I do not expect GOTO to ever come back, but who knows where vibe coding will lead us :)
Objects are a pretty good abstraction for when you have data that represents, well, objects. In 3D graphics it's a very useful abstraction. Significantly less good when you're trying to model process, pipeline, or flow IMHO (I know there are some people who swear by them for anything they would bash together with UML first, and I just... Don't see it. I've used more than enough object-oriented flowchart-description languages to fundamentally disagree; charts are two-dimensional, text-represented code is one-dimensional, making the code "objects" doesn't fix that problem).
(Probably also worth noting that high performance 3D graphics torture the object abstraction past recognizability, because maintaining those runtime abstractions costs resources that could be better spent slamming pixels into a screen).
> The bottom line is, no one ever really used inheritance that much anyway
That's just false. Before Java abstract factory era there was already a culture of creating deep inheritance hierarchies in C++ code. Interfaces and design patterns (including factories) were adopted as a solution to that mess and as bad as they were - they were still an improvement.
I've written a bunch of code in languages without inheritance per se—OCaml, Haskell, Rust—and things have been more than fine. Hell, I barely use any sort of subtyping! I definitely miss structural subtyping in Haskell and Rust on occasion, but even in those situations the code has never reduced to a thrice-written mess.
I've also written some code that's gotten a lot of mileage out of inheritance, including multiple inheritance. Some of my Python abstractions would not have worked anywhere near as well as they did without it. But even then, I could build APIs at least as usable in languages without inheritance, as long as those languages had sufficient facilities for abstraction of their own. (Which OCaml, Haskell and Rust absolutely do!)
> Weird edge cases like HTTP over non-TCP protocols or rendering without screens start throwing spanners into a tree of assumptions that never needed to be made
yes, but that's true of other abstractions too. Whether you use inheritance or not, you usually don't know what abstractions you need until you need them: even if you were using composability rather than inheritance, chances are that you'd have encoded assumptions that HTTP goes over TCP until you need to handle the fact that actually you need higher-level abstractions there.
If you don't use inheritance, you switch to an interface (or a different interface) in your composition. If you did use inheritance, you stop doing so and start using composition. The latter is probably somewhat more work, but I don't think it's fundamentally very different.
I'm on the fence about inheritance myself; I often regret having used it, and I never regret having not used it. On the other hand, it's awfully expedient. I designed and implemented a programming language called Bicicleta whose only argument-passing mechanism is inheritance, and I'm not sure that was a bad idea.
The object-oriented part of OCaml, by the way, has inheritance that's entirely orthogonal to interfaces, which in OCaml are static types. Languages like Smalltalk and, for the most part, Python don't have interfaces at all.
Very interesting work! It is an attempt to extract the interfaces that were in the minds of the implementors of the Smalltalk-80 system's collection classes, but which couldn't be expressed in the language itself, because it has no interface construct. That's what I meant by "Languages like Smalltalk (...) don't have interfaces at all."
Don't have manifest types and don't have manifest interfaces.
Someone has already referenced "Adding Dynamic Interfaces to Smalltalk" [0] and looking back there doesn't seem to be any kind of demonstration that use of interfaces makes software faster to develop or less error prone or... [1]
Sure, but for the most part people don't use them, because you don't have to; Python method calls are always potentially polymorphic, unlike Golang method calls.
Rigid, "family tree"-style inheritance as in classical OOP is pretty much garbage. "A cow is a mammal is an animal" is largely useless for the day to day work we do except in extremely well-planned, large and elaborate ontologies -- something you typically only see in highly structured software like windowing systems. It just isn't useful for the majority of our work.
"Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules [1]. You can bolt behaviors and their types onto structures at will. This is how OO should be.
I put together a visual argument on another thread on HN a few weeks ago:
> "Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules.
Yes-and-no.
Interfaces still participate in inheritance hierarchies (`interface Bar extends Foo`), and that's in a way that prohibits removing/subtracting type members (so interfaces are not in any way a substitute for mixins). Composition (of interfaces) can be used instead of `extends`, but then you lose guarantees of reference-identity - oh, and only reference-types can implement interfaces which makes interfaces impractical for scalars and unusable in a zero-heap-alloc program.
Interface-types can only expose virtual members: no public fields - which seems silly to me because a vtable-like mechanism could be used to allow raw pointer access to fields via interfaces, but I digress: so many of these limitations (or unneeded functionality) are consequences of the JVM/CLR's design decisions which won't change in my lifetime.
Rust-style traits are an overall improvement, yes - but (as far as my limited Rust experience tells me) there's no succinct way to tell the compiler to delegate the implementation of a trait to some composed type: I found myself needing to write an unexpectedly large amount of forwarding methods by hand (so I hope that Rust is better than this and that I was just doing Rust the-completely-wrong-way).
How are interfaces with ability to provide default implementations for members (which both C# and Java allow today) not a substitute for mixins?
"Only reference types can implement interfaces" is simply not true in C#. Not only can structs implement them, but they can also be used through the interface without boxing (via generics).
Rust actually allows one to express "family tree" object inheritance quite cleanly via the generic typestate pattern. It isn't "garbage", it totally has its uses. It is however quite antithetical to modularity: the "inheritance hierarchy" can only really be understood as a unit, and "extensibility" for such a hierarchy is not really well defined. Hence why in practice it mostly gets used in cases where the improved static checking made possible by the "typestate" pattern can be helpful, which has remarkably little to do with "OOP" design as generally understood.
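For anyone unfamiliar with it, here's a minimal sketch of the typestate pattern (Door, Open and Closed are hypothetical names):

```rust
use std::marker::PhantomData;

// The current state lives in the type parameter, so illegal transitions
// are compile errors rather than runtime checks.
struct Open;
struct Closed;

struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Closed> {
    fn new() -> Self {
        Door { _state: PhantomData }
    }
    // Consumes the closed door, returns an open one.
    fn open(self) -> Door<Open> {
        Door { _state: PhantomData }
    }
}

impl Door<Open> {
    fn close(self) -> Door<Closed> {
        Door { _state: PhantomData }
    }
    fn is_open(&self) -> bool {
        true
    }
}

fn main() {
    let door = Door::new();   // Door<Closed>
    let door = door.open();   // Door<Open>
    assert!(door.is_open());
    let _door = door.close(); // Door<Closed> again
    // Door::new().close();   // would not compile: no close() on Door<Closed>
}
```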
> And inheritance naturally suggests grouping interfaces into a tree in the way that seems of little value because in practice a tree probably doesn't represent the fundamental truth of things.
"This doesn't represent the fundamental truth" does not imply "this has little value". Your navigation software likely doesn't account for cars passing each other on the road either -- or probably red lights for that matter -- and yet it's still pretty damn useful. The sweet spot is problem- and model-dependent.
I'm not sold on the evidence of much in the way of programming language features from the "object oriented" era.
They were pushed by cultish types with little evidence. There was this assertion that all these things were wonderful and would reduce effort and therefore they must be good and we all must use them. We got object oriented everything including object oriented CPUs, object oriented relational databases, object oriented "xtUML". If you weren't object oriented you were a pile of garbage in those days.
For all that, I don't know if there was ever any good evidence at all that any of it worked. It was like the entire industry fell for snake oil salesmen and is collectively too embarrassed about it to have much introspection or talk about it. Not that it was the last time the industry has fallen for snake oil...
That's not evidence though even if we take it as true. You can of course make layers of abstraction or encapsulation without "object oriented" languages.
Inheritance was oversold, but it can help remove a lot of boilerplate code. Early Windows notoriously required hundreds of lines of code for a hello-world program. Setting your own defaults and getting on with your day is great for dealing with a less refined API, etc.
Complex inheritance trees can make sense in niche applications for similar reasons.
I'd still generally prefer intrusive lists to be done via composition. I've seen plenty of intrusive lists where each item was a member of multiple lists at the same time - which is quite hard to do if you need to inherit from an intrusive list element superclass.
So... what even is the purpose of computer language abstractions?
To provide building blocks useful for the construction of programs.
There's a number of properties that are good for such building blocks... composability, flexibility, simplicity, comprehensibility, etc.
Naturally, these properties can conflict, so the goal would be to provide a minimal set of interoperable building blocks providing good coverage of the desirable properties, allowing the developer to choose the appropriate one for a given circumstance and to change when needed. E.g., they could choose a simple but less flexible block in one situation, or a more complicated or less performant block in another.
IMO, inheritance is a decent building block -- simple and easy to understand, though with somewhat limited applicability.
We can imagine improvements (particularly to implementation) but I think it got a bad rep mostly due to people not understanding its uses and limitations.
...I've got to say, though, if you aren't figuring out how to use the simple and easy tools, you're really not going to do better with more complicated and capable tools. People hate to admit it, but the best of us are still highly confused monkeys haphazardly banging away at keyboards, barely able to hold a few concepts in our heads at one time. Simple is good for us.
It is a good idea because it's the most fundamental idea.
You have two objects. A and B. How do you merge the two objects? A + B?
The most straightforward way is inheritance. The idea is fundamental.
The reason why it's not practical has more to do with human nature and the limitations of our capabilities in handling complexity than it has to do with the concept of inheritance itself.
Literally think about it. How else do you merge two structs if not using inheritance?
The idea that inheritance is not fundamental and is wrong in nature is in itself mistaken.
I found myself in a situation where I had to reinvent data structures from scratch in an assembly-like language: "properties" or "fields" are just offsets relative to a pointer (runtime) or an address (compile-time).
The need to extend a data structure to add more fields comes almost immediately. Think: something like the C "hack" of embedding the "base" structure as the first field of the "derived" structure:
    struct t_derived
    {
        struct t_base base;
        int extra_data;
    };
Then you can pass a struct t_derived where a struct t_base is expected, with some type casting and caveats. This is "legal" in C because the standard guarantees that the first member, base, has offset 0.
Of course our "extra_data" could be a structure, but although it would look like "A+B" it is actually a concatenation.
> How else do you merge two structs if not using inheritance?
By merging them. Structs are product types. If you merge them, you get a bigger product type. You don't need inheritance (ADTs) for that.
The more useful point of inheritance is having shared commonality. But modern languages make it convenient to express that without using ADTs/inheritance.
TypeScript is fully structurally typed. If you combine a Foo and a Bar it is something new, but keeps being both a Foo and a Bar as well.
Go is structurally typed to a relatively high degree as well. You can embed types (including structs) into structs and only care about the individual parts in your functions. And you have composable and implicit interfaces.
Clojure has protocols and generally only cares about the things you use or define to use in functions. It allows you to do hierarchical keyword ontologies if you want, but I see it rarely used.
These languages and many others favor two fundamental building blocks: composition and signatures. The latter being either about data fields or function signatures. The neat part is these aren't entangled: You can use and talk about them separately.
How fundamental is inheritance if it can be fully replaced by simpler building blocks?
>By merging them. Structs are product types. If you merge them, you get a bigger product type. You don't need inheritance (ADTs) for that.
Merging structs and inheritance are fundamentally the same thing.
>How fundamental is inheritance if it can be fully replaced by simpler building blocks?
It can't be replaced. Combining Foo and Bar in the way you're thinking involves additional primitives and concepts like nesting. If Foo and Bar share the same property, the most straightforward way of handling it is overriding one property with the other. Overriding IS inheritance.
We aren't dealing with product types in the purest form either. These product types have named properties and you need additional rules to handle conflicting names.
In fact once you have named properties the resulting algebra from multiplying structs is not consistent with the concept of multiplication whether you use inheritance or "object composition"
> Literally think about it. How else do you merge two structs if not using inheritance?
What? Using multiple inheritance? That's one of the worst ideas I've ever seen in all of computer science. You can't just glue two arbitrary classes together and expect their invariants to somehow hold true. Even if they do, what happens when both classes implement a method or field with the same name? Bugs. You get bugs.
I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
The way to merge two structs is via composition:
struct C {
a: A,
b: B,
}
If you want to expose methods from A or B, either wrap the methods or make the a or b fields public / protected and let callers call c.a.foo().
Don't take my word for it; here's Google's C++ style guide[1]:
> Composition is often more appropriate than inheritance.
> Multiple inheritance is especially problematic, because it often imposes a higher performance overhead (in fact, the performance drop from single inheritance to multiple inheritance can often be greater than the performance drop from ordinary to virtual dispatch), and because it risks leading to "diamond" inheritance patterns, which are prone to ambiguity, confusion, and outright bugs.
> Multiple inheritance is permitted, but multiple implementation inheritance is strongly discouraged.
You just threw this in out of nowhere. I didn't mention anything about "multiple" inheritance. Just inheritance, by which people usually mean single inheritance by default.
That being said, multiple inheritance is equivalent to single inheritance chained through 3 objects. The only problem is that because two objects sit at the same level, it's hard to know which property overrides which. With a single chain of inheritance, the child always overrides the parent. But with two parents, we don't know which parent overrides the other. That's it. Now assume there are 3 objects with distinct properties.
A -> B -> C
would be equivalent to
A -> C <- B.
They are isomorphic. Merging distinct objects with distinct properties is commutative which makes inheritance of distinct objects commutative.
C -> B -> A == A -> B -> C
>I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
Don't ever tell me that programming for 30 years is a reason for being correct. It's not. In fact you can be doing it for 30 years and be completely and utterly wrong. Then the 30 years of experience is more of a marker of your intelligence.
The point is YOU are NOT understanding WHAT I am saying. Read what I wrote. The problem with inheritance has to do with human capability: we can't handle the complexity that arises from using it extensively.
But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting.
Think about it. You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of maximizing code reuse? Inheritance. That's it.
Say you have two structs. The structs contain redundant properties. HOW do you define one struct in terms of the other? There's no simpler way than inheritance.
>> Composition is often more appropriate than inheritance.
You can use composition, but that's literally the same thing but weirder: instead of identical properties overriding other properties, you duplicate the properties via nesting.
So inheritance
A = {a, b}, C = {a1}, A -> C = {a1, b}
Composition:
A = {a, b}, C = {a1}, C(A) = {a1, {a, b}}
That's it. It's just two arbitrary rules for merging data.
If you have been programming for 30 years you tell me how to fit this requirement with the most minimal code:
given this:
A = {a, b, c, d}
I want to create this:
B = {a, b, c, d, e}
But I don't want to rewrite a, b, c, d multiple times. What's the best way to define B while reusing code? Inheritance.
Like I said, the problem with inheritance is not the concept itself. It is human nature, or our incapability of DEALING with the complexity that arises from it. The issue is that the coupling is too tight, so a change in one place creates an unexpected change in another place. Our brains cannot handle the complexity. The idea itself is fundamental, not stupid. It's the human brain that is too stupid to handle the emergent complexity.
Also I don't give two flying shits about google style guides after the fiasco with golang error handling. They could've done a better job.
Like any map, the inheritance pattern is bad, except when it works. It’s a strategic capability to be able to guess well which is which in given context.
My first foray into serious programming was by way of Django, which made a choice of representing content structure as classes in the codebase. It underwent the usual evolution of supporting inheritance, then mixins, etc. Today I’d probably have mixed feelings about conflating software architecture with subject domain so blatantly: of course it could never represent the fundamental truth. However, I also know that 1) fundamental truth is not losslessly representable anyway (the map cannot be the territory), 2) the only software that is perfectly isolated from imperfections of real world is software that is useless, and 3) Django was easy to understand, easy to build with, and effectively fit the purpose.
Any map (requirement, spec, abstraction, pattern) is both a blessing that allows software to be useful over longer time, and a curse that leads to its obsolescence. A good one is better at the former than the latter.
> And inheritance naturally suggests grouping interfaces into a tree in the way that seems of little value because in practice a tree probably doesn't represent the fundamental truth of things.
The fundamental truth of things? What are you even talking about? What fundamental truth of things? And what does that have anything to do with building software?
If you pretend/imagine it was intentional and insightful, you've created a nerd trap for amateur ontologists, some of whom decide to become professional ontologists and sell books on object oriented design.
Botanical trees appearing in nature don't make them "the fundamental truth of things". And in what way are trees the basis of human society? That's such a strange claim. Are you talking about family trees? Because those are actually directed acyclic graphs.
Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
fundamental truth of things is probably the wrong word choice.
It's more like there are many fundamental concepts and trees are one such concept. I don't think there is a singular fundamental truth of things.
>Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
I never made this claim though?
>Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
I mentioned this because parent poster is talking about fundamental truths. I'm saying trees are fundamental... But they may not be practical.
"Taxonomies are entirely and completely worthless."
Hard disagree. Knowing that AES and Twofish are block ciphers is useful when dealing with cryptography. Many categories of algorithms and objects are naturally taxonomic.
There is this wonderful presentation by Herb Sutter talking about how the C++ concept “class” covers over 20 other abstractions, and that Bjarne’s choice for C++ was the right choice since it offers so much power and flexibility and expressive power in a concise abstraction.
Other languages (just like the article) only saw the downsides of such a generic abstraction, so they added N times more abstractions (split inheritance, interfaces, traits, etc.) and rules for their interactions, significantly complicating the language with fundamentally no effective gains.
In summary, Herb will always do a better job than me explaining why the choices in the design of C++ classes, even with multiple inheritance, are one of the key factors of C++'s success. With cppfront, he extends this idea with metaclasses to clearly describe intent. I think he is on the right track.
I find that structural typing is the most useful thing and unfortunately few languages support it. I'd like a language where:
- If I have a class Foo and interface Bar, I should be easily able to pass a Foo where Bar is required, provided that Foo has all the methods that Bar has (sometimes I don't control Foo and can't add the "implements Bar" in it).
- I can declare "class Foo implements Bar", but that only means "give me a compilation error if Bar has a method that Foo doesn't implement" - it is NOT required in order to be able to pass a Foo object to a method that takes a Bar parameter
- Conversely, I should be able to also declare "interface Foo implementedBy Baz" and get a compilation error if either one of them is modified in a way that makes them incompatible (again - this does not mean that Baz is the _only_ implementor, just that it's one of them)
- Especially with immutable values - the same should apply to data. record A extends B, C only means "please verify that A has all the members that B & C have, and as such whenever a B record is required, I can pass an A instead". I should be able to do the reverse too (record B extendedBy A). Notably, this doesn't mean "silently import members from B, and create a multiple-inheritance-mess like C++ does".
(I do understand that there'd be some performance implications, but especially with a JIT I feel these could be solved; and we live in a world where I think a lot of code cares more about expressiveness/understandability than raw performance.)
TypeScript does support all of these - `C implements I` is not necessary but gives compile errors if not fulfilled.
You can use `o satisfies T` wherever you want to ensure that any object/instance o implements T structurally.
To verify a type implements/extends another type from any third-party context (as your third point), you could use `(null! as T1) satisfies T2;`, though usually you'd find a more idiomatic way depending on the context.
Of course it's all type-level - if you are getting untrusted data you'll need a library for verification. And the immutable story in TS (readonly modifier) is not amazing.
We use the word inheritance to refer to two concepts. There is implementation-inheritance. There is type-inheritance. These ideas are easily confused, which should be cause to have distinct words for them. Yet we don't. (Although Java does, effectively)
I think a lot of the arguments against inheritance come from C++'s peculiar implementation of it, which it clearly, ah, inherited from Simula. Slicing, ambiguous diamond inheritance, stuff like that are C++ problems, not inheritance problems. This isn't to say inheritance isn't problematic, but when you're making a properly substitutable sub-type of something, it's hard to beat.
For the specific problems I mentioned like slicing, literally anything else that actually abstracts the memory layout.
For OOP in general, I'd say anything with a metaobject protocol for starters, like Smalltalk, Lisp (via CLOS), Python, Perl (via Moose). All but the first support multiple inheritance, but also have well-defined method resolution orders. Multiple inheritance might still lead frequently to nasty spaghetti code even in those languages, but it will still be predictable.
CLOS and Dylan have multiple dispatch, which is just all kinds of awesome, but alas is destined to remain forever niche.
IMHO inheritance (especially the C++ flavored inheritance with its access specifiers and myriad rules) has always scared me. It makes a codebase confusing and hard to reason about. I feel the eschewing of inheritance by languages such as Go and Rust is a step in the right direction.
As an aside, I have noticed that the robotics frameworks (ROS and ROS2) heavily rely on inheritance and some co-dependent C++ features like virtual destructors (to call the derived class's destructor through a base class pointer). I was once invited to an interview at a robotics company due to my "C++ experience" and grilled on this pattern of C++ that I was completely unfamiliar with. I seriously considered removing C++ from my resume that day.
To me, inheritance makes sense if you view your codebase as actual "Objects".
The reality is that a codebase is not that simple. Many things you create are not representable as real-world "objects" - to me, this is where it gets confusing to follow, especially when the code gets bigger.
I remember those OOP books (I cannot comment on modern OOP books) where the first few chapters would use Shapes as an example, where a Circle, Square, Triangle, etc. would inherit from the Shape object. Sure, in simple examples like this, it makes sense.
I remember covering inheritance and how to tell whether it or composition is better... the "Object IS X" versus "Object HAS X" test - so you base your hierarchy around that mindset.
- "A Chair is Furniture" (Chair inherits Furniture)
- "A Chair has Legs" (Chair has array of Leg)
I will always remember my first job - creating shop floor diagrams where you get to select a Shelf or Rack and see the visual representation of goods, etc. My early codebase was OOP... a Product, Merchandise, Shelf, Bay, Pegboard, etc. Each object inherited something in one way or another. Keeping on top of it eventually became a pain. I think there were, overall, about 5 levels of inheritance.
I reviewed my codebase one day and decided to screw it -- I would experiment with other approaches. I ended up creating simple classes with no inheritance. Each class was isolated from the others, with the exception of a special Id which represented "something" like a Pin, or Shelf, etc. Now my code was flexible... "A Shelf has this and this".
In later years I realised what I did was following along the lines of what is commonly known as ECS, or Entity-Component-System. It seems popular in games (and I viewed that project in a game-like fashion, so it makes sense).
To be fair, deleting a derived object through a base class pointer is pretty basic C++. Slicing and virtual destructors are usually the first couple of things you learn about after virtual methods and copy constructors/assignment.
Quite a few sections of C++ can be classified as "pretty basic C++". None of the rules are complicated in isolation but that doesn't necessarily make it easy to reason about it.
Huh, I was always told that inheritance hurts performance, as it requires additional address lookups. That's why many game engines are moving away from it.
I guess it could simplify the GC but modern garbage collectors have come a long way.
As I understand it, back when Simula and LISP were invented it was generally the case that loads and stores took 1 cycle and there were no CPU caches. These pointer-chasing languages and techniques really weren't technically bad for the computers of the time - it's just that we have a larger relative penalty for randomly accessing our Random Access Memory these days, so locality is important (hence data-oriented design, ECS, etc).
I am kind of amused they _removed_ first-class functions though!
Function arguments weren't actually first-class to begin with. In Algol 60 (of which Simula started as a superset), you could pass functions as arguments to other functions, but that's it - a function wasn't a proper type, so you couldn't return it, shove it into a variable, have an array of functions, etc. Basically, it had just enough restrictions that you could never end up in a situation where you could call a function whose corresponding activation frame (i.e. locals) could be gone. But when Simula added classes and objects, you could suddenly capture arguments in a way that allows them to outlive the callee.
I was in the game industry when we originally transitioned from C to C++, and here's my recollection of the conversations at the time, more or less.
In C++, inheritance of data is efficient because the memory layout of base class members stays the same in different derived classes, so fields don't cost any more to access.
And construction is (relatively fast, compared to alternatives) because setting a single vtable pointer is faster than filling in a bunch of variable fields.
And non-virtual functions were fast because, again, static memory layouts and access and inlining.
Virtual functions were a bit slower, but ultimately that just raised the larger question of when and where a codebase was using function pointers more broadly - virtual functions were just one way of corralling that issue.
And the fact that there were idiomatic ways to use classes in C++ without dynamically allocating memory was crucial to selling game developers on the idea, too.
So at least from my time when this was happening, the general sense was that, of all the ways OO could be implemented, C++ style OO seemed to be by far the most performant, for the concerns of game developers in the late 90's / early 2000's.
I've been out of the industry for a while, so I haven't followed the subsequent conversations too closely. But I do think, even when I was there, the actual reality of OO class hierarchies was starting to rear its ugly head. Giant base classes are indeed drastically bad for caches, for example, because they do tend to produce giant, bloated data structures. And deep class hierarchies turn out to be highly sub-optimal, in a lot of cases, for information hiding and evolving code bases (especially for game code, which was one of my specialties). As a practical matter, as you evolve code, you don't get the benefits of information hiding that were advertised on the tin (hence the current boosting of composition over inheritance). I think you can find better, smarter discussions about those issues in this thread, so I won't cover them.
But that was a snapshot of those early experiences - the specific ways C++ implemented inheritance for performance reasons were definitely, originally, much of the draw to game programmers.
No, inheritance does not require additional address lookups. Single inheritance as discussed here doesn't even require additional address arithmetic; the address of the subclass instance is the same as the address of the superclass instance.
Yes, current GCs are very fast and do not suffer from the problems Simula's GC suffered from. Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B. Allocation may not be any faster, because in either case the compiler can bump the nursery pointer just once (with a copying collector). Deallocation is maybe slightly faster, because with a copying collector, deallocation cost is sort of proportional to how much space you allocate, and the total size of record B is smaller with record A embedded in it than the total size of record A plus record B with a pointer linking them. (That's one pointer bigger.) But tracing gets much faster when there are no pointers to trace.
You will also notice from this example that it's failing to embed the superclass (or whatever) that requires an additional record lookup. And probably a cache miss, too.
I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general, and more generally the Lisp model of memory as a directed graph of objects linked by pointers, because although inheritance reduces the number of cache misses in OO code, it doesn't reduce them enough.
> No, inheritance does not require additional address lookups. Single inheritance as discussed here doesn't even require additional address arithmetic; the address of the subclass instance is the same as the address of the superclass instance.
Yes it does! Inheritance itself is fine, but inheritance almost always means virtual functions - which can have a significant performance cost because of vtable lookups. Using virtual functions also prevents inlining - which can have a big performance cost in critical code.
> Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B.
Huh? No - if you put A and B in separate allocations, you get worse performance. Both because of pointer chasing (which matters a great deal for performance). And also because you're putting more pressure on the allocator / garbage collector. The best way to combine A and B is via simple composition:
struct C { a: A, b: B }
In this case, there's a single allocation. (At least in languages with value types - like C, C++, C#, Rust, Swift, Zig, etc). In C++, the bytes in memory are actually identical to the case where B inherits from A. But you don't get any class entanglement, or any of the bugs that come along with that.
> I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general
Games are moving away from OO because C++ style OO is a fundamentally bad way to structure software. Even if it wasn't, struct-of-arrays usually performs better than arrays-of-structs because of how caching works. And modern ECS (entity component systems) can take good advantage of SoA style memory layouts.
The performance gap between CPU cache and memory speed has been steadily growing over the last few decades. This means, relatively speaking, pointers are getting slower and big arrays are getting faster on modern computers.
I agree with what you say about ECS and the memory hierarchy. But not much else.
> inheritance almost always means virtual functions
Inheritance and "virtual functions" (dynamic method dispatch) are almost, but not completely, unrelated. You can easily have either one without the other. Golang and Lua have dynamic method dispatch without inheritance; C++ bends over backwards so that you can use all the inheritance you want without incurring any of the costs of dynamic method dispatch, as long as you don't declare anything virtual. This is actually a practical thing to do with modern C++ with templates and type inference.
> No - if you put A and B in separate allocations, you get worse performance
Yes, that's what I was saying.
> you're putting more pressure on the allocator / garbage collector
Yes, I explained how that happens in greater detail in the comment you were replying to.
With your struct C, it's somewhat difficult to solve the problem catern was saying Simula invented inheritance to solve; if A is "list node" and B is "truck", when you navigate to a list node p of type A*, to get the truck, you have to do something like &((struct C *)p)->b, relying on the fact that the struct's first field address is the same as the struct's address and on the fact that the A is the first field. While this is certainly a workable thing to do, I don't think we can recommend it without reservation on the basis that "you don't get any class entanglement, or any of the bugs"! It's very error-prone.
> Games are moving away from OO because C++ style OO
There are a lot of things to criticize about C++, but I think one of its worst effects is that it has tricked people into thinking that C++ is OO. "C++ style OO" is a contradiction in terms. I mean, it's possible to do OO in C++, but the language fights you viciously every step of the way; the moment you make a concession to C++ style, OO collapses.
Simple inheritance makes the class hierarchy complicated through issues like the diamond inheritance problem, which C++ resolves in typical C++ fashion: attempt to satisfy everybody, actually satisfy nobody.
Simple inheritance doesn't have the diamond problem, because that requires multiple inheritance, which isn't simple. Smalltalk doesn't have multiple inheritance; I don't think SIMULA did either.
The best implementation inheritance hierarchy is none :)
If you must, you can use the implementation inheritance for mix-ins / cross-cutting concerns that are the same for all parties involved, e.g. access control. But even that may be better done with composition, especially when you have an injection framework that wires up certain constructor parameters for you.
Where inheritance (extension) properly belongs is the definition of interfaces.
Dynamic invocation, not strict inheritance is the issue here. Simply getting functions and fields from a superclass costs nothing if at each callsite the compiler knows enough to say where it is from.
But this may only happen when no virtual / overridden methods are involved, no VMT to look up in, no polymorphism at play. This is tantamount to composition, which should be preferred over inheritance anyway.
In this regard, Go and Rust do classes / objects right, Java provides the classical pitfalls, and C++ is the territory where unspeakable horrors can be freely implemented, as usual.
Overriding is fine. The issue comes with polymorphism and would even without inheritance per se, as can be seen in Go where interfaces provide polymorphism without inheritance.
Parent is correct - if the compiler has the information to devirtualize it becomes direct dispatch regardless of the mechanisms involved at the source level. This is also typically true for JITs.
Nah. Classic C++/Java style inheritance with vtable dispatch is very fast. Generally no slower than a C-style function call, and actually sometimes faster depending on how the C code is linked, characteristics of the CPU, etc.
This assumes that the vtables stay in at least L2 cache, which may be a correct assumption for the few hot-path classes. In this regard, I remember how Facebook's Android app once failed to build when the codebase exceeded the DEX limit of 64k method references.
No, Java does class hierarchy analysis (CHA) and has multiple ways to avoid vtable calls.
Monomorphic call sites (no loaded class overrides the method) become static calls and can be inlined directly. Bimorphic call sites use a class check (a simple equality test) instead of a vtable and can also be inlined. Sites with 3-5 receiver classes use inline caches (the compiler records which classes have been seen), which work similarly, and some can be inlined, usually plus a guard check.
Only highly polymorphic calls use the vtable, and in practice that is a very rare occasion, even with Java totally embracing inheritance (and polymorphic interfaces).
Note: CHA is dynamic and happens at runtime, depending on which classes have been loaded. Loading new classes causes CHA to be performed again, and any affected call sites are deoptimized (and re-JITted).
If the dispatches do use a vtable, they won't be inlined and won't be faster. The real deal is inlining where necessary, which inheritance doesn't really prevent.
Interestingly enough, my first non-class-related experience with "intrusive lists" was in C, and we implemented it via macros; you'd add a LINKED_LIST macro in the body of a struct definition, and it would unspool into the pointer declarations. Then the list-manipulation functions were also macros so they would unspool at compile time into C code that was type-aware enough to know where the pointers lived in that individual struct.
Of course, this meant incurring the cost of a new definition of function families for each intrusive-list structure, but this was in the context of bashing together a demo kernel for a class, so we assumed modern PCs that have more memory than sense. The bigger problem was that C macros are little bastards to debug and maintain (especially a macro'd function... so much escaping).
C++, of course, ameliorates almost all those problems. And replaces them with other problems. ;)
My first assumption was something like ACL lists (though this could be for something like NTFS, or directory permissions) or even Firewall rules but I guess we all bring our background to assumptions
All of what we take for granted in modern computing architecture was invented as a performance hack by von Neumann in 1945 to take advantage of then-novel vacuum tube tech.
Before my time but I reject the idea that inheritance was "invented" in the context of a particular high level programming language. It was used widely in assembly/machine code programming. It's essentially a manifestation of two things: categorization of things (at least thousands of years old); and pointers to data structures (at least as old as the Manchester Mk1).
What do folks think of the OCaml/SML style approach with its signatures+modules+functors? It's a bit obscure, and some people find it inconvenient. Inheritance in their approach can be approximated using functors.
I always thought this was common knowledge. I guess it isn’t.
The only reason inheritance continues to be around is social convention. It’s how programmers are taught to program in school and there is an entire generation of people who cannot imagine programming without it.
Aside from common social practice inheritance is now largely a net negative that has long outlived its usefulness. Yes, I understand people will always argue that without their favorite abstraction everything will be a mess, but we shouldn’t let the most ignorant among us baselessly dictate our success criteria only to satisfy their own inability to exercise a tiny level of organizational capacity.
It's not just school. There are a lot of pieces of literature, tutorials/guides, discussions and papers I came across over the years that tell you something very useful _plus_ wrap everything in OO (or sometimes FP) noise and treat that part as just as important. Often there are vague rationales sprinkled in without much backing.
So you get interfaces that are much bigger than they need to be, visitor pattern this, manager that. As someone who isn't used to OO it is sometimes difficult or cumbersome to compile these kinds of examples and explanations into its essence.
I also noticed that AI assistants often want to blow up every interface with a whole bunch of useless stuff like getter/setter style functions and the like. That's obviously not the fault of these assistants, but I think it's something to consider.
Or because there are some situations where inheritance is useful. There was a reason Simula, Smalltalk, C++, Common Lisp (CLOS), Java, OCaml, Ruby, etc. implemented OOP. That's a lot of different languages. The program designers found it to be a useful abstraction and so did the language users.
There's no reason to be dogmatic about programming abstractions. Just because OOP became dogma for a while and got abused doesn't mean we have to be dogmatic entirely in the opposite direction. Abstractions have their use for those programming languages that choose to implement them.
I absolutely disagree. Some things in programming exist to bring products to market, but many things in programming only exist to bring programmers to market. That is a terrible and striking difference that results ultimately from an absence of ethics. Actions/decisions that exist only to discard ethical considerations serve only two objectives: 1) normalization of lower competence, 2) narcissism. It does not matter which of those two objectives are served, because the conclusions are the same either way.
Interfaces are indeed much nicer, but you have to make sure that your programming language doesn't introduce additional overhead.
Don't be the guy who makes Abstract Factory Factories the default way to call methods. Be aware that there are a lot of people out there who would love to ask a web server for instructions each time they want to call a method. Always remember that the IT crowd isn't sane.
I think many developers, especially those from the 1999-2020 era, have gone through many pitfalls in programming. More specifically: OOP.
As someone who was blessed/lucky enough to learn C and Pascal, with some VB6, I understood how to write clean code with simple structs and functions. By the time I was old enough to get a job, I realised most (if not all) job adverts required OOP, Design Patterns, etc. I remember getting my first Java book: about 1,000 pages, half of which were about OOP (not Java directly).
I remember my first job: keeping my mouth shut and respecting the older, more experienced developers. I would write code the way I believed was correct -- proper OOP. Doing what the books told me. Doing what was "cool" and "popular" in modern programming. Hiding the data you should not see, and wrapping what you should in methods... all that.
Nobody came to me and offered guidance, but I learned that some of my older codebases with inheritance and overrides, while "proper" code, would end up a jumbled mess when they required new features. One class that was correctly set up one day would need to be moved about, affecting the class hierarchy of others. It brought me back to my earlier programming days with C: keeping things in simple structs and functions is better.
I do not hate OOP. After all, in my workplace I am using C# or Python, and I make use of classes and, at times, some inheritance here and there. The difference is not going all religious in OOP land. I use things sparingly.
At work, I use what the company has already laid out -- typically languages that are OOP, with a GC, etc. I have no problem with that. At home, or on personal projects, I lean more towards C or Odin these days. I use Scheme from time to time. I would jump at the opportunity to use Odin in the workplace, but I am surrounded by developers who don't share my mindset and stick to what they are familiar with.
Overall, his conclusion matches my own.
"Personally, for code reuse and extensibility, I prefer composition and modules."
I learned about OOP from a Turbo Pascal v5.5 book circa 1993. Drawing triangles, squares, circles, all the good stuff. The Turbo Vision library was a powerful demonstration of OOP, one that made MSFT's MFC look like a mess in comparison.
I'm not sold the evidence is there to show inheritance is a good idea - it basically says that constructors, data storage and interfaces need to be intertwined. That isn't a very powerful abstraction, because they don't need to be and there isn't an obvious advantage from doing so over picking up the concepts separately as required. And inheritance naturally suggests grouping interfaces into a tree in the way that seems of little value because in practice a tree probably doesn't represent the fundamental truth of things. Weird edge cases like HTTP over non-TCP protocols or rendering without screens start throwing spanners into a tree of assumptions that never needed to be made and pull the truth-in-code away from the truth-in-world.
All that makes a lot of sense if it was introduced as a performance hack rather than a thoughtfully designed concept.
Yeah, yeah... everyone likes to go on and on about how inheritance is the root of all evil and how, if you just don't use it, everything will be fine. Sorry, it won't be fine. Your software will still be a mess unless it is small and written three times by the same person who knows what they are doing.
The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves). People created AbstractFactoryFactoryBuilders not because they wanted to, but because "books" said to do stuff like that, and people were just signaling to the tribe.
So now we are all signaling to the new tribe that "inheritance is bad", even though we proudly created multiple AFFs in the past. Not very original, in my opinion, since Go and Rust don't have inheritance. The bottom line is, most people don't have any original opinions at all and just go with whatever seems to be popular.
> The bottom line is, no one ever really used inheritance that much anyway
If you think that, you have no idea how much horrible code is out there. Especially in enterprise land, where deadlines are set by people who get paid by the hour. I once worked on a Java project which had a method call a method, which called a method, which called a method, and so on. Usually the calls were via some abstract interface with a single implementor, making it hard to figure out what was even being executed. But if you kept at it, there were 19 layers before the chain of methods did anything other than call the next one. There was a separate parallel path of methods, also 19 layers deep, for cleaning up. But if you followed it all the way down, it turned out the final method was empty. 19 methods plus adjacent interface methods, all for a no-op.
> The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
Most people go with the crowd. But there's a reason the crowd is moving against inheritance. The reason is that inheritance is almost always a bad idea in practice. And more and more smart people talking about it are slowly moving sentiment.
Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better. Thank goodness - I've been shouting this stuff from the rooftops for 15+ years at this point.
I don't think inheritance is always bad - sometimes it's a useful tool. But it was definitely overused, and composition and interfaces work much better for most problems.
Inheritance really shines when you want to encapsulate behaviour behind a common interface and also provide a standard implementation. E.g.: I once wrote an RN app which talked to ~10 vacuum robots. All of these robots behaved mostly the same, but each was different in a unique way. E.g. 9 robots returned to their station when the command "STOP" was sent; one would just stop in place. Or some robots would rotate 90 degrees when a "LEFT" command was sent, others only 30 degrees. We wrote a base class which exposed all the needed commands, and each robot had an inherited class which overrode the parts that needed adjustment (e.g. sending LEFT three times so it's also 90 degrees, or sending "MOVE TO STATION" instead of "STOP").
> I don't think Inheritance is always bad - sometimes it's a useful tool.
I can only think of one or two instances where I've really been convinced that inheritance is the right tool. The only one that springs to mind is a View hierarchy in UI libraries. But even then, I notice React (& friends) have all moved away from this approach. Modern web development usually makes components be functions. (And yes, javascript supports many kinds of inheritance. Early versions of react even used them for components. But it proved to be a worse approach.)
I've been writing a lot of rust lately. Rust doesn't support inheritance, but it wouldn't be needed in your example. In rust, you'd implement that by having a trait with functions (+default behaviour). Then have each robot type implement the trait. Eg:
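A minimal sketch of what that could look like (the robot types, command strings, and method names here are invented for illustration, not taken from the actual app):

```rust
// Shared behaviour lives in the trait's default methods.
trait Robot {
    // Each robot only has to define how raw commands are sent.
    fn send(&self, command: &str) -> String;

    // Default: sending STOP returns most robots to their station.
    fn stop(&self) -> String {
        self.send("STOP")
    }

    // Default: one LEFT command rotates 90 degrees.
    fn left(&self) -> String {
        self.send("LEFT")
    }
}

// A robot with standard behaviour implements only send().
struct StandardBot;
impl Robot for StandardBot {
    fn send(&self, command: &str) -> String {
        format!("standard -> {command}")
    }
}

// A robot whose LEFT only turns 30 degrees overrides just that method.
struct NarrowTurnBot;
impl Robot for NarrowTurnBot {
    fn send(&self, command: &str) -> String {
        format!("narrow -> {command}")
    }
    fn left(&self) -> String {
        // Issue LEFT three times so the net turn is still 90 degrees.
        let _ = self.send("LEFT");
        let _ = self.send("LEFT");
        self.send("LEFT")
    }
}

fn main() {
    // Callers treat every robot uniformly through the trait object.
    let bots: Vec<Box<dyn Robot>> = vec![Box::new(StandardBot), Box::new(NarrowTurnBot)];
    for bot in &bots {
        println!("{}", bot.stop());
        println!("{}", bot.left());
    }
}
```

You get the same "common interface + standard implementation" shape as the base-class version, but without a class hierarchy: each robot opts into exactly the overrides it needs.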
Inheritance is not the only way to share behavior across different implementations — it's just the only way available in the traditional 1990s crop of static OOP languages like C++, Java and C#.
There are many other ways to share an implementation of a common feature:
1. Another comment already mentioned default method implementations in an interface (or a trait, since the example was in Rust). This technique is even available in Java (since Java 8), so it's as mainstream as it gets.
The main disadvantage is that you can have just one default implementation for the stop() method. With inheritance you could use hierarchies to create multiple shared implementations and choose which one your object should adopt by inheriting from it. You also cannot associate any member fields with the implementation. On the bright side, this technique still avoids all the issues with hierarchies and single and multiple inheritance.
2. Another technique is implementation delegation. This is basically just like using composition and manually forwarding all methods to the embedded implementer object, but the language has syntax sugar that does that for you. Kotlin is probably the most well-known language that supports this feature[1]. Object Pascal (at least in Delphi and Free Pascal) supports this feature as well[2].
This method is slightly more verbose than inheritance (you need to define a member and initialize it). But unlike inheritance, it doesn't require forwarding the class's constructors, so in many cases you might even end up with less boilerplate than with inheritance (e.g. if you have multiple overloaded constructors you'd otherwise need to forward).
The only real disadvantage of this method is that you need to be careful with hierarchies. For instance, if you have a Storage interface (with load() and store() methods), you can create an EncryptedStorage interface that wraps another Storage implementation and delegates to it, but not before encrypting everything it sends to the storage (and decrypting the content on load() calls). You can also create a LimitedStorage wrapper that enforces size quotas, and then combine both LimitedStorage and EncryptedStorage. Unlike traditional class hierarchies (where you'd have to implement LimitedStorage, EncryptedStorage and LimitedEncryptedStorage), you get a lot more flexibility: you don't have to reimplement every combination of storage, and you can combine storages dynamically and freely.

But let's assume you want to create ParanoidStorage, which stores two copies of every object, just to be safe. The easiest way to do that is to make ParanoidStorage.store() call wrapped.store() twice. The thing you have to keep in mind is that this doesn't work like inheritance: if you wrap your objects in the order EncryptedStorage(ParanoidStorage(LimitedStorage(mainStorage))), ParanoidStorage will call LimitedStorage.store(). This is unlike the inheritance chain EncryptedStorage <- ParanoidStorage <- LimitedStorage <- BaseStorage, where ParanoidStorage.store() will call EncryptedStorage.store(). In our case this is a good thing (we can avoid a stack overflow), but it's important to keep the difference in mind.
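A sketch of that ParanoidStorage delegation in Rust (the trait and type names follow the hypothetical example above; this is an illustration, not a real library):

```rust
use std::collections::HashMap;

trait Storage {
    fn store(&mut self, key: &str, value: &str);
    fn load(&self, key: &str) -> Option<String>;
}

// A plain in-memory implementation at the bottom of the stack.
struct MemStorage {
    data: HashMap<String, String>,
}

impl Storage for MemStorage {
    fn store(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
    }
    fn load(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
}

// ParanoidStorage stores two copies of everything by delegating twice.
// Note: calls go to the *wrapped* storage, never back up the chain,
// which is exactly the difference from inheritance described above.
struct ParanoidStorage<S: Storage> {
    wrapped: S,
}

impl<S: Storage> Storage for ParanoidStorage<S> {
    fn store(&mut self, key: &str, value: &str) {
        self.wrapped.store(key, value);
        self.wrapped.store(&format!("{key}.backup"), value);
    }
    fn load(&self, key: &str) -> Option<String> {
        self.wrapped.load(key)
    }
}

fn main() {
    let mem = MemStorage { data: HashMap::new() };
    let mut storage = ParanoidStorage { wrapped: mem };
    storage.store("doc", "hello");
    assert_eq!(storage.load("doc").as_deref(), Some("hello"));
    assert_eq!(storage.load("doc.backup").as_deref(), Some("hello"));
}
```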
3. Dynamic languages almost always have at least one mechanism that you can use to automatically implement delegation. For instance, Python developers can use metaclasses or __getattr__[3] while Ruby developers can use method_missing or Forwardable[4].
4. Some languages (most famously Ruby[5]) have the concept of mixins, which let you include code from other classes (or modules in Ruby) inside your classes without inheritance. Mixins are also supported in D (mixin templates). PHP has traits.
5. Rust supports (and actively promotes) implementing traits using procedural macros, especially derive macros[6]. This is by far the most complex but also the most powerful approach. You can use it to create a simple solution for generic delegation[7], but you can go far beyond that. Using derive macros to automatically implement traits like Debug, Eq, Ord is something you can find in every codebase, and some of the most popular crates like serde, clap and thiserror rely heavily on derive.
[1] https://kotlinlang.org/docs/delegation.html
[2] https://www.freepascal.org/docs-html/ref/refse48.html
[3] https://erikscode.space/index.php/2020/08/01/delegate-and-de...
[4] https://blog.appsignal.com/2023/07/19/how-to-delegate-method...
[5] https://ruby-doc.com/docs/ProgrammingRuby/html/tut_modules.h...
[6] https://doc.rust-lang.org/reference/procedural-macros.html#d...
[7] https://crates.io/crates/ambassador
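To make point 5 concrete, here is what deriving the standard traits looks like (Version is an invented example type):

```rust
// The derive attribute asks the compiler to generate whole trait
// implementations from the struct definition itself.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Version {
    major: u32,
    minor: u32,
}

fn main() {
    let a = Version { major: 1, minor: 2 };
    let b = a.clone(); // Clone, generated
    assert_eq!(a, b); // PartialEq, generated
    let newer = Version { major: 2, minor: 0 };
    assert!(a < newer); // Ord, generated (compares major, then minor)
    println!("{:?}", a); // Debug, generated
}
```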
To me (as a Java programmer) inheritance is very useful for reusing code and avoiding copy-paste. There are many cases in which decorators or template methods are very useful, and in general I find it "natural" in the sense that the concepts of abstraction and specialization can be found in plenty of real-world examples (animals, plants, vehicles, etc.).
As usual there is no silver bullet, so it's just a tool and like any other tool you need to use it wisely, when it makes sense.
Yeah there can be a ton of derivative and convenience methods that would either have to be duplicated in all implementations or even worse duplicated at call sites.
Call them interfaces with default implementations or super classes, they are the same thing and very useful.
> The reason is that inheritance is almost always a bad idea in practice.
It's just slightly too strong of a statement.
I'm working in a very large Spring codebase right now, with a lot of horrible inheritance abuse (seriously, every component extended a common hierarchy of classes that pulled in a ton of behavior). I suspect part of the reason is the Spring context got out of control, and the easiest way to reliably "inject" behavior is by subclassing. Terrible.
On the other hand, inheritance is sometimes the most elegant solution to a problem. I've done this at multiple companies:
Sometimes you have data (not just behavior!) that genuinely follows an IS-A relationship, and you want more than just interface polymorphism. Yes, you can model this with composition, but the end result ends up being more complex and uglier. It doesn't have to be all one or the other. But I agree, it should be mostly composition.
There used to be times when language-level composition did not exist, so inheritance was practically all you had. There used to be ugly hacks to implement mix-ins, for example in PHP (the first versions of Symfony used them and did their best to make them not ugly, but they had to devote a whole chapter to doing them right anyway). I suspect a lot of the contention comes from those times — and from the fact that even when you can do better, many folks still have the muscle memory wired to "if inheritance is the only tool you have, everything looks like a subclass".
I like languages where I can have both, and where the language authors are not trying to preach at me.
That is a great example! Abstraction is most useful when it captures the way several things are more-specific versions of a more general thing. At that point it's not just about the functionality: it communicates to the reader. Anyone coming in can now easily answer the question, "what kinds of payments exist?"
> But there's a reason the crowd is moving against inheritance.
I doubt it; the majority of code is in enterprise projects, and they do Java and C# in the idiomatic way, with inheritance.
I'm working on an Android project right now, and inheritance is everywhere!
So, sure, if you ignore all mobile development, and ignore almost all enterprise software, and almost all internal line-of-business software, and restrict yourself to what various "influencers" say, then sure THAT crowd is moving away from inheritance.
Java and C# are already a huge step up from what came before, since they at least introduce the concept of an interface as a distinct thing from a parent class. The fact that you don't notice that is proof that progress does happen, if only slowly.
"But there's a reason the crowd is moving against inheritance"
Yes, in our fad-chasing industry the pendulum has swung in the other direction. Let's wait a few years.
There is nothing wrong with OOP, inheritance, FP, procedural, declarative or whatever. What is bad is religious dogma overtaking engineering work.
I definitely agree that the crusade against inheritance is just a fad and not based on good reasoning. Every time people say "inheritance is garbage that people only use because they learned it in school" it pains me because it's like, really? You can't imagine that it's because those people have thought about the options and concluded that inheritance is the best way to model the problem they are facing?
Contrary to what the hype of the 90s said, I don't think OOP is the ultimate programming technique which will obsolete all others. But I think that it's equally inaccurate to make wild claims about how OOP is useless garbage that only makes software worse. Yes, you can make an unholy mess of class structures, but you can do that with every programming language. The prejudice some people have against OOP is really unfounded.
I’m surprised this is considered a controversial take.
You can write spaghetti in any language or paradigm. People will go overboard on DRY while ignoring that inheritance is more or less just a mechanism for achieving DRY for methods and fields.
FP wizards can easily turn your codebase into a complex organism that is just as “impenetrable” as OOP. But as you say, fads are fads are fads, and OOP was the previous fad so it behooves anyone who wants to look “up to date” to be performative about how they know better.
Personally I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects. I expect you can even base a trait off of another trait in Rust.
But don’t dare call it what it actually is, because this industry really is as petulant as you describe.
> Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better.
Of course, in the functional programming community we know that it is pointfree abstraction that makes your software better.
https://wiki.haskell.org/Pointfree
(Please pardon the pun.)
As they say about OOP, everything is somewhere else.
The only part of inheritance I’ve ever found useful is allowing objects to conform to a certain interface so that they can fulfill a role needed by a generic function. I’ve always preferred the protocol approach or Rust’s traits for that over classicist inheritance though.
And Rust's traits can sort-of inherit from each other.
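Concretely, that "sort-of inheritance" is the supertrait bound: a trait can require that its implementors also implement another trait, and its default methods may rely on the supertrait's API. A small sketch:

```rust
use std::fmt::Debug;

// `Printable: Debug` means every Printable type must also be Debug.
trait Printable: Debug {
    // The default method can rely on the supertrait's API.
    fn label(&self) -> String {
        format!("{:?}", self)
    }
}

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

// No methods needed: label() comes from the trait's default.
impl Printable for Point {}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(p.label(), "Point { x: 1, y: 2 }");
}
```

Unlike class inheritance, this only inherits the *interface requirement*, not any stored fields, which is why it sidesteps most of the hierarchy problems discussed in this thread.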
Ah, you have used Spring/Spring Boot, I see. That thing. It has humbled me. I didn't know you could do that much abstraction.
> Usually, the calls were via some abstract interface with a single implementor
What's described here is over-generic code, instead of KISS and just keeping an eye on extensibility instead of generalizing ahead of time. This can happen in any paradigm.
We're all flavoured by our experience. You can for sure make a mess with flat C-style code that uses structs and global functions. But whenever I've seen a mess in C, it's a sort of "Lego on the floor" type of mess. Code is everywhere, but all the pieces are uniquely named and mostly self-contained.
Classes - and class hierarchies - really let you go to town. I've seen codebases that seem totally impossible to get your head around. The best is when you have 18 classes which all implicitly or explicitly depend on each other. In that case, just starting the program up requires an insane, fragile dance where lots of objects need to be initialized in just the perfect order, otherwise something hits a null pointer exception in its initialization code. You reorder two lines in a constructor somewhere and something on the other side of your codebase breaks, and you have no idea why.
For some reason I've never seen anyone make that kind of mess just using composition. Maybe I just haven't been around long enough.
"But there's a reason the crowd is moving against inheritance"
Yep: it requires skills that aren't taught in schools or exercised in big companies organized around microservices. We've gone back to a world where most developers are code monkeys, converting high-level design documents into low-level design documents into code.
That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time. But that doesn't get you a promotion right now, so why would engineers value it?
> That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time.
Whoa that’s quite the claim. Most large projects built heavily on OO principles I’ve seen or worked on have become an absolute unmaintainable mess over time, with spider webs of classes referencing classes. To say nothing of DI, factoryfactories and all the rest.
I believe you might have had some good experiences here. But I’m jealous, and my career doesn’t paint the same rosy picture from the OO projects I’ve seen.
I believe most heavily OO projects could be written in about 1/3 as many lines if the developers used an imperative / dataflow-oriented design instead. And I'm not just saying that - I've seen ports and rewrites which have borne out around that ratio. (And yes, the result is plenty maintainable.)
I think that's part of the charm of Go, as a language/community.
I've worked with countless people who came from Java, who try to create the same abstractions and factories and layers.
When I chide them, it's like the shackles come off, and they have fun again with the basics. It leads to much more readable, simple code.
This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
> This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
Yeah; I agree with this. I think this is both the best and worst aspect of Go: Go is a language designed to force everyone's code to look vaguely the same, from beginners to experts. It's a tool to force even mediocre teams to program in an inoffensive, bland way that will be readable by anyone.
Yeah, I have seen things like you describe. But I have also seen the same code, copy-pasted a dozen times throughout a codebase and modified over years. That is a much worse situation; the links between the abstractions still exist without the inheritance, but now they are untraceable. At least with inheritance there are links between the methods and classes for you to follow. Without it, you've got to crawl the entire codebase to find these things. OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
I would even go so far as to argue that a small team of devs can learn an OOP hierarchy and work with it indefinitely, but a similar small team will drown in maintenance overhead without OOP and inheritance. This is highly relevant as we head into an age of decreased headcounts. This style of abandoning OOP will age poorly as teams decrease in size.
Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens.
> OOP is easily the lesser of the two evils; without it, you're doomed to violate DRY in ways that will make your project unmaintainable.
Inheritance isn't the only way to avoid duplicating code. Composition works great - and it results in much more maintainable code. Rust, for example, doesn't have class-based inheritance at all. And the principle of DRY is maintained in everything I've made in it, and everything I've read by others. It's composition all the way down, and it works great. Go is just the same.
If anything, I think if you've got a weak team it makes even more sense to stick to composition over inheritance. The reason is that composition is easier to read and reason about. You don't get "spooky action from a distance" when you use composition, since a struct is made up of exactly the list of fields you list. Nothing more, nothing less. There's no overridden methods and inherited fields to worry about.
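A small sketch of that explicitness (the Engine/Car names are illustrative, not from any real codebase):

```rust
// With composition, a type's state is exactly its listed fields,
// and reused behaviour is reached through an explicit member.
struct Engine {
    rpm: u32,
}

impl Engine {
    fn start(&mut self) {
        self.rpm = 800; // idle speed
    }
}

struct Car {
    engine: Engine, // the reused behaviour lives here, visibly
    doors: u8,
}

impl Car {
    fn start(&mut self) {
        // Explicit forwarding: no hidden overrides, no inherited state.
        self.engine.start();
    }
}

fn main() {
    let mut car = Car { engine: Engine { rpm: 0 }, doors: 4 };
    car.start();
    assert_eq!(car.engine.rpm, 800);
    assert_eq!(car.doors, 4);
}
```

Everything Car can do is either defined on Car or reached through a named field, so there is nowhere for "spooky action" to hide.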
I think you have the consequences of AI exactly backwards. AI provides virtual headcount and will vastly increase the ability of small teams to manage sprawling codebases. LLM context lengths are already on the order of millions of tokens. It takes a human days of work to come to grips with a codebase an LLM can grok in two seconds.
The cost of working with code is much lower with LLMs than with humans and it's falling by an order of magnitude every year.
Horrible code is a constant that will not be fixed by not using inheritance.
> The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves).
Inheritance is most definitely used in many popular C++ libraries, e.g., protobuf::Message [1] (which is the base class of all user message classes and itself has the base class MessageLite) or QWidget [2] (which sits in a large class hierarchy) or tinyxml2::XMLNode [3] (base class of the other node types). These are honestly the first three libraries that I thought of that have a non-trivial collection of classes in them. They're all stateful base classes, by the way, not pure interfaces. And remember, I'm not trying to justify whether these are good or bad designs, just to make the observation that inheritance certainly is well used in practice.
(The fourth library I thought of with a reasonably complex collection of classes is Boost ASIO [4] which actually doesn't use inheritance. Instead it uses common interfaces to allow some compile-time polymorphism. Ironically, this is the only library in the list that I've been so unsatisfied with that I've written my own wrapper more than once for a little part of it: allowing auto-(re)connecting outbound and accepting incoming connections with the same interface. Guess what: I used inheritance!)
[1] https://protobuf.dev/reference/cpp/api-docs/google.protobuf....
[2] https://doc.qt.io/qt-6/qwidget.html
[3] https://leethomason.github.io/tinyxml2/classtinyxml2_1_1_x_m...
[4] https://www.boost.org/doc/libs/1_88_0/doc/html/boost_asio/re...
>> People created AbstractFactoryFactoryBuilders not because they wanted to,
I don't think this is accurate. People created factories like this because they were limited by interface bounds in the languages they were coding in, and had to swap out behaviour at run or compile time for testing or configuration purposes.
> Your software will still be a mess unless it is small and written three times by the same person who knows what they are doing.
100% this! And I've recently been wondering whether this is the right workflow for AI-assisted development: use vibe-coding to build the one that you plan to throw away [0], use that to validate your assumptions and implement proper end-to-end tests, then recreate it again once or more with AI asked to try different approaches, and then eventually throw these away too and more manually create "the third one".
[0] "In most projects, the first system built is barely usable....Hence plan to throw one away; you will, anyhow." Fred Brooks, The Mythical Man-Month
The reason people don't have original opinions is because it isn't worth it. The stakes are extremely low. How one chooses to write code is ultimately a matter of personal preference.
The lower the stakes, the more dogmatic people become about their choices, because they know on some level it's a matter of taste and nothing more. Counterintuitively, it becomes even more tied to one's ego than the choices that actually have major consequences.
I believe you just summed up 90% of popular wisdom about software engineering.
With enough patience you will see many fads pass twice, like a tide rising and falling. OOP, runtime typing, schema-less databases and TDD are the first to come to mind.
I feel "self-describing" data formats and everything agile are fading already.
Very few ideas stick, but some do: I do not expect GOTO to ever come back, but who knows where vibe coding will lead us :)
Objects are a pretty good abstraction for when you have data that represents, well, objects. In 3D graphics it's a very useful abstraction. Significantly less good when you're trying to model process, pipeline, or flow IMHO (I know there are some people who swear by them for anything they would bash together with UML first, and I just... Don't see it. I've used more than enough object-oriented flowchart-description languages to fundamentally disagree; charts are two-dimensional, text-represented code is one-dimensional, making the code "objects" doesn't fix that problem).
(Probably also worth noting that high performance 3D graphics torture the object abstraction past recognizability, because maintaining those runtime abstractions costs resources that could be better spent slamming pixels into a screen).
> The bottom line is, no one ever really used inheritance that much anyway
That's just false. Before Java abstract factory era there was already a culture of creating deep inheritance hierarchies in C++ code. Interfaces and design patterns (including factories) were adopted as a solution to that mess and as bad as they were - they were still an improvement.
I've written a bunch of code in languages without inheritance per se—OCaml, Haskell, Rust—and things have been more than fine. Hell, I barely use any sort of subtyping! I definitely miss structural subtyping in Haskell and Rust on occasion, but even in those situations the code has never reduced to a thrice-written mess.
I've also written some code that's gotten a lot of mileage out of inheritance, including multiple inheritance. Some of my Python abstractions would not have worked anywhere near as well as they did without it. But even then, I could build APIs at least as usable in languages without inheritance, as long as those languages had sufficient facilities for abstraction of their own. (Which OCaml, Haskell and Rust absolutely do!)
The problem is every library and framework uses a ton of class hierarchies with big inheritance trees.
> Your software will still be a mess
Your software will still be a mess but a mess you can work with. Not a horror beyond comprehension. We should aim for workable mess.
This is from experience working with both procedural/functional mess and OO mess.
I've long been searching for a concise example of "good" inheritance, can you recommend one?
> Weird edge cases like HTTP over non-TCP protocols or rendering without screens start throwing spanners into a tree of assumptions that never needed to be made
yes, but that's true of other abstractions too. Whether you use inheritance or not, you usually don't know what abstractions you need until you need them: even if you were using composability rather than inheritance, chances are that you'd have encoded assumptions that HTTP goes over TCP until you need to handle the fact that actually you need higher-level abstractions there.
If you don't use inheritance, you switch to an interface (or a different interface) in your composition. If you did use inheritance, you stop doing so and start using composition. The latter is probably somewhat more work, but I don't think it's fundamentally very different.
I'm on the fence about inheritance myself; I often regret having used it, and I never regret having not used it. On the other hand, it's awfully expedient. I designed and implemented a programming language called Bicicleta whose only argument-passing mechanism is inheritance, and I'm not sure that was a bad idea.
The object-oriented part of OCaml, by the way, has inheritance that's entirely orthogonal to interfaces, which in OCaml are static types. Languages like Smalltalk and, for the most part, Python don't have interfaces at all.
1992 "Interfaces and Specifications for the Smalltalk Collection Classes"
https://dl.acm.org/doi/pdf/10.1145/141936.141938
Very interesting work! It is an attempt to extract the interfaces that were in the minds of the implementors of the Smalltalk-80 system's collection classes, but which couldn't be expressed in the language itself, because it has no interface construct. That's what I meant by "Languages like Smalltalk (...) don't have interfaces at all."
Don't have manifest types and don't have manifest interfaces.
Someone has already referenced "Adding Dynamic Interfaces to Smalltalk" [0] and looking back there doesn't seem to be any kind of demonstration that use of interfaces makes software faster to develop or less error prone or... [1]
Python has Protocols. They work like Go interfaces
Sure, but for the most part people don't use them, because you don't have to; Python method calls are always potentially polymorphic, unlike Golang method calls.
Rigid, "family tree"-style inheritance as in classical OOP is pretty much garbage. "A cow is a mammal is an animal" is largely useless for the day to day work we do except in extremely well-planned, large and elaborate ontologies -- something you typically only see in highly structured software like windowing systems. It just isn't useful for the majority of our work.
"Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules [1]. You can bolt behaviors and their types onto structures at will. This is how OO should be.
I put together a visual argument on another thread on HN a few weeks ago:
https://imgur.com/a/class-inheritance-vs-traits-oop-isnt-bad...
[1] Though if you want rules on bounds and associated types, you can have them.
> "Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules.
Yes-and-no.
Interfaces still participate in inheritance hierarchies (`interface Bar extends Foo`), and that's in a way that prohibits removing/subtracting type members (so interfaces are not in any way a substitute for mixins). Composition (of interfaces) can be used instead of `extends`, but then you lose guarantees of reference-identity - oh, and only reference-types can implement interfaces which makes interfaces impractical for scalars and unusable in a zero-heap-alloc program.
Interface-types can only expose virtual members: no public fields - which seems silly to me because a vtable-like mechanism could be used to allow raw pointer access to fields via interfaces, but I digress: so many of these limitations (or unneeded functionality) are consequences of the JVM/CLR's design decisions which won't change in my lifetime.
Rust-style traits are an overall improvement, yes - but (as far as my limited Rust experience tells me) there's no succinct way to tell the compiler to delegate the implementation of a trait to some composed type: I found myself needing to write an unexpectedly large amount of forwarding methods by hand (so I hope that Rust is better than this and that I was just doing Rust the-completely-wrong-way).
Also, oblig: https://boxbase.org/entries/2020/aug/3/case-against-oop/
How are interfaces with ability to provide default implementations for members (which both C# and Java allow today) not a substitute for mixins?
"Only reference types can implement interfaces" is simply not true in C#. Not only can structs implement them, but they can also be used through the interface without boxing (via generics).
Rust actually allows one to express "family tree" object inheritance quite cleanly via the generic typestate pattern. It isn't "garbage", it totally has its uses. It is however quite antithetical to modularity: the "inheritance hierarchy" can only really be understood as a unit, and "extensibility" for such a hierarchy is not really well defined. Hence why in practice it mostly gets used in cases where the improved static checking made possible by the "typestate" pattern can be helpful, which has remarkably little to do with "OOP" design as generally understood.
> And inheritance naturally suggests grouping interfaces into a tree in the way that seems of little value because in practice a tree probably doesn't represent the fundamental truth of things.
"This doesn't represent the fundamental truth" does not imply "this has little value". Your navigation software likely doesn't account for cars passing each other on the road either -- or probably red lights for that matter -- and yet it's still pretty damn useful. The sweet spot is problem- and model-dependent.
I'm not sold on the evidence of much in the way of programming language features from the "object oriented" era.
They were pushed by cultish types with little evidence. There was this assertion that all these things were wonderful and would reduce effort and therefore they must be good and we all must use them. We got object oriented everything including object oriented CPUs, object oriented relational databases, object oriented "xtUML". If you weren't object oriented you were a pile of garbage in those days.
For all that, I don't know if there was ever any good evidence at all that any of it worked. It was like the entire industry all fell for snakeoil salesmen and are collectively too embarrassed about it to have much introspection or talk about it. Not that it was the last time the industry has fallen for snakeoil...
If encapsulation wasn't useful, we wouldn't write microservices.
If abstraction wasn't useful, we wouldn't use containers.
That's not evidence though even if we take it as true. You can of course make layers of abstraction or encapsulation without "object oriented" languages.
Inheritance was oversold, but it can help remove a lot of boilerplate code. Early Windows notoriously required hundreds of lines of code for a hello-world program. Setting your own defaults and getting on with your day is great for dealing with a less refined API, etc.
Complex inheritance trees can make sense in niche applications for similar reasons.
But they're not building trees, that's how inheritance is mostly used today.
After reading this, I'm thinking that intrusive lists is the one use of inheritance in C++ that makes any sense.
I'd still generally prefer intrusive lists to be done via composition. I've seen plenty of intrusive lists where each item was a member of multiple lists at the same time - which is quite hard to do if you need to inherit from an intrusive list element superclass.
Me too, but how do you pull that off in C++? How do you get back from the node to the containing value?
Multiple inheritance. It's possible, but you'd have to jump through some hoops to disambiguate, since you're dealing with multiple copies of the same base class.
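Alternatively, without multiple inheritance: embed the nodes as members and recover the containing value with the usual container_of pointer arithmetic. A hedged sketch (all names illustrative), with an item that sits on two lists at once:

```cpp
#include <cassert>
#include <cstddef>

// Each list links embedded ListNode members, so the item never
// inherits from a node base class.
struct ListNode {
    ListNode* prev = nullptr;
    ListNode* next = nullptr;
};

struct Item {
    int value = 0;
    ListNode by_name;  // link for one list
    ListNode by_age;   // link for another list
};

// container_of idiom: subtract the member's offset to get back from
// a node pointer to the Item that contains it. offsetof is
// well-defined here because Item is standard-layout.
inline Item* item_from_by_name(ListNode* node) {
    return reinterpret_cast<Item*>(
        reinterpret_cast<char*>(node) - offsetof(Item, by_name));
}

inline Item* item_from_by_age(ListNode* node) {
    return reinterpret_cast<Item*>(
        reinterpret_cast<char*>(node) - offsetof(Item, by_age));
}
```

Each Item can then be unlinked from one list without touching the other, which is awkward when the node is a base class.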
So... what even is the purpose of computer language abstractions?
To provide building blocks useful for the construction of programs.
There's a number of properties that are good for such building blocks... composability, flexibility, simplicity, comprehensibility, etc.
Naturally, these properties can conflict, so the goal would be to provide a minimal set of interoperable building blocks providing good coverage of the desirable properties, allowing the developer to choose the appropriate one for a given circumstance and to change when needed. E.g., they could choose a simple but less flexible block in one situation, or a more complicated or less performant block in another.
IMO, inheritance is a decent building block -- simple and easy to understand, though with somewhat limited applicability.
We can imagine improvements (particularly to implementation) but I think it got a bad rep mostly due to people not understanding its uses and limitations.
...I've got to say, though, if you aren't figuring out how to use the simple and easy tools, you're really not going to do better with more complicated and capable tools. People hate to admit it, but the best of us are still highly confused monkeys haphazardly banging away at keyboards, barely able to hold a few concepts in our heads at one time. Simple is good for us.
It is a good idea because it's the most fundamental idea.
You have two objects. A and B. How do you merge the two objects? A + B?
The most straightforward way is inheritance. The idea is fundamental.
The reason why it's not practical has more to do with human nature and the limitations of our capabilities in handling complexity than it has to do with the concept of inheritance itself.
Literally think about it. How else do you merge two structs if not using inheritance?
The idea that inheritance is not fundamental and is wrong in nature is in itself mistaken.
I found myself in a situation where I had to reinvent data structures from scratch in an assembly-like language: "properties" or "fields" are just offsets relative to a pointer (runtime) or an address (compile-time).
The need to extend a data structure to add more fields comes almost immediately. Think: something like the C "hack" of embedding the "base" structure as the first field of the "derived" structure:
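The snippet this refers to was presumably along these lines (a reconstruction; the field names other than base and extra_data are made up):

```cpp
#include <cassert>
#include <cstddef>

typedef struct {
    int id;            // hypothetical "base" field
} base_t;

typedef struct {
    base_t base;       // first member: guaranteed to sit at offset 0
    int extra_data;    // the "derived" part
} derived_t;
```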
Then you can pass a derived_t instead of a base_t, with some type casting and caveats. This is "legal" in C because the standard guarantees that base has offset 0. Of course our "extra_data" could be a structure, but although it would look like "A+B" it is actually a concatenation.
> How else do you merge two structs if not using inheritance?
By merging them. Structs are product types. If you merge them, you get a bigger product type. You don't need inheritance (ADTs) for that.
The more useful point of inheritance is having shared commonality. But modern languages make it convenient to express that without using ADTs/inheritance.
TypeScript is fully structurally typed. If you combine a Foo and a Bar it is something new, but keeps being both a Foo and a Bar as well.
Go is structurally typed to a relatively high degree as well. You can embed types (including structs) into structs and only care about the individual parts in your functions. And you have composable and implicit interfaces.
Clojure has protocols and generally only cares about the things you use or define to use in functions. It allows you to do hierarchical keyword ontologies if you want, but I see it rarely used.
These languages and many others favor two fundamental building blocks: composition and signatures. The latter being either about data fields or function signatures. The neat part is these aren't entangled: You can use and talk about them separately.
How fundamental is inheritance if it can be fully replaced by simpler building blocks?
>By merging them. Structs are product types. If you merge them, you get a bigger product type. You don't need inheritance (ADTs) for that.
Merging structs and inheritance are fundamentally the same thing.
>How fundamental is inheritance if it can be fully replaced by simpler building blocks?
It can't be replaced. Combining Foo and Bar in the way you're thinking involves additional primitives and concepts like nesting. If Foo and Bar share the same property, the most straightforward way of handling it is overriding one property with the other. Overriding IS inheritance.
We aren't dealing with product types in the purest form either. These product types have named properties and you need additional rules to handle conflicting names.
In fact once you have named properties the resulting algebra from multiplying structs is not consistent with the concept of multiplication whether you use inheritance or "object composition"
> Literally think about it. How else do you merge two structs if not using inheritance?
What? Using multiple inheritance? That's one of the worst ideas I've ever seen in all of computer science. You can't just glue two arbitrary classes together and expect their invariants to somehow hold true. Even if they do, what happens when both classes implement a method or field with the same name? Bugs. You get bugs.
I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
The way to merge two structs is via composition:
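The elided code was presumably something like this (a reconstruction; the foo/bar bodies are placeholders):

```cpp
#include <cassert>

// Merging A and B by composition: C contains both, inherits neither.
struct A {
    int foo() const { return 1; }  // placeholder behaviour
};

struct B {
    int bar() const { return 2; }  // placeholder behaviour
};

struct C {
    A a;
    B b;
    // Either forward explicitly...
    int foo() const { return a.foo(); }
    // ...or leave a/b public and let callers write c.a.foo().
};
```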
If you want to expose methods from A or B, either wrap the methods or make the a or b fields public / protected and let callers call c.a.foo(). Don't take my word for it; here's Google's C++ style guide [1]:
> Composition is often more appropriate than inheritance.
> Multiple inheritance is especially problematic, because it often imposes a higher performance overhead (in fact, the performance drop from single inheritance to multiple inheritance can often be greater than the performance drop from ordinary to virtual dispatch), and because it risks leading to "diamond" inheritance patterns, which are prone to ambiguity, confusion, and outright bugs.
> Multiple inheritance is permitted, but multiple implementation inheritance is strongly discouraged.
[1] https://google.github.io/styleguide/cppguide.html#Inheritanc...
> Even if they do, what happens when both classes implement a method or field with the same name?
It's done in Java with interfaces with default implementations, and the world hasn't imploded. It just doesn't seem like that big of a problem.
I guess the point was never how to do things properly, but:
"how to join two structs with the least amount of work and thinking so my manager can tick off a box in Excel"
In such cases, inheritance is a nice temporary crutch.
>What? Using multiple inheritance?
You just threw this in out of nowhere. I didn't mention anything about "multiple" inheritance. Just inheritance, by which people usually mean single inheritance.
That being said, multiple inheritance is equivalent to single inheritance of 3 objects. The only problem is that because two objects are on the same level, it's hard to know which property overrides which. With a single chain of inheritance the parent always overrides the child. But with two parents, we don't know which parent overrides which parent. That's it. But assume there are 3 objects with distinct properties: inheriting from two of them at once would be equivalent to inheriting from them in a single chain. They are isomorphic. Merging distinct objects with distinct properties is commutative, which makes inheritance of distinct objects commutative.
> I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
Don't ever tell me that programming for 30 years is a reason for being correct. It's not. In fact you can be doing it for 30 years and be completely and utterly wrong. Then the 30 years of experience is more of a marker of your intelligence.
The point is YOU are NOT understanding WHAT I am saying. Read what I wrote. The problem with inheritance has to do with human capability. We can't handle the complexity that arises from using it extensively.
But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting.
Think about it. You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of minimizing code duplication? Inheritance. That's it.
Say you have two structs. The structs contain redundant properties. HOW do you define one struct in terms of the other? There's no simpler way than inheritance.
>> Composition is often more appropriate than inheritance.
You can use composition but that's literally the same thing, only weirder: instead of identical properties overriding other properties, you duplicate the properties via nesting.
So: inheritance overrides shared properties; composition nests them. That's it. It's just two arbitrary rules for merging data.
If you have been programming for 30 years, you tell me how to fit this requirement with the most minimal code:
given this:
I want to create this: a B with everything A has plus more, but I don't want to rewrite a, b, c, d multiple times. What's the best way to define B while reusing code? Inheritance. Like I said, the problem with inheritance is not the concept itself. It is human nature, our incapability of DEALING with the complexity that arises from it. The issue is that the coupling is too tight, so you make changes in one place and it creates an unexpected change in another place. Our brains cannot handle the complexity. The idea itself is fundamental, not stupid. It's the human brain that is too stupid to handle the emergent complexity.
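The elided snippets were presumably along these lines (a reconstruction; B's extra field e is hypothetical):

```cpp
#include <cassert>

// "Given this":
struct A {
    int a, b, c, d;
};

// "I want to create this" without restating a..d: inheriting pulls
// the four fields in for free.
struct B : A {
    int e;  // hypothetical extra field
};
```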
Also I don't give two flying shits about google style guides after the fiasco with golang error handling. They could've done a better job.
Like any map, the inheritance pattern is bad, except when it works. It’s a strategic capability to be able to guess well which is which in given context.
My first foray into serious programming was by way of Django, which made a choice of representing content structure as classes in the codebase. It underwent the usual evolution of supporting inheritance, then mixins, etc. Today I’d probably have mixed feelings about conflating software architecture with subject domain so blatantly: of course it could never represent the fundamental truth. However, I also know that 1) fundamental truth is not losslessly representable anyway (the map cannot be the territory), 2) the only software that is perfectly isolated from imperfections of real world is software that is useless, and 3) Django was easy to understand, easy to build with, and effectively fit the purpose.
Any map (requirement, spec, abstraction, pattern) is both a blessing that allows software to be useful over longer time, and a curse that leads to its obsolescence. A good one is better at the former than the latter.
> And inheritance naturally suggests grouping interfaces into a tree in the way that seems of little value because in practice a tree probably doesn't represent the fundamental truth of things.
The fundamental truth of things? What are you even talking about? What fundamental truth of things? And what does that have anything to do with building software?
If you pretend/imagine it was intentional, and insightful, you've created a nerd trap for amateur ontologists. Some of which decide to become professional ontologists and sell books on objected oriented design.
Your lovely typo there makes me realize how often I’ve had to deal with objection-oriented programming.
> a tree probably doesn't represent the fundamental truth of things
It does. Trees appear in nature all the time. It's the basis of human society, evolution and many things.
Most of programming moves towards practicality rather than fundamental truth. That's why you get languages like golang which are ugly but practical.
A city is not a tree: https://www.patternlanguage.com/archive/cityisnotatree.html
Even trees are not trees: https://en.wikipedia.org/wiki/Anastomosis
Evolution is most definitely not a tree.
Nature also tends towards practicality, even more so than programming. Trees aren’t a fundamental truth, they’re a made-up oversimplified abstraction.
evolution is a tree. Follow the ancestral lines. Even the term inheritance comes from evolution.
Botanical trees appearing in nature don't make them "the fundamental truth of things". And in what way are trees the basis of human society? That's such a strange claim. Are you talking about family trees? Because they're actually directed acyclic graphs.
Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
fundamental truth of things is probably the wrong word choice.
It's more like there are many fundamental concepts and trees are one such concept. I don't think there is a singular fundamental truth of things.
>Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
I never made this claim though?
>Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
I mentioned this because parent poster is talking about fundamental truths. I'm saying trees are fundamental... But they may not be practical.
"Taxonomies are entirely and completely worthless."
Hard disagree. Knowing that AES and Twofish are block ciphers is useful when dealing with cryptography. Many categories of algorithms and objects are naturally taxonomic.
Even HTML+CSS has (messy) inheritance.
Discussed at the time:
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=26988839 - April 2021 (252 comments)
plus this bit:
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=35261638 - March 2023 (1 comment)
There is this wonderful presentation by Herb Sutter talking about how the C++ concept “class” covers over 20 other abstractions, and that Bjarne’s choice for C++ was the right one since it offers so much flexibility and expressive power in a concise abstraction.
Other languages (just like the article) only saw the downsides of such a generic abstraction, so they added N times more abstractions (split inheritance, interfaces, traits, etc.) and rules for their interactions, which significantly complicated the language with fundamentally no effective gain.
In summary, Herb will always do a better job than me of explaining why the design of C++ classes, even with multiple inheritance, is one of the key factors of C++'s success. With cppfront, he extends this idea with metaclasses to clearly describe intent. I think he is on the right track.
Could you share a link?
I find that structural typing is the most useful thing and unfortunately few languages support it. I'd like a language where:
- If I have a class Foo and interface Bar, I should be easily able to pass a Foo where Bar is required, provided that Foo has all the methods that Bar has (sometimes I don't control Foo and can't add the "implements Bar" in it).
- I can declare "class Foo implements Bar", but that only means "give me a compilation error if Bar has a method that Foo doesn't implement" - it is NOT required in order to be able to pass a Foo object to a method that takes a Bar parameter
- Conversely, I should be able to also declare "interface Foo implementedBy Baz" and get a compilation error if either one of them is modified in a way that makes them incompatible (again - this does not mean that Baz is the _only_ implementor, just that it's one of them)
- Especially with immutable values - the same should apply to data. record A extends B, C only means "please verify that A has all the members that B & C have, and as such whenever a B record is required, I can pass an A instead". I should be able to do the reverse too (record B extendedBy A). Notably, this doesn't mean "silently import members from B, and create a multiple-inheritance-mess like C++ does".
(I do understand that there'd be some performance implications, but especially with a JIT I feel these could be solved; and we live in a world where I think a lot of code cares more about expressiveness/understandability than raw performance)
TypeScript does support all of these - `C implements I` is not necessary but gives compile errors if not fulfilled.
You can use `o satisfies T` wherever you want to ensure that any object/instance o implements T structurally.
To verify a type implements/extends another type from any third-party context (as your third point), you could use `(null! as T1) satisfies T2;`, though usually you'd find a more idiomatic way depending on the context.
Of course it's all type-level - if you are getting untrusted data you'll need a library for verification. And the immutable story in TS (readonly modifier) is not amazing.
The problem with TypeScript is that it is ridiculously verbose. It is everything people used to complain Java was back in the 90s.
The first 3 are provided by Go, basically.
I want the same things you do, and the closest I've found is writing Ruby with RDocs on all the public methods of a class.
We use the word inheritance to refer to two concepts. There is implementation-inheritance. There is type-inheritance. These ideas are easily confused, which should be cause to have distinct words for them. Yet we don't. (Although Java does, effectively)
I think a lot of the arguments against inheritance come from C++'s peculiar implementation of it, which it clearly, ah, inherited from Simula. Slicing, ambiguous diamond inheritance, stuff like that are C++ problems, not inheritance problems. This isn't to say inheritance isn't problematic, but when you're making a properly substitutable sub-type of something, it's hard to beat.
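For readers who haven't hit it, slicing in one screenful (toy types, not from the thread):

```cpp
#include <cassert>
#include <string>

// Copying a Circle into a Shape variable keeps only the Shape part
// and devirtualizes subsequent calls.
struct Shape {
    virtual ~Shape() = default;
    virtual std::string name() const { return "shape"; }
};

struct Circle : Shape {
    std::string name() const override { return "circle"; }
};

inline std::string sliced() {
    Circle c;
    Shape s = c;      // slicing: s is a plain Shape copy of c's base part
    return s.name();  // "shape"
}

inline std::string via_reference() {
    Circle c;
    const Shape& r = c;  // no copy, dynamic dispatch intact
    return r.name();     // "circle"
}
```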
What are examples of better inheritance?
For the specific problems I mentioned like slicing, literally anything else that actually abstracts the memory layout.
For OOP in general, I'd say anything with a metaobject protocol for starters, like Smalltalk, Lisp (via CLOS), Python, Perl (via Moose). All but the first support multiple inheritance, but also have well-defined method resolution orders. Multiple inheritance might still lead frequently to nasty spaghetti code even in those languages, but it will still be predictable.
CLOS and Dylan have multiple dispatch, which is just all kinds of awesome, but alas is destined to remain forever niche.
IMHO inheritance (especially the C++ flavor with its access specifiers and myriad rules) has always scared me. It makes a codebase confusing and hard to reason about. I feel the eschewing of inheritance by languages such as Go and Rust is a step in the right direction.
As an aside, I have noticed that the robotics frameworks (ROS and ROS2) heavily rely on inheritance and some co-dependent C++ features like virtual destructors (to call the derived class's destructor through a base class pointer). I was once invited to an interview for a robotics company due to my "C++ experience" and grilled on this pattern of C++ that I was completely unfamiliar with. I seriously considered removing C++ from my resume that day.
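The pattern in question is small enough to show in full. A minimal illustration of why the base destructor must be virtual when deleting through a base pointer (toy types, not from ROS):

```cpp
#include <cassert>

// Without "virtual" on ~Base, `delete p` below would be undefined
// behaviour and ~Derived might never run.
struct Base {
    virtual ~Base() = default;
};

struct Derived : Base {
    bool* destroyed;
    explicit Derived(bool* flag) : destroyed(flag) {}
    ~Derived() override { *destroyed = true; }
};

inline bool derived_destructor_runs() {
    bool flag = false;
    Base* p = new Derived(&flag);
    delete p;  // dispatches to ~Derived via the vtable
    return flag;
}
```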
To me, inheritance makes sense if you view your codebase as actual "Objects".
The reality is that a codebase is not that simple. Many things you create are not representable as real-world "objects" - to me, this is where it gets confusing to follow, especially when the code gets bigger.
I remember those OOP books (I cannot comment on modern OOP books) where the first few chapters would use Shapes as an example, where a Circle, Square, Triangle, etc. would inherit from the Shape object. Sure, in simple examples like this it makes sense.
I remember covering inheritence and how to tell if its better or composition... which is the "Object IS X" or "Object HAS X" - so you base you're heirarchy around that mindset.
- "A Chair is Furniture" (Chair inherits Furniture) - "A Chair has Legs" (Chair has array of Leg)
I will always remember my first job - creating shop floor diagrams where you get to select a Shelf or Rack and see the visual representation of goods, etc. My early codebase was OOP... a Product, Merchandise, Shelf, Bay, Pegboard, etc. Each object inherited something in one way or another. Keeping on top of it eventually became a pain. I think there was, overall, about 5 levels of inheritance.
I reviewed my codebase one day and decided to screw it -- I would experiment with other approaches. I ended up creating simple classes with no inheritance. Each class was isolated from the others, with the exception of a special Id which represented "something" like a Pin, or Shelf, etc. Now my code was flexible... "A Shelf has this and this"
In later years I realised what I did was following along the lines of what is commonly known as ECS, or Entity-Component-System. It seems popular in games (and I viewed that project in a game-like fashion, so it makes sense)
I’m not on the cutting edge of gamedev, but I still believe that ECS is a solid pattern with lots of use cases.
Sounds like relational databases: Entities are IDs. Components are tables with an ID column.
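The analogy can be made concrete with a toy sketch (hypothetical component names; real ECS libraries use dense arrays rather than hash maps for cache friendliness):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

using Entity = std::uint32_t;  // an entity is nothing but an ID

// Each component type is a "table" whose key column is the entity ID.
struct Position { float x = 0, y = 0; };
struct Label    { std::string text; };

struct World {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Label>    labels;
    Entity next_id = 0;
    Entity spawn() { return next_id++; }
};

// A "system" is a plain function that scans one table (or joins several).
inline void move_right(World& w, float dx) {
    for (auto& [id, p] : w.positions) p.x += dx;
}
```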
To be fair, deleting a derived object through a base class pointer is pretty basic C++. Slicing and virtual destructors are usually the first couple of things you learn about after virtual methods and copy constructors/assignment.
Quite a few sections of C++ can be classified as "pretty basic C++". None of the rules are complicated in isolation, but that doesn't necessarily make the whole easy to reason about.
Huh, I was always told that inheritance hurt performance as it requires additional address lookups. That's why many game engines are moving away from it.
I guess it could simplify the GC but modern garbage collectors have come a long way.
As I understand it, back when Simula and LISP were invented it was generally the case that loads and stores took 1 cycle and there were no CPU caches. These pointer-chasing languages and techniques really weren't technically bad for the computers of the time - it's just that we have a larger relative penalty for randomly accessing our Random Access Memory these days, so locality is important (hence data-oriented design, ECS, etc).
I am kind of amused they _removed_ first-class functions though!
Function arguments weren't actually first-class to begin with. In Algol 60 (of which Simula started as a superset), you could pass functions as arguments to other functions, but that's it - it wasn't a proper type, so you couldn't return it, shove it into a variable, have an array of functions, etc. Basically, it had just enough restrictions that you would never end up in a situation where you could possibly call a function for which the corresponding activation frame (i.e. locals) could be gone. But when Simula added classes and objects, now you could suddenly capture arguments in a way that allows them to outlive the callee.
I was in the game industry when we originally transitioned from C to C++, and here's my recollection of the conversations at the time, more or less.
In C++, inheritance of data is efficient because the memory layout of base class members stays the same in different derived classes, so fields don't cost any more to access.
And construction is (relatively fast, compared to alternatives) because setting a single vtable pointer is faster than filling in a bunch of variable fields.
And non-virtual functions were fast because, again, static memory layouts and access and inlining.
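A small illustration of the layout point (hypothetical types, not from any actual engine): under single inheritance the base subobject sits at the start of the derived object, so converting a derived pointer to a base pointer is free, and base fields keep the same fixed offsets in every derived class:

```cpp
struct Base {            // plain data, no vtable
    int id;
    int flags;
};

struct Derived : Base {  // the Base subobject is laid out first
    int extra;
};

// Non-virtual access: fixed offset from the pointer, trivially inlinable.
inline int read_id(const Base* b) { return b->id; }
```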
Virtual functions were a bit slower, but ultimately that just raised the larger question of when and where a codebase was using function pointers more broadly - virtual functions were just one way of corralling that issue.
And the fact that there were idiomatic ways to use classes in C++ without dynamically allocating memory was crucial to selling game developers on the idea, too.
So at least from my time when this was happening, the general sense was that, of all the ways OO could be implemented, C++ style OO seemed to be by far the most performant, for the concerns of game developers in the late 90's / early 2000's.
I've been out of the industry for a while, so I haven't followed the subsequent conversations too closely. But I do think, even when I was there, the actual reality of OO class hierarchies was starting to rear its ugly head. Giant base classes are indeed drastically bad for caches, for example, because they do tend to produce giant, bloated data structures. And deep class hierarchies turn out to be highly sub-optimal, in a lot of cases, for information hiding and evolving code bases (especially for game code, which was one of my specialties). As a practical matter, as you evolve code, you don't get the benefits of information hiding that were advertised on the tin (hence the current boosting of composition over inheritance). I think you can find better, smarter discussions of those issues in this thread, so I won't cover them.
But that was a snapshot of those early experiences - the specific ways C++ implemented inheritance for performance reasons were definitely, originally, much of the draw to game programmers.
No, inheritance does not require additional address lookups. Single inheritance as discussed here doesn't even require additional address arithmetic; the address of the subclass instance is the same as the address of the superclass instance.
Yes, current GCs are very fast and do not suffer from the problems Simula's GC suffered from. Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B. Allocation may not be any faster, because in either case the compiler can bump the nursery pointer just once (with a copying collector). Deallocation is maybe slightly faster, because with a copying collector, deallocation cost is sort of proportional to how much space you allocate, and the total size of record B is smaller with record A embedded in it than the total size of record A plus record B with a pointer linking them. (That's one pointer bigger.) But tracing gets much faster when there are no pointers to trace.
You will also notice from this example that it's failing to embed the superclass (or whatever) that requires an additional record lookup. And probably a cache miss, too.
I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general, and more generally the Lisp model of memory as a directed graph of objects linked by pointers, because although inheritance reduces the number of cache misses in OO code, it doesn't reduce them enough.
I've written about this at greater length in http://canonical.org/~kragen/memory-models/, but I never really finished that essay.
> No, inheritance does not require additional address lookups. Single inheritance as discussed here doesn't even require additional address arithmetic; the address of the subclass instance is the same as the address of the superclass instance.
Yes it does! Inheritance itself is fine, but inheritance almost always means virtual functions - which can have a significant performance cost because of vtable lookups. Using virtual functions also prevents inlining - which can have a big performance cost in critical code.
> Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B.
Huh? No - if you put A and B in separate allocations, you get worse performance. Both because of pointer chasing (which matters a great deal for performance). And also because you're putting more pressure on the allocator / garbage collector. The best way to combine A and B is via simple composition:
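Something along these lines (a sketch with hypothetical fields; a reply below refers to it as `struct C`):

```cpp
struct A { struct A *next; };  // e.g. an intrusive list node
struct B { int cargo; };       // e.g. the actual payload

// Composition: A is embedded by value, so there is one allocation
// and no pointer to chase between the two.
struct C {
    struct A a;  // first field, so &c and &c.a are the same address
    struct B b;
};
```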
In this case, there's a single allocation. (At least in languages with value types - like C, C++, C#, Rust, Swift, Zig, etc). In C++, the bytes in memory are actually identical to the case where B inherits from A. But you don't get any class entanglement, or any of the bugs that come along with that.
> I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general
Games are moving away from OO because C++ style OO is a fundamentally bad way to structure software. Even if it wasn't, struct-of-arrays usually performs better than arrays-of-structs because of how caching works. And modern ECS (entity component systems) can take good advantage of SoA style memory layouts.
The performance gap between CPU cache and memory speed has been steadily growing over the last few decades. This means, relatively speaking, pointers are getting slower and big arrays are getting faster on modern computers.
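To make the SoA/AoS contrast concrete (hypothetical particle fields): a loop that touches only one field streams through contiguous memory in the SoA layout, but drags every other field through the cache in the AoS layout:

```cpp
#include <vector>

// Array-of-structs: each element carries all four fields, so a loop
// that only updates x still pulls y, z, and mass into cache.
struct ParticleAoS { float x, y, z, mass; };

// Struct-of-arrays: each field is stored contiguously on its own.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

inline void step_aos(std::vector<ParticleAoS>& ps, float dx) {
    for (auto& p : ps) p.x += dx;
}

inline void step_soa(ParticlesSoA& ps, float dx) {
    for (auto& x : ps.x) x += dx;
}
```

Both functions compute the same thing; the difference is purely which bytes the x-loop has to pull through the cache.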
I agree with what you say about ECS and the memory hierarchy. But not much else.
> inheritance almost always means virtual functions
Inheritance and "virtual functions" (dynamic method dispatch) are almost, but not completely, unrelated. You can easily have either one without the other. Golang and Lua have dynamic method dispatch without inheritance; C++ bends over backwards so that you can use all the inheritance you want without incurring any of the costs of dynamic method dispatch, as long as you don't declare anything virtual. This is actually a practical thing to do with modern C++ with templates and type inference.
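For instance, the curiously recurring template pattern (CRTP) is one common way in modern C++ to keep an inheritance-shaped interface while every call resolves statically; a minimal sketch:

```cpp
// CRTP: the base class is parameterized on the derived type, so the
// "overridden" call is resolved at compile time - no vtable, and the
// compiler is free to inline it.
template <typename Derived>
struct Shape {
    double area() const {
        return static_cast<const Derived*>(this)->area_impl();
    }
};

struct Square : Shape<Square> {
    double side = 0;
    double area_impl() const { return side * side; }
};
```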
> No - if you put A and B in separate allocations, you get worse performance
Yes, that's what I was saying.
> you're putting more pressure on the allocator / garbage collector
Yes, I explained how that happens in greater detail in the comment you were replying to.
With your struct C, it's somewhat difficult to solve the problem catern was saying Simula invented inheritance to solve; if A is "list node" and B is "truck", when you navigate to a list node p of type A*, to get the truck, you have to do something like &((struct C *)p)->b, relying on the fact that the struct's first field address is the same as the struct's address and on the fact that the A is the first field. While this is certainly a workable thing to do, I don't think we can recommend it without reservation on the basis that "you don't get any class entanglement, or any of the bugs"! It's very error-prone.
> Games are moving away from OO because C++ style OO
There are a lot of things to criticize about C++, but I think one of its worst effects is that it has tricked people into thinking that C++ is OO. "C++ style OO" is a contradiction in terms. I mean, it's possible to do OO in C++, but the language fights you viciously every step of the way; the moment you make a concession to C++ style, OO collapses.
Simple inheritance makes the class hierarchy complicated through issues like the diamond inheritance problem, which C++ resolves in typical C++ fashion: attempt to satisfy everybody, actually satisfy nobody.
The designers of StarCraft ran into the pitfalls of designing a sensible inheritance hierarchy, as described here (C-f "Game engine architecture"): https://www.codeofhonor.com/blog/tough-times-on-the-road-to-...
Simple inheritance doesn't have the diamond problem, because that requires multiple inheritance, which isn't simple. Smalltalk doesn't have multiple inheritance; I don't think SIMULA did either.
Simula is strictly single inheritance (and no interfaces).
The best implementation inheritance hierarchy is none :)
If you must, you can use the implementation inheritance for mix-ins / cross-cutting concerns that are the same for all parties involved, e.g. access control. But even that may be better done with composition, especially when you have an injection framework that wires up certain constructor parameters for you.
Where inheritance (extension) properly belongs is the definition of interfaces.
really amazing read thank you.
Dynamic invocation, not strict inheritance, is the issue here. Simply getting functions and fields from a superclass costs nothing if at each call site the compiler knows enough to say where it comes from.
But this may only happen when no virtual / overridden methods are involved, no VMT to look up in, no polymorphism at play. This is tantamount to composition, which should be preferred over inheritance anyway.
In this regard, Go and Rust do classes / objects right, Java provides the classical pitfalls, and C++ is the territory where unspeakable horrors can be freely implemented, as usual.
Overriding is fine. The issue comes with polymorphism and would even without inheritance per se, as can be seen in Go where interfaces provide polymorphism without inheritance.
Parent is correct - if the compiler has the information to devirtualize it becomes direct dispatch regardless of the mechanisms involved at the source level. This is also typically true for JITs.
Nah. Classic C++/Java style inheritance with vtable dispatch is very fast. Generally no slower than a C-style function call, and actually sometimes faster depending on how the C code is linked, characteristics of the CPU, etc.
This assumes that the vtables stay in at least L2 cache, which may be a correct assumption for the few hot-path classes. In this regard, I remember how Facebook's android app once failed to build when the codebase exceeded the limit of 64k classes.
No, Java does class hierarchy analysis and has multiple ways to avoid v-table calls.
Single-implementation sites (no class found overriding a method) are static and can be inlined directly. Sites with two receiver classes use a class check (a simple equality test), can be inlined, no v-table. Sites with 3-5 receiver classes use inline caches (the compiler records which classes have been seen), which work similarly and can sometimes be inlined, usually plus a guard check.
Only highly polymorphic calls use the v-table, and in practice that's a very rare occasion, even with Java totally embracing inheritance (and polymorphic interfaces).
Note: CHA is dynamic and happens at runtime, depending on which classes have been loaded. Loading new classes causes CHA to be performed again, and any affected sites are deoptimized (and re-JITted).
If the dispatches do use a vtable, they won't be inlined and won't be faster. The real deal is inlining where necessary, which inheritance doesn't really prevent.
What game engines are moving away from inheritance? Composition is preferred, tho. It's just easier to refactor things that way.
Fascinating.
Interestingly enough, my first non-class-related experience with "intrusive lists" was in C, and we implemented it via macros; you'd add a LINKED_LIST macro in the body of a struct definition, and it would unspool into the pointer declarations. Then the list-manipulation functions were also macros so they would unspool at compile time into C code that was type-aware enough to know where the pointers lived in that individual struct.
Of course, this meant incurring the cost of a new definition of function families for each intrusive-list structure, but this was in the context of bashing together a demo kernel for a class, so we assumed modern PCs that have more memory than sense. The bigger problem was that C macros are little bastards to debug and maintain (especially a macro'd function... so much escaping).
C++, of course, ameliorates almost all those problems. And replaces them with other problems. ;)
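A rough sketch of the macro scheme described above (the macro names here are made up; the original project's macros aren't shown): one macro unspools into the link pointer inside the struct body, another generates the type-aware list functions, one function family per intrusive-list structure:

```cpp
// Unspools into the intrusive link pointer inside a struct definition.
#define LINKED_LIST(type) type *next

// Generates a type-aware push function for one struct type - the cost
// is a new definition of the function family per structure.
#define DEFINE_LIST_PUSH(type)                          \
    static void type##_push(type **head, type *node) {  \
        node->next = *head;                             \
        *head = node;                                   \
    }

struct Task {
    int id;
    LINKED_LIST(Task);  // expands to: Task *next
};

DEFINE_LIST_PUSH(Task)  // expands to: static void Task_push(...)
```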
Favorite part of this is that I had no idea if this article was going to be about biology, code or money. I love a good surprise
I literally read it as money first, then code, but clicked thinking it may be biology too
My first assumption was something like ACLs (though this could be for something like NTFS, or directory permissions) or even firewall rules, but I guess we all bring our backgrounds to our assumptions
All of what we take for granted in modern computing architecture was invented as a performance hack by von Neumann in 1945 to take advantage of then-novel vacuum tube tech.
Before my time but I reject the idea that inheritance was "invented" in the context of a particular high level programming language. It was used widely in assembly/machine code programming. It's essentially a manifestation of two things: categorization of things (at least thousands of years old); and pointers to data structures (at least as old as the Manchester Mk1).
What do folks think of the OCaml/SML style approach with its signatures+modules+functors? It's a bit obscure, and some people find it inconvenient. Inheritance in their approach can be approximated using functors.
I always thought this was common knowledge. I guess it isn’t.
The only reason inheritance continues to be around is social convention. It’s how programmers are taught to program in school and there is an entire generation of people who cannot imagine programming without it.
Aside from common social practice inheritance is now largely a net negative that has long outlived its usefulness. Yes, I understand people will always argue that without their favorite abstraction everything will be a mess, but we shouldn’t let the most ignorant among us baselessly dictate our success criteria only to satisfy their own inability to exercise a tiny level of organizational capacity.
It's not just school. There are a lot of pieces of literature, tutorials/guides, discussions, and papers I came across over the years that tell you something very useful _plus_ wrap everything into OO (or sometimes FP) noise and treat this part as just as important. Often there are vague rationales sprinkled in without much backing.
So you get interfaces that are much bigger than they need to be, visitor pattern this, manager that. As someone who isn't used to OO, it is sometimes difficult or cumbersome to distill these kinds of examples and explanations into their essence.
I also noticed that AI assistants often want to blow up every interface with a whole bunch of useless stuff like getter/setter style functions and the like. That's obviously not the fault of these assistants, but I think it's something to consider.
Or because there are some situations where inheritance is useful. There was a reason Simula, Smalltalk, C++, Common Lisp (CLOS), Java, OCaml, Ruby, etc. implemented OOP. That's a lot of different languages. The program designers found it to be a useful abstraction and so did the language users.
There's no reason to be dogmatic about programming abstractions. Just because OOP became dogma for a while and got abused doesn't mean we have to be dogmatic entirely in the opposite direction. Abstractions have their use for those programming languages that choose to implement them.
> There's no reason to be dogmatic
I absolutely disagree. Some things in programming exist to bring products to market, but many things in programming only exist to bring programmers to market. That is a terrible and striking difference that results ultimately from an absence of ethics. Actions/decisions that exist only to discard ethical considerations serve only two objectives: 1) normalization of lower competence, 2) narcissism. It does not matter which of those two objectives are served, because the conclusions are the same either way.
I know we're on Hacker News, but I would have preferred a more explicit title (programming language, not wealth across generations).
The first six words of the actual article make it clear what it's about:
> Inheritance was invented by the Simula language
Title of the Hacker News submission, not title of the article itself.
Performance is not a hack. Title is wrong... ;-)
Interfaces are indeed much nicer, but you have to make sure that your program language doesn't introduce additional overhead.
Don't be the guy that makes Abstract Factory Factories the default way to call methods. Be aware that there are a lot of people out there that would love to ask a web-server for instructions each time they want to call a method. Always remember that the IT-Crowd isn't sane.
This title is so wild when you read it without the context of software development...
True, before opening it I thought it was about actual transfer of wealth from parents to children. Which also seems like a big performance hack.
The "invented" part was suspicious though.
I also read it in a different context, an interesting thought
I think many developers, especially in the range of 1999-2020, have gone through many pitfalls in programming. More specifically.. OOP.
As someone who was blessed/lucky to learn C and Pascal.. with some VB6.. I understood how to write clean code with simple structs and functions. By the time I was old enough to get a job, I realised most (if not all) job adverts required OOP, Design Patterns, etc. I remember getting my first Java book. About 1,000 pages, half of which was about OOP (not Java directly)
I remember my first job. Keeping my mouth shut and respecting the older, more experienced developers. I would write code the way I believed was correct -- proper OOP. Doing what the books tell me. Doing what is "cool" and "popular" in modern programming. Hiding the data you should not see, and wrapping what you should in methods... all that.
Nobody came to me and offered guidance, but I learned that some of my older codebase with inheritance, overrides.. while it was "proper" code, would end up a jumbled mess when it required new features. One class that was correctly set up one day needed to be moved about, affecting the class hierarchy of others. It brings me back to my earlier programming days with C -- having things in simple structs and functions is better.
I do not hate on OOP. After all, in my workplace, I am using C# or Python - and make use of classes and, at times, some inheritance here and there. The difference is not to go all religious in OOP land. I use things sparingly.
At work, I use what the company has already laid out. Typically languages that are OOP, with a GC, etc. I have no problem with that. At home or on personal projects, I lean more towards C or Odin these days. I use Scheme from time to time. I would jump at the opportunity to use Odin in the workplace, but I am surrounded by developers who don't share my mindset and stick to what they are familiar with.
Overall, his conclusion matches my own: "Personally, for code reuse and extensibility, I prefer composition and modules."
I learned about OOP from a Turbo Pascal v5.5 book circa 1993. Drawing triangles, squares, circles, all the good stuff. Turbo Vision library was a powerful demonstration of the power of OOP which made MSFT MFC look like a mess in comparison.