The author's assertion is true - complexity has to live somewhere. The nuance, though, is that not all places complexity can live are created equal.
Let's take the example of memory management: by pushing that complexity into the type system, Rust forces the programmer to deal with it and design around it. At the expense of some performance, we could instead push this complexity into a runtime garbage collection system. Since the runtime system knows things about the program's actual behavior that can't be proven via static analysis, it can also handle more cases without the programmer having to intervene, thus reducing the difficulty of the programming language. For most programmers this is a positive tradeoff (since most programmers are not writing software where every microsecond matters).
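To make the contrast concrete, here is a minimal Rust sketch (a made-up example, not from the article) of the borrow checker forcing the programmer to restructure code that a garbage-collected language would accept as-is:

    fn main() {
        let mut names = vec![String::from("ada")];
        let first = &names[0]; // shared borrow of `names`
        // names.push(String::from("grace")); // rejected at compile time:
        // cannot mutate `names` while `first` is still in use
        println!("{first}");
        names.push(String::from("grace")); // fine: the shared borrow has ended
    }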
Similar tradeoffs exist in many different areas of software engineering. One monolith, where all the information is in one place, is easier to write code in than two microservices, which keep having to ask each other questions via API calls. Yet sometimes we need microservices. Rendering your web application entirely on the frontend in React, or entirely on the backend with templates, where all the logic lives in one place, is much easier than rendering on the server and then hydrating on the frontend. Yet sometimes we need server-side rendering and hydration.
Complexity is an irreducible constant, yes, but cognitive load is not. Cognitive load can increase or decrease depending on where you choose to push your complexity.
I do not think of complexity as one thing. Abstractions are about both hiding and exposing complexity at the same time. Different levels of abstraction can expose or isolate different parts of complexity. Exposing parts of it in a way that makes them amenable to your tools is as important as isolating other parts somewhere in the background. Essentially, this has to do with how well a given abstraction choice maps onto the structure of the problem space and the relationships there. The choice of which parts of complexity you isolate and which you expose is important. You probably do not want to deal with everything at once, but usually you also cannot avoid dealing with something.
The way I primarily see (and often like) type systems wrt complexity is as choosing which parts of complexity are important and exposing them (with the rest still there to deal with). There is a cognitive aspect to abstractions and complexity, irrespective even of IDEs, debuggers, compilers, etc. I personally want my abstractions to make at least some sense in my head or on a piece of paper, in the way I think about the problem, before I even start writing code. If the abstractions do not help me actually reason about (some part of) the problem, they probably solve other problems, not mine.
I find this topic particularly interesting. I've often said to others that software, in itself, is a general abstraction of one or more complex tasks. The whole point of software is to hide complexity and to make it possible, in a hopefully simpler manner, to do things that would otherwise be very difficult or impossible. Despite what users may experience, the complexity remains but becomes hidden.
I don’t think abstractions are inherently tied to hiding complexity. The purpose of an abstraction is to abstract over variations of a thing (think polymorphism), where each variation by itself might still be simple, or to separate essential features (e.g. parameters you have to pass) from accidental features (e.g. implementation details), where again there is no inherent implication of complexity on either side.
In slightly different words, an abstraction separates what client code needs to reason about from what it should be able to ignore. Of course, if an abstraction isolates client code from certain complexities, that will contribute to the success of the abstraction. But it’s not the essence of what an abstraction does, or a necessary condition for it to count as successful.
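As a small illustration of that framing, here's a hedged Rust sketch (the `Shape` trait and its types are hypothetical): the abstraction separates what client code reasons about (an area exists) from what it can ignore (which variation it got), and neither side is particularly complex:

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { side: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }
    impl Shape for Square {
        fn area(&self) -> f64 { self.side * self.side }
    }

    // Client code reasons about `Shape` and ignores which variation it got.
    fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    fn main() {
        let shapes: Vec<Box<dyn Shape>> =
            vec![Box::new(Circle { r: 1.0 }), Box::new(Square { side: 2.0 })];
        println!("{}", total_area(&shapes));
    }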
That's why TypeScript/Python optional typing hits the best balance for me. Coding in a duck-typed language is generally fine when your test suite is as fast and frequent as a type checker. That also explains why TDD is more popular in say Ruby or Python vs. Java. Speaking of Java, the problem with types is when you try to reify every single problem you encounter in your codebase. By the way, Python has had structural types since 3.8, and I hope they get more popular in Python code: https://docs.python.org/3/library/typing.html#typing.Protoco...
> That also explains why TDD is more popular in say Ruby or Python vs. Java.
I'd say that TDD being more popular in untyped languages speaks against TDD, as it hints that maybe some of its benefits are covered already by a type system.
You did clarify later a bit, but this cannot stand unchallenged. TDD and tests solve different problems from types and are valuable for that. Tests assert that no matter what you change, this one fact remains true. Types assert that you are using the right things in your code.
I don't think the lack of types is what's at fault for untyped languages liking TDD (though I miss types a lot). I think it's that there is no way to find out whether a function exists until runtime (most of these languages allow self-modifying code of some form, so static analysis can't verify it without solving the halting problem). Though once you know a function exists, the next step of verifying that the right function (or overload, in some languages) is being used does need types.
It's blatantly obvious that some of the benefits of extensive testing are covered by a type system. Even by a mostly useless one like Java's.
If you look at any well tested program in a dynamic language, almost all the tests check the same properties that a type system would also check by default. If you remove those, usually only a few remain that test non-trivial properties.
EDIT: And I just love that in the time I took to write this, somebody wrote a comment about how it isn't so. No, it is still blatantly obvious.
Types are just auto-verified logic. TDD just tests logic which cannot be typed in a given type system. In Lean 4 one can type a lot (dependent types can test integration shapes, and proofs serve as property tests).
TDD also asserts that if you make a change you don't break anything. Most programs are too complex to keep all behavior in your head so sometimes what looks like an obvious change breaks something you forgot about. Types won't tell you because you adjusted the types, but the test will tell you. (if you have the right tests of functionality - a very hard problem outside the scope of this discussion)
The person you're replying to mentioned Lean4. In such a language, types can definitely assert that a change didn't break anything, in the sense that you can write down the property you want as a type, and if there is an implementation (a proof), your code satisfies that property.
Now, proofs can be often devilishly hard to write whereas tests are easy (because they're just examples), so in practice, types probably won't supplant tests even in dependently typed languages.
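To make that concrete, here's a deliberately tiny Lean 4 sketch (the `double` function is hypothetical, nothing from the thread): the property is written as a type, and its proof doubles as a regression check:

    -- The property "double n = n + n" lives in a type; `rfl` is its proof.
    def double (n : Nat) : Nat := n + n

    theorem double_eq (n : Nat) : double n = n + n := rfl

    -- If `double` is later rewritten so this equation no longer holds by
    -- definition, the file stops compiling: the type caught the change.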
I'd say if you think tests and types are doing the same thing in the same way, you are badly abusing at least one of them.
One attacks the problem of bugs from the bottom up and the other from the top down. They both have diminishing returns on investment the closer they get to overlapping on covering the same types of bug.
The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.
> The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.
Please don't do this. It's not constructive.
I'm a Haskell bro and I love testing. You misunderstand me, though. All I say is that maybe _some_ of those tests deliver value by just making sure that code even runs, which is otherwise covered by types.
When I do TDD (virtually every time I write a line of code), each test scenario isn't just a way to verify that the code is working, it's also a specification - often for a previously unconsidered edge case.
Throwing away the test means throwing away that user story and the value that comes with it.
I believe you (other than tests being specifications, they are examples at best). But that doesn't change the fact that TDD seems more widely adopted in untyped languages, and that deserves an explanation.
Mine is that a lot of potential errors (typos, type mismatches) don't need to be exercised by running code in a typed language.
Yours is... well, you don't really address it.
>I believe you other than tests being specifications
If yours aren't specifications, that suggests you're not doing them right, which in turn suggests why you might have an issue with them...
That’s exactly what those tests are for. When you no longer have to worry if you invoked .foo() or .fooTypo(), you eliminated one class of bug. Namely trying to run things that do not exist.
Maybe you meant to invoke .bar(), but at least we know thanks to type checks that the target exists.
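In Rust terms (with a hypothetical `Greeter` type, purely for illustration), that class of bug is gone before the program ever runs:

    struct Greeter;

    impl Greeter {
        fn foo(&self) {}
    }

    fn main() {
        let g = Greeter;
        g.foo();        // compiles: the target exists
        // g.fooTypo(); // compile error: no method named `fooTypo` found
    }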
I don't think I agree that either TypeScript or Rust successfully hides the complexity in their type systems.
By the nature of type systems, they are tightly coupled with the code written around them.
Rust has rich features to handle this coupling (traits and derives), but typescript does not.
It's not about hiding the complexity in the type system, that is, the complexity of the type system. At least for Rust, it's about that (yes, complex) type system isolating the even worse complexity of tracking lifetimes and aliasing and such, for all possible control flow paths, in your head.
It's harder to summarize what Typescript is isolating, except that JavaScript function signatures are the flipping wild west and the type system has to model most of that complexity. It tends to produce very leaky abstractions in my experience unless you put in a lot of work.
Sometimes the original JS function isn't safe at all, and neither is the TypeScript definition.
For example, `Object.assign` overwrites every property with the same name. Sometimes you use it to construct a new object, which is a safe usage. But what about using it to overwrite a built-in object's properties? That is definitely going to explode the whole program. However, there isn't really a mechanism for TypeScript to tell whether a given usage is safe or not, so in order to maintain compatibility, TypeScript just allows both.
And TypeScript, in my opinion, doesn't really isolate very much complexity. But it does document what the 'complexity' is, so you can offload your memory tax to it: put it away, do something else, and resume later by reading the definitions you wrote before. Used this way, it can make managing a big project much easier.
I didn't get the general idea that the author thought they hid the complexity, but rather that they exposed and codified it. They gave the complexity that would previously live in your head somewhere it could be expressed. And once expressed, it can be iterated on.
Encoding complexity in your type system forces you to deal with that complexity throughout your codebase. It doesn’t give complexity a specific place to live.
You were going to have to deal with that complexity either way.
Now it's expressed somewhere, and if you craft it right, enforced so it's harder to get things wrong.
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
This view has always been bullshit. It doesn't differentiate between the complexity of the types themselves and the complexity of representing them in a static type system.
It certainly isn't bullshit. I take advantage of type systems every day to help me write code that works on the first try. Obviously I'm not saying all my code works on the first try, but it often does even when it's quite complex.
The main problem is that a lot of developers don't know how to use the type system well, so they write code in a way that doesn't take advantage of the type system. Or they just write bad code in general that makes life difficult despite a type system.
It doesn't solve all problems, but if you use it well it can solve a lot of problems very elegantly.
If you parse a value into a guaranteed non-null value at the system boundary, then you have eliminated the need to check for that nullability throughout the rest of your codebase.
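A minimal sketch of that boundary pattern in Rust, assuming a hypothetical `UserId` newtype (none of these names come from the thread):

    struct UserId(u64);

    // The only place where absence and formatting are checked.
    fn parse_user_id(raw: Option<&str>) -> Result<UserId, String> {
        let s = raw.ok_or_else(|| String::from("missing id"))?;
        let n = s.parse::<u64>().map_err(|_| String::from("not a number"))?;
        Ok(UserId(n))
    }

    // Past the boundary, the type carries the guarantee: no null checks here.
    fn load_profile(id: UserId) -> String {
        format!("profile for user {}", id.0)
    }

    fn main() {
        match parse_user_id(Some("42")) {
            Ok(id) => println!("{}", load_profile(id)),
            Err(e) => eprintln!("{e}"),
        }
    }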
Did you mean to write the literal polar opposite of what you wrote?
The argument isn't that complexity is being hidden, but how it's managed and where it shows up in your experience of solving other problems. OP mentions:
> The complexity was always there... it merely shone a light on the existing complexity, and gave us the opportunity — and a tool with which — to start grappling with it
It's not about Rust vs. TypeScript per se but uses garbage collection and borrow checker as examples of two solutions to the same problem. For whatever task you have at hand, what abstractions offer the best value that lets you finish the solution to the satisfaction of constraints?
> they are tightly coupled with the code written around them
Which is where the cost of the abstractions comes in. Part of the struggle is when the software becomes more complicated to manage than the problems solved and abstractions move from benefit to liability. The abstractions of the stack prevent solving problems in a way that isn't bound to our dancing around them.
If I'm working on a high-throughput networked service shuffling bytes using Protobuf, I'm going to be fighting Node to get the most out of CPU and memory. If I'm writing CRUD code in Rust shuffling JSON into an RDBMS, I'm going to spend more time writing and thinking about types than I would just shuffling around arbitrarily nested bags-of-bags in Python with compute to spare.
I always thought this was why microservices became popular, because it constrained the problem space of any one project so language abstractions remained net-positives.
> how it's managed and where it shows up in your experience of solving other problems
That’s what I’m talking about. Encoding complexity in your types does not manage where that complexity lives or where you have to deal with it.
It forces you to deal with that complexity everywhere in your codebase.
> It forces you to deal with that complexity everywhere in your codebase.
The alternative is fighting the abstraction. Imagine trying to write the Linux kernel in JavaScript or Python: a lot less fighting types in your code, and a lot more time fighting the abstractions to achieve other things. Considering a big part of the kernel is types, it makes sense to encode complexity within them.
Going "low-level" implies that you're abandoning abstractions to use all the tools in the CS and compute toolbox and the baggage that entails.
Type systems like in Rust may introduce their own complexities, but they also help you tackle the complexity of bigger programs if wielded correctly.
Type systems can be complex to use, but in the end they constrain the degrees of freedom exposed by any given piece of code. With a type system, only very specific things can happen with any part of your code, most of which the programmer will have had in mind; without a type system, the number of ways any piece of code could act within the program is far larger. Reducing the possible states of your program in the case of programming error is a reduction of complexity.
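A small Rust illustration of that "fewer possible states" point (hypothetical `State` enum): exhaustive matching means the compiler both knows and enforces the full set of behaviors:

    enum State { Idle, Running, Done }

    fn describe(s: State) -> &'static str {
        match s {
            State::Idle => "idle",
            State::Running => "running",
            State::Done => "done",
            // adding a new variant to `State` makes this match fail to
            // compile until the new state is handled
        }
    }

    fn main() {
        println!("{}", describe(State::Running));
    }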
Now, I'm not saying type systems can't introduce their own complexity, but in the case of Rust the complexity exposed is what systems programmers should be handling anyway. E.g. using different string types to signal to the programmer that your OS will not allow all possible strings as file names is the appropriate amount of complexity. Knowing how your program handles these cases is, again, reducing complexity.
Imagine you wrote a module in a language where you don't handle these. Every now and then the module crashes, specifically because it came across a malformed filename. Or, phrased differently: the program does more than you intended, namely crashing when it encounters certain filenames. Good luck figuring that out and preventing it from happening again. With a type system, the choice had to be made explicitly during programming. The fewer things your code can do, the less complexity.
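For instance, a hedged sketch of the filename situation in Rust (`display_name` is a made-up helper): `Path` admits non-UTF-8 names, and the `Option` returned by `to_str` forces exactly the explicit choice described above:

    use std::path::Path;

    // `Path::to_str` returns Option<&str>, so the malformed-name case
    // must be handled here instead of crashing some module at runtime.
    fn display_name(path: &Path) -> String {
        match path.to_str() {
            Some(s) => s.to_owned(),
            None => path.to_string_lossy().into_owned(), // explicit fallback
        }
    }

    fn main() {
        println!("{}", display_name(Path::new("poems/ode.txt")));
    }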
Many developers confuse complexity of the internal workings of a program with the complexity of the program exposed at the interface. These are separate properties that could become linked, but shouldn't.
This is such a simplistic view on the matter.
The author talks about complexity like it's always an intrinsic thing out there (essential) and the job of the abstraction is to deal with it. This misses the point that a great deal of the complexity on our plates is created by abstractions themselves (accidental). Not only that, sometimes great abstractions are precisely the ones that decide to not isolate some complexity and allow the user to be a 'power user'.
> sometimes great abstractions are precisely the ones that decide to not isolate some complexity and allow the user to be a 'power user'.
I agree with this. Sometimes abstractions are the wrong ones. In a layered system, where each layer completely hides the layer below, sometimes abstraction inversion (https://en.wikipedia.org/wiki/Abstraction_inversion) occurs where the right mechanism is at the bottom layer but intermediate layers hide it and make it inaccessible, leading to a crappy re-implementation that is slower and usually less capable.
Python showed what relaxed types could do, and as it turns out we could go a long way without types. But there are use cases for types, and even Python admitted as much when it added type annotations.
However, when I was a kid I would put a firecracker next to an object. I didn't bother running the scenario through a compiler to see if the object was of type Explodable() and had an explode() method that would be called.
> However, when I was a kid I would put a firecracker next to an object. I didn't bother running the scenario through a compiler to see if the object was of type Explodable() and had an explode() method that would be called.
Duck typing: if it quacks like a duck, and it explodes objects next to it, it's a firequacker
Duck typing. If it quacks like a duck and swims like a duck it might be a duck. But it might also be a nuclear submarine doing a duck impersonation. The question is whether you want a nuclear submarine in your pond.
The philosophy of duck typing is very clear that yes, you should accept a nuclear submarine in your pond.
The problems are that you won't remember to do the same exact checks everywhere and document them.
complexity has to live somewhere, code anxiety was a real thing for me
until what happened?
I have always felt that it's better to "concentrate" complexity into one key component and make the rest of the codebase simple than to distribute complexity evenly everywhere in some kind of open-coded swamp.
Does complexity mean a long block of code with many levels of nested conditionals, tangled up with cross-block mutable variables?
>The question is first of all whether we have written them down anywhere
The only hard thing in software: papers please (easily accessible documentation)
Give it a few years and it will be self-maintaining
"Parameterizing complexity" is probably a better way to say it. There's no isolation when it comes to software.
I don't think it has anything to do with complexity, or with grouping code/data; it's just a natural tendency of people to categorize together things that display a high degree of class inclusion. And some categories are easier to deal with than others.
Not sure if I agree
Let's say you have a poem program that reads files from your drive and turns them into poems. A well isolated/abstracted variant of that program is as simple as a black box with two or three inputs and a single output.
One of the inputs is the files; the others might be a configuration file or user-adjustable parameters like length. The program is well isolated if you can't give it any combination of inputs that doesn't produce either a poem or an error message related to the usage of the program.
A badly isolated variant of the same program would be one where the user has to think a lot about the internal behavior of the program, e.g. how file names are handled, or where so many parameters of the poem generation have to be supplied that the user essentially has to rewrite the core of the program. Or where the user could supply a file that allows them to gain RCE or crash the program.
> Complexity has to live somewhere. If you are lucky, it lives in well-defined places.
This whole section makes me think of construction which has similar abstraction and hidden complexity problems. It strikes me that they solve it by having design be entirely separate from implementation. Which is usually the corner where all our luck as software developers inevitably runs out.
Our methods are still rather "cowboy." We have cool "modernized cowboy" languages that make it hard to shoot your foot off, but at the end of the day, we're still just riding old horses and hoping for the best.
I've often thought this. It feels like there should be two languages, one for the implementation of the parts, and another to design/architect the software using the parts, allowing the design/architect language to focus on the high level architecture of the software and the implementation language to focus on the parts. We currently use the same language for both, and mix the two areas as we program
To be fair to our field, fields like construction have had literal millennia of history and development to figure out the best patterns. Even then, it's still evolving.
It’s crazy to see what we’re capable of building now vs even 15 years ago.
I think "types" is the solution of two completely different problems:
1. how to specify memory layout for faster execution
2. how to give hint when I press . in IDEs
if you use typing outside these two scopes you'd probably find many troubles.
> If you use typing outside these two scopes, you'll probably run into trouble.
- encoding invariants and defining valid evolutions of the codebase
- memory safety without a garbage collector (see Rust's affine type system)
3. Compile time safety.
That's what I use types mostly for. I don't care about compiler hints; well-structured code with sane naming conventions solves that problem without the need for types. But I do want my program to fail to compile (or, in JIT-land, fail unit tests / CI/CD) when I do something stupid with a variable.
The former is about typing speed and I already type faster than I think. The latter is about guardrails protecting me from my own human error. And that is a far more realistic problem than my IDE performance.
Not only compile time, but run/debug time. Just being able to say "I have an object here, so I must have some consistent state meaning XYZ" is very helpful.
Of course, it's on you to make that happen - if you have a Between6And10 type and you implement it as a struct with an int that someone can come along and write 15 into, it's bad news for your assumptions.
If you can make it compile time safe, then great, but even when you can't, if you know the invariants are holding, it's still something powerful you can reason about.
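One way to make that invariant hold in Rust, as a sketch (module privacy does the real work): keep the field private so nobody can write 15 into it, and funnel all construction through a checked constructor:

    mod bounded {
        pub struct Between6And10(u8); // field is private outside this module

        impl Between6And10 {
            pub fn new(n: u8) -> Option<Self> {
                if (6..=10).contains(&n) { Some(Self(n)) } else { None }
            }
            pub fn get(&self) -> u8 {
                self.0
            }
        }
    }

    fn main() {
        assert!(bounded::Between6And10::new(15).is_none()); // 15 can't get in
        let x = bounded::Between6And10::new(7).unwrap();
        println!("{}", x.get());
    }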
Types imbue pure data with meaning. That's pretty much it, and the other uses of types flow from that.
Whether you use that meaning to produce IDE hints (say, via Python type annotations, though I am aware Python typing isn't only that), or you feed it to a compiler that promises that it will ruthlessly statically enforce the invariants you set via the types, or anything else, is up to you, your goal and the language you use.
They also imbue code with meaning, not just data.
For instance, the return type STM () doesn't give you anything back, but it declares that the function is suitable for transactions (i.e. it will change state, but can be rolled back automatically).