Computer scientists advance in their careers by writing papers; software developers advance by writing programs. Some CS grad students and profs are genius programmers, but mostly CS researchers write programs that at best lack the polish of a real product and at worst almost work.
When I was in physics grad school I had a job writing Java applets for education. I did a successful demo of two applications and was congratulated for my bravery. I was not so brave; I expected these programs to work every time for people who came to our web site. (Geoff Fox, organizer of the conference, did an unsuccessful demo where two Eastern European twins tried to make an SGI supercomputer show some graphics, and said “never buy a gigabyte of cheap RAM!”)
This reminds me strongly of reaching the final year industry projects in my software engineering degree, and seeing a significant portion of my colleagues unable to develop software in any meaningful way.
There was a curriculum correction in the years afterwards, I think, but so many students had zero concept of version control, of how to start working on a piece of software (sans an assignment specification or scaffold), or of how to learn and apply libraries or frameworks in general. It was unreal.
Is that unique to software though? Plenty of people can follow a plan but still find it tough to start from first principles, I would think.
I've worked for a bit in an engineering company and I was surprised at how bad they were at versioning their documents.
Do you have any examples of correctly versioned documents?
I mean, I’d say a Markdown / LaTeX / Typst document in a Git repository would fit the bill.
I’m working on a history project at the moment which has reconstructed the version history of the US constitution from the secretarial records and various commentaries written during the drafting process. We’re now working on some US state constitutions, the Indian constitution, the Irish peace process, and the Australian constitutional process. We only have so many historical records of the committee processes, but it turns out to be more than enough to reconstruct the version history of the text.
When I was a prof (many years ago), I was working on a database for a political campaign. The code was a mess. I asked a couple of my colleagues (successful comp. sci. profs) to help out. It became clear very quickly that there are two kinds of comp. sci. profs: those who can program and those who cannot.
One of my computer science professors when his laptop wasn't connecting to the projector: "I hate computers."
That's one of the ones who could program.
Oh dear, this hits way too close... Get deep enough into the machine and you come to expect the (in this case minor) disasters.
It seems to me that in other areas of tech, companies generally hire electrical engineers, mechanical engineers, civil engineers, etc. On the other hand, software companies feel that they don't need to hire computer scientists.
Then periodically there is a discussion on Hacker News that boils down to "all of the other engineering disciplines can make reliable predictions and deadlines; why can't software?" or "why is this company's code so shoddy?" or "why are we drowning in technical debt?".
Perhaps these are all related?
Is your working definition of a computer scientist similar to a civil or electrical engineer?
To me, a computer scientist is someone who studies computation. They probably have the skills to figure out the running times of algorithms, and can probably develop algorithms for solving arbitrary problems.
A software engineer is what I would call someone who can estimate and deliver a large software application fit for purpose.
I agree with this. One reason there is so much crappy software is that companies are hiring fresh CS grads expecting them to do real software engineering work. And they end up hacking it like they hacked their way through school.
CS programs have gotten better at teaching real SWE skills, but the median CS grad still has ~zero real SWE experience.
There are many axes of complexity. For routine line-of-business systems, say an inventory management system for a car dealer, with a proper architecture costs should be additive instead of multiplicative: a certain cost to develop a feature such as an “autocomplete control that draws values from a database row”, and a certain cost to deploy that control on each screen. Double the number of screens using the same features and you double the cost, but it feels like sub-linear scaling because for the second tranche of forms you don’t have to redevelop the components, and it can go much faster.
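To make that concrete, here is a minimal sketch of the additive-cost idea. The names (`Autocomplete`, `Screen`) and the Python are invented purely for illustration, not taken from any real system:

```python
from dataclasses import dataclass, field


@dataclass
class Autocomplete:
    """Reusable control: suggests values drawn from a database column."""
    table: str
    column: str

    def suggestion_query(self, prefix: str) -> str:
        # The one-time development cost lives here, not in each screen.
        # (Real code would use parameterized queries, not string formatting.)
        return (f"SELECT DISTINCT {self.column} FROM {self.table} "
                f"WHERE {self.column} LIKE '{prefix}%' LIMIT 10")


@dataclass
class Screen:
    """A screen is just a name plus a list of controls."""
    name: str
    controls: list = field(default_factory=list)


# Doubling the number of screens roughly doubles only this cheap,
# declarative part; the control itself is never redeveloped.
vehicle_form = Screen("vehicle", [Autocomplete("vehicles", "model")])
customer_form = Screen("customer", [Autocomplete("customers", "city")])
```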
That ideal can be attained and you can be in control in application development, but often we are not. When you are in control, the conventional ideas about project management apply.
As you get very big you start having new categories of problems; for instance, a growing social system will have problem behaviors, and you’d wish controlling them was out of scope, but no, it is not out of scope.
Then there are projects which have a research component, whether it is market research (iterate on ideas quickly), research to develop a machine learning system, developing the framework for that big application above, or building radically improved tools.
A compiler book makes some of those problems look as regular as application programs, but the project management model for research problems involves a run-break-fix cycle of trying one thing and then another, which you will be doing even if you are planning to do something else.
Livingston, in Have Fun at Work, says to go along with the practices in the ocean you swim in (play planning poker if you must) but understand that you will be doing RBF. There are two knobs on run-break-fix: (a) how fast you can cycle, and (b) the probability distribution of how many cycles it will take. Be gentle in schooling your manager, and someday you might run your own team that speaks the language of RBF.
Unit tests put a ratchet in RBF and will be your compass in the darkest days. They enable the transition to routine operation (RBF in operations is the devil’s antipattern!)
They are not a religion. You do not write them because a book told you so; you write them for the same reason a mountain climber wears a rope. And if you don’t feel that, your tests are muda: waste, as they say in Japan.
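For what it’s worth, the “ratchet” can be as small as this: each run-break-fix cycle ends by pinning the behavior you just got working, so a later cycle cannot silently undo it. The function under test here is invented for illustration and assumes pytest-style tests:

```python
def normalize_plate(raw: str) -> str:
    """Imaginary routine that one RBF cycle finally got right."""
    return raw.strip().upper().replace(" ", "")


def test_strips_and_uppercases():
    # Pins the behavior this cycle was about.
    assert normalize_plate(" abc 123 ") == "ABC123"


def test_is_idempotent():
    # Pins a property future cycles must not break.
    once = normalize_plate(" abc 123 ")
    assert normalize_plate(once) == once
```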
One thing that’s missing: programs are mutable. A good programmer writes programs that are easy to maintain and extend.
Also, programs that are easy to extend will be extended until they are not. I don't remember the name of this Law.
Agreed, a sign of good programming is that the program feels easy and natural to extend.
But there is a corollary, I think. A sign of good software development is that the program hasn't been extended in "unnatural" ways. That speaks to the developer's discipline and vision to create something that was fundamentally relevant to begin with.
I'd prefer maintainable programs of any size.
Outside of trivial academic samples, the odds of a program needing to change over its lifetime are large, and its current apparent correctness has little to do with someone else adapting it to the ever-changing environment.
The number of times I've heard "it seems to work and we don't dare change it" is far too high.
>"it seems to work and we don't dare change it"
What they mean is: "we don't understand it and we don't have good tests, so there is a high probability that it doesn't work and that doing even the most trivial and seemingly harmless modification would cause an issue to surface, so we don't dare to change it, or else we wouldn't be able to pretend that it works anymore and might have to fix a lot of issues that we would have a hard time even understanding"
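One well-known way out of that trap is characterization testing (sometimes called golden-master testing): before touching code nobody understands, record what it currently does and assert exactly that. A minimal sketch, with an invented stand-in for the legacy function:

```python
import pytest


def legacy_pricing(quantity: int) -> float:
    # Stand-in for 500 lines of accreted business rules.
    return quantity * 9.99 * (0.9 if quantity > 10 else 1.0)


def test_pins_current_behavior():
    # Expected values captured from the running system, not derived
    # from a spec; they document "what it does today", right or wrong.
    assert legacy_pricing(1) == 9.99
    assert legacy_pricing(20) == pytest.approx(179.82)
```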
Yeah.. The dumb thing is that it isn't even _that_ hard to fix this kind of stuff... It does take time and commitment though.
But _hard_? No.
But that is hard.
I think you (and many software developers) are using the word "hard" to mean "intellectually challenging", as in "Leetcode Hard". But things that require a lot of effort, time, and coordination of people are also hard, just in a different way.
Imagine a codebase with a wart. And yes, without enough tests. Let's say the wart annoys you and you want to fix it. But first you have to convince your employer to let you spend 6 months backfilling missing tests. In the meantime they will pay your salary but you will not work on the features they want. You will be working on fixing that wart. Convincing management: easy or hard?
OK, so you got them convinced! Now you can't just fix the wart. First you have to slog through a big refactor and write a bunch of tests. Staying positive while doing this for 6 months: easy or hard?
Do you stop other teams from writing more code in the meantime? No. So does the new code come with tests? How do you make sure it doesn't depend on the old "warty" interface? You need a compatibility layer (a sketch of one appears below). You need to convince other managers to spend their teams' cycles on this. Easy or hard?
OK, the refactoring is done. You release the new software. But despite all your efforts you overlooked something. There's a bug in production, and when a post mortem is done, fingers point at you. The bug wasn't introduced in pursuit of a new feature. It was part of an effort to solve an obscure problem most people at the company don't even understand. To them, the software worked before, and it doesn't work now, and it's always those nerds tinkering with stuff and breaking things. Convincing these people to let you keep your job: easy or hard?
Perf review time. Your colleague shipped a new feature. You shipped... that thing that broke prod and nobody understands. Getting a raise: easy or hard?
And that is why these warts fester. The end.
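As an aside, the compatibility layer mentioned above is often just a thin shim: the old warty signature survives as a wrapper over the clean interface while callers migrate one by one. A minimal sketch, with all names invented:

```python
def fetch_user(user_id: int) -> dict:
    """The new, clean interface."""
    return {"id": user_id, "name": "..."}


def getUsrRec(uid, legacy_flag=1):
    """Deprecated warty interface, preserved verbatim for old callers.

    Forwards to fetch_user() so there is only one real implementation.
    """
    record = fetch_user(int(uid))
    if legacy_flag:  # old callers expect a tuple, not a dict
        return (record["id"], record["name"])
    return record
```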
Some people learned to be fearful.
At some point I was working on a piece of software we knew inside out, had good tests for, and often ran through hand-curated stress tests for benchmarks, analysis, or just exploratory testing, so we had high confidence in it and in our ability to modify it quickly and successfully.
One day executives were visiting and we had to do a demo of our system interacting with another one. Until the last minutes we were happily modifying our code to make the demo better. A guy from the other system's team saw that, freaked out, and went straight to our boss, who then laughed with us at how scared the guy was. It turned out his team was not at all that confident in their system.
Will you have time to commit to it?
If it's in a professional setting, it's most likely to not be a hard problem, but actually an impossible one.
I get this a bit at my job, and I think there's a difference between making changes (which I do a lot of) and being confident in the changes that you're making. The environment I'm in is completely fault-intolerant, and we're currently hamstrung by our hardware (e.g. no backups/no secondaries/etc.) so changes that we're making have to be well-reasoned and argued before they're put in.
Some people take that as being scared, but it's more like "you have to have made this work and tested it before putting it in."
I can definitely write a 1KLOC program to solve a 10-line problem.
>(When I talk about a program that is so many lines long, I mean a program that needs to be about that long. It’s no achievement to write 1,000 lines of code for a problem that would be reasonable to solve in 10.)
That is indeed the self-deprecating part.
> how to organize software so that the complexity remains manageable as the size increases
So John is missing the role of software architect here. Science, art, and development: three roles. Not all visits to the stratosphere are misadventures.
Yet, some visits to the stratosphere are misadventures.
I think TFA is implying that good SWEs are good architects too, the skills go hand in hand.
I frankly don't believe in the "software architect" as a separate role. I've worked with "architects" who are clearly just BS artists: they know the jargon but have no skill to back it up or to make difficult technical decisions regarding tradeoffs.