Software Engineering Body of Knowledge (SWEBOK) v4.0 is out [pdf]

(ieeecs-media.computer.org)

204 points | by beryilma 4 days ago

161 comments

  • 0xbadcafebee 4 days ago

    There appears to be a lot of hate towards this in the comments (because it's not perfect?), but I feel strongly that we need explicit bodies of knowledge, along with certifications for having been trained on it.

    Every company I go to, the base of knowledge of all the engineers is a complete crapshoot. Most of them lack fundamental knowledge about software engineering. And they all lack fundamental knowledge about the processes used to do the work.

    That's not how engineering should work. If I hire an architect, I shouldn't have to quiz them to find out if they understand Young's Modulus, much less teach them about it on the job. But that's completely normal in software engineering today, because nobody is expected to have already learned a universal body of knowledge.

    I get this thing isn't perfect. But not being perfect isn't a rational argument for not having one at all. And we certainly need to hold people accountable to have learned it before we give them a job. We need a body of knowledge, it needs to be up to date and relevant, and we need to prove people have actually read it and understood it. If this isn't it, fine, but we still need one.

    (this is, by the way, kind of the whole fucking point of a trade school and professional licensing... why the fuck we don't have one for software engineers/IT boggles my fucking mind, if this is supposed to be the future of work)

    • creer 4 days ago

      You are under this illusion about other fields like architects because you don't work there and you can't tell. You don't know how the sausage is made.

      Historically I have tended to learn WAY too much about a new field before I tried to hire people in it. The truth is, that makes it hard to hire people (but for good reason - depending on your needs, you need to pass on a lot of people). More recently I have tried to pay very close attention to how people do their work in fields I am building an interest in. The sad reality of the world is that most people and businesses stay in business entirely through dumb luck and because the world is not usually THAT demanding. And if you have a specific requirement, they won't be able to help "out of the box".

      You are imagining this competence. It doesn't exist in most people.

      And to compound this, to me, the characteristic of an engineer is that they are capable of learning about a specialty discipline. If you hire an engineer and they are incapable of learning something that's needed in your project, THAT is where their problem is (and yours for not hiring to that.) Engineering is not a trade. Certifications are usually about selling them or gatekeeping. I wish it were possible to certify "engineering progress mindset" - no, it doesn't have an ISO number.

      • 0xbadcafebee 3 days ago

        On the contrary, I am fully aware that there exists no field where a test or piece of paper guarantees excellence.

        But I am also aware what the lack of it does. It leads to buildings falling down or burning up [with people in them]. This was a common occurrence 100+ years ago. You know what made it less common? Standardization. Building codes. Minimum standards for engineers and the trades. Independent studies have all concluded that real world outcomes improved across the board because of these things.

        No formal certification or standard will lead to perfection. That is obvious. But what is also obvious, from actually looking at outcomes before and after their introduction, is that having them leads to better outcomes.

        You have to stop thinking about individual engineers, and start thinking about the much, much larger picture. What changes will have a positive effect on the larger picture? You can only have an effect on the larger picture if you enforce a change across the board, and then look at the aggregate results.

        That cannot happen without a mechanism to enforce the change. We can't pray our way to better results, or just sit around hoping people magically get better at their jobs, because that clearly has not happened over the last few decades that I've been working.

        The more we depend on technology, the more we see the failures of a lack of rigor. Probably every single person with an address and social security number in the United States has had their personal information leaked, multiple times over, by now. Lives are ruined by systems that do not take into consideration the consequences of a lack of safety, or the bias of its creators. Entire global transportation systems are shut down because nobody added basic tests or fail-safes to critical software infrastructure.

        This shit isn't rocket science, man. It was all preventable. And just like with building codes, standards, licenses, etc, we can put things in place to actually teach people the right way to do things, and actually check for the preventable things, by law. If we don't, it's going to keep happening, and keep happening, and keep happening, and keep happening, forever.

        We can do something to stop it. But we have to pound our fist on the desk and say, enough is enough. We have to put something imperfect in place to stem the tide of enshittification. Because there are consequences if we don't.

        We have seen some of them globally in the form of warfare, but nothing compared to the devastation when the gloves come off. We have not yet seen an entire country's hacker resources attack the water, power, sanitation, food, and other systems of its enemy, all at once. But it's going to happen. And it's going to be devastating. Millions of people are going to die because some asshole set a default password on some SCADA systems. But it should have been impossible, because no SCADA system should be allowed to be sold with default passwords. That's the kind of thing we can prevent, just like you can't build a building today without a fire exit.
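        The default-password rule is exactly the kind of requirement that can be checked mechanically. As a minimal sketch (the credential list and config shape here are invented for illustration, not taken from any real SCADA product):

```python
# Sketch of a ship-time gate: refuse to release a device image whose
# configuration still carries a factory-default credential.
# KNOWN_DEFAULTS and the config dict shape are hypothetical examples.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
}

def ships_with_default_password(config: dict) -> bool:
    """Return True if the config still uses a known factory-default credential."""
    return (config.get("username"), config.get("password")) in KNOWN_DEFAULTS

if __name__ == "__main__":
    bad = {"username": "admin", "password": "admin"}
    good = {"username": "ops", "password": "rotated-unique-secret"}
    assert ships_with_default_password(bad)       # this build must not ship
    assert not ships_with_default_password(good)  # this one may proceed
    print("gate check passed")
```

        A release pipeline that fails hard on a check like this is the software analogue of a fire-exit inspection: crude, imperfect, and still worth mandating.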

        That's the really big obvious impact. The impacts nobody sees come from tiny decisions made all the time, which slowly affect a few people at a time but, across millions of businesses and billions of people, add up to really big effects. We can make a huge difference here too, which will only be visible in aggregate later on. Like public sanitation, clean water, or hand-washing with soap, nobody thinks about the dramatic effect on public health and longevity until it's clear after decades what kind of impact it made. Technology is everywhere, in every home, affecting every life. The more we improve it [as a standard], the more we will see huge positive impacts later.

        • creer 3 days ago

          > It leads to buildings falling down or burning up [with people in them]. This was a common occurrence 100+ years ago. You know what made it less common? Standardization. Building codes. Minimum standards for engineers and the trades.

          To me, this is a more interesting comparison. Is it PE certification and contractor licenses that led to this, or is it building codes, construction inspectors, occupancy permits? I will argue that it's inspectors, NOT PEs or contractors. And I will argue that building codes have major negative consequences as well. We all know of construction methods that would have great benefits but have to be abandoned because they don't easily fit the current code. We all know of buildings that are to-code and yet ridiculously noisy and cheaply built.

          I will also argue that there are building code equivalents already in software and system architecture. There are several for "certifying" system or site security and systems that host credit card payments. And we all know how well they work. So I agree with you that there is room for progress there, but I will also argue that the approach NEEDS to be different. The current security or payment checklists are bureaucratic, CYA nonsense which discourage thinking and encourage bureaucracy and CYA specifically in place of actual security. The only thinking they encourage is creative writing to twist reality into the proper buzzwords.

          There may be a way to specify practices and security but we sure have not discovered it yet. So, a research question rather than already a standardization question? I will point out also that there WERE directions that did work in the past. For example, Dan Farmer and Wietse Venema's SATAN (and the several descendants since then) was bureaucracy-free: the test showed specific rubber-meets-the-road issues with your system that you could either fix or defend. No bullshit about using a firewall(tm) "because that's best practice".

          I also don't say that it's bad to publish books. I will say that it is bad to push "best practice". "Best practice" is precisely bureaucracy and CYA in place of thinking. To the point of site owners defending their lapses in the name of "best practices".

          What else currently goes in the right direction? Pen testing. Bug rewards. Code reviews.

          • 0xbadcafebee 3 days ago

            You really need both. Mandatory education, degrees, apprenticeships, licenses, etc is how you make sure they know how to do the thing. And then the building codes and inspections is how you check that they did the thing. If you ask someone to build a home "to code" but you never teach them how, they will spend years trying to figure it out, inconsistently. Send them to school, have them apprentice, and afterward they will be able to build it in a month, in a standard way.

            You remind me, there is an industry that has some basic software building codes: the Defense Industry. There are some pretty thorough standards for IT components, processes, etc needed to work with the military (even in the cloud). But it is all self-attested, so it's like asking a building contractor to make sure they inspect themselves. Government keeps asking the tech industry to solve this, but nobody wants to take responsibility. As more and more stuff falls apart (in the public & private sector) the government is gonna get louder and louder about this. It's already started with privacy & competition, but big failures like Crowdstrike make it obvious that the rot goes deeper.

            • kragen 2 days ago

              I agree that the US defense sector is an excellent example of the kind of credentialism in software that you, and the IEEE, are advocating! And the results are dismaying. As Anduril says in https://www.rebootingthearsenal.com/:

              > Despite spending more money than ever on defense, our military technology stays the same. There is more AI in a Tesla than in any U.S. military vehicle; better computer vision in your Snapchat app than in any system the Department of Defense owns; and, until 2019, the United States' nuclear arsenal operated off floppy disks. (...) today, in almost every wargame the United States Department of Defense models against China, China wins.

              Of course the DoD's problems go much deeper than just credentialism, but credentialism is definitely one of the causes of the disease, not a palliative measure.

          • rockemsockem 3 days ago

            > Is it PE certification and contractor licenses that led to this or is it building codes, construction inspectors, occupancy permits? I will argue that it's inspectors, NOT PE or contractors.

            100%

        • rockemsockem 3 days ago

          You seem to think that with enough process and forethought you can avoid almost any disaster. My experiences have shown this to be false and I've seen this type of thinking actually make things more opaque and harder to work with.

          The failures you're talking about with SCADA and security breaches will not be solved by some licensing scheme where you check a box saying "thou shalt not use default passwords"; they'll be solved by holding companies responsible for these failures and having good safety/security requirements. A class isn't going to fix any of that. It's a ridiculous notion.

    • pnathan 4 days ago

      I'm more than happy to sign onto a reasonable certification. Many good reasons for it. I am, personally, fond of the idea that an ABET certified BSCS should be ground floor level. Other ideas have been floated...

      But this particular work is really, really, really awful. For reasons that are well documented.

      In the most fundamental sense, the IEEE doesn't understand what professional SWEs need, in appropriate portions. It confuses SWE with PM, badly. And it has done so, historically. To the point of wide condemnation.

      • nradov 4 days ago

        What exactly about the SWEBOK is awful? Could you give us a link to the documentation of reasons? Which sections of the SWEBOK cover topics that professional SWEs don't need to understand, and which major topics are missing?

        It isn't possible to be a competent engineer, beyond the most junior levels, without having a pretty solid grasp of project management. You might not need to be a good project manager but in order to make competent engineering decisions you have to understand how your tasks fit into the whole.

        • pnathan 3 days ago

          The basic problem is you're wrong and also right: it all depends.

          That is widely understood as the senior+ swe mantra.

          The SWEBOK, on the contrary, asserts "it does not depend" and that in a sense is the core problem.

          For a detailed takedown, the ACM's is the most famous; there are others that v3 sparked. I'm sure v4 is sparking its own detailed analyses ... I'm bowing out to go do my day job now. :)

        • kragen 2 days ago

          A comprehensive list of the SWEBOK's problems would run to tens of thousands of pages, so it will surely never be written. But here's a summary of my own comments in this thread on the topic. I think they cover most of the important aspects.

          As its Introduction clearly explains, the SWEBOK is primarily a work of political advocacy. One of its political goals is a particular division of labor in software that has been tried, has failed, and has largely been abandoned, as I explained in https://news.ycombinator.com/item?id=41918800. It also advocates basing that division of labor on a form of credentialism which has also been tried in software and continues to fail, as I explained in https://news.ycombinator.com/item?id=41931336.

          As explained in https://news.ycombinator.com/item?id=41907822, the ACM canceled their involvement in the SWEBOK effort in part because they are opposed to this political program, leaving it entirely in the hands of the IEEE. Unfortunately, today's IEEE seemingly lacks the technical competence at an organizational level to avoid publishing utter nonsense through many channels. (Many IEEE members are of course highly knowledgeable, but evidently they aren't the ones in charge.)

          As a result, the substantive content of the SWEBOK is also riddled with astoundingly incompetent errors; I dissected a representative paragraph in https://news.ycombinator.com/item?id=41915939, finding 12 serious factual errors in 86 words. As is commonly the case with text written with no concern for veracity, it takes roughly 20 times as much text to correct the errors as it did to make them.

          The content of the SWEBOK that actually concerns the engineering of software is not just as error-riddled as the rest of it, but also very limited†, displaced by project-management information, as I showed with a random sampling in https://news.ycombinator.com/item?id=41916612. This focus would represent a major departure from accredited curricula in real engineering‡ fields such as mechanical engineering, chemical engineering, and electrical engineering, as I showed by surveying top university curricula in those fields in https://news.ycombinator.com/item?id=41918011. The SWEBOK represents an attempt to substitute management process for technical knowledge, an approach which is well known not to work. This is what Dijkstra was criticizing about the so-called "software engineering" field 35 years ago in EWD1036, which I linked from the third comment of mine linked above.

          (Unlike Dijkstra, I do not think that it is futile to apply an engineering approach to software, but that is not what studying project management is.)

          Even within the scope of project management knowledge, the SWEBOK primarily advocates the kinds of management approaches that work well in industries such as mining and civil engineering. However, as I explained in https://news.ycombinator.com/item?id=41914610, fundamental differences between those fields and the engineering of software make those management approaches a recipe for failure in software, even with ample engineering competence.

          Engineering is a craft based on science, using formalized theory to navigate tradeoffs to solve problems despite great intellectual difficulty. But, as I explained in https://news.ycombinator.com/item?id=41914332, our current formal theory of software is not strong enough to provide much help with that task. That does not mean that teaching people how to engineer software is hopeless; in that comment I outlined a general approach, and in https://news.ycombinator.com/item?id=41918787 I give more details about the specific things people engineering software need to know, which has relatively little overlap with the SWEBOK. This current state of theoretical knowledge means that you cannot meaningfully assess software engineers' competence by testing their mastery of formal theory, because the formal theory we have is only somewhat relevant to actually engineering software. Instead, you must observe them attempting to engineer some software. That is why credentialism is so especially counterproductive in this field. That would still be a fatal flaw in the SWEBOK even if its content were primarily focused on the actual engineering of software rather than project management.

          As I said in https://news.ycombinator.com/item?id=41914444, the health-care equivalent of the approach advocated by the SWEBOK is faith healing; the agricultural equivalent of the approach advocated by the SWEBOK was the Great Leap Forward; and the petroleum-exploration equivalent of the approach advocated by the SWEBOK is dowsing.

          ______

          † "The food here is terrible! And such small portions!"

          ‡ The engineering of software is real engineering, as Hillel Wayne convincingly demonstrated in https://www.hillelwayne.com/talks/crossover-project/, but this is still a controversial point. So I picked three fields which I hope everyone can agree are real engineering. The "software engineering" that the SWEBOK is about, and that Dijkstra was criticizing, is not the engineering of software and is not in fact engineering at all.

      • mixmastamyk 4 days ago

        SE is not CS of course. Very few of us will write compilers, for example.

    • globular-toast 4 days ago

      I completely agree. The trouble is we're so far away from this that all the people who learnt from a few tutorials and never read any books will try to defeat it at every step. They're here in these very comments.

      I get that it's possible to build working software without a certificate, in much the same way as someone can do their own electrics. But is it up to standard and safe for the next guy to work on? This is why we have standards.

    • osigurdson 4 days ago

      What are you hoping professional licensing might do for you? I can attest that outside of traditional engineering fields, licensing is completely useless and confers roughly the same level of prestige as a Costco membership. I'll send you my engineering ring in the mail if you like (Canada's cheesy contribution to engineering culture).

    • abtinf 4 days ago

      > if this is supposed to be the future of work

      The day computing becomes subject to professional licensure is the day the field of computing will fall into hopeless stagnation, just like every other such field.

      • lotsoweiners 4 days ago

        Maybe that’s not a bad thing…

        • rockemsockem 4 days ago

          Let me hear your pro-stagnation argument

          • lantry 4 days ago

            Here's my "pro-stagnation" argument: stagnation and stability are pretty much the same thing. There's a lot of infrastructure that we take for granted because it always works (water purification and distribution, bridges and roads, electrical generation and transmission, automobile engines, the quality of gasoline, the safety of food, etc). You trust that these things will work the way you expect, because they don't change very quickly. Is that stagnation or stability?

            • rockemsockem 4 days ago

              So I don't know about you, but I live in America where roads, electrical generation and transmission, water purification, and bridges are all in subpar shape.

              That's super broad and I think there are complex reasons why each of these has failed, but it's pretty clear that stagnation hasn't helped and has probably actively caused harm by letting incompetence become too common in these areas.

              • patmorgan23 4 days ago

                This is just not the case.

                The US has lots of infrastructure that needs repair or replacement, but there are very few areas that do not have clean water, or reliable electricity (Sans extreme weather which causes disruptions in every country), and roads and bridges are all safe to drive on (when was the last time you read about a bridge that collapsed from lack of maintenance?)

                The US has its issues, but it does actually have a huge amount of superb, world class infrastructure.

                • shiroiushi 4 days ago

                  >reliable electricity (Sans extreme weather which causes disruptions in every country)

                  Freezing temperatures do not cause widespread outages in properly-run countries.

                  >roads and bridges are all safe to drive on (when was the last time you read about a bridge that collapsed from lack of maintenance?)

                  2022, when the President was in town in Pittsburgh and the bridge there collapsed.

                • Jtsummers 4 days ago

                  > when was the last time you read about a bridge that collapsed from lack of maintenance?

                  2022.

                  https://en.wikipedia.org/wiki/Fern_Hollow_Bridge

          • patmorgan23 4 days ago

            Code that changes introduces new bugs, and new bugs can be new security issues. A lower velocity would hopefully mean fewer changes, but higher-quality, more thoroughly tested ones.

            • rockemsockem 3 days ago

              This is the best argument anyone has given in this thread.

              Strongly agree that fewer changes equals fewer bugs, it just comes down to trading that off with shipping value in your product.

          • Arainach 4 days ago

            Let's start by fixing the language. It's not stagnation, it's predictability.

            Civil and mechanical engineering are not static fields. They come up with new materials, new methods, new ideas. They have tooling to understand the impact of a proposed change and standard ways to test and validate things. It is much easier to predict how long it will take to both design and build things. These are all good things.

            We would all benefit from fewer crypto/AI startups and frameworks of the week, and more robust toolchains tested and evolved over decades.

            • rockemsockem 4 days ago

              Why do you think such wrong things about civil and mechanical engineering?

              Tell me about all the on time and under budget civil/mechanical engineering projects that are happening.

              Do you think that just because they have physics to lean on that they can just like press solve and have accurate estimates spit out?

              Edit: I totally agree that more long-lived battle tested software toolchains and libraries would be great though

              • mckn1ght 4 days ago

                How do you know things wouldn’t be much much worse if there were no standards for being a civil/structural engineer or architect that have been refined over long periods of time? Imagine municipalities taking the lowest bids by far thrown out there by any rando that decided they can make a few bucks by welding together the supports for a bridge or designing a really interesting building that will just cave in on itself a decade hence.

                • rockemsockem 4 days ago

                  There are tons of physical engineers working on safety critical hardware that are not required to have some BS piece of paper that says they're safe.

                  You do not need a credential to work on EV charging infrastructure, rockets, crew capsules to ferry astronauts to the ISS, or many, many other things.

                  That's how you know, because those fields are not less safe. It's an easy comparison.

                  • mckn1ght 3 days ago

                    > work on EV charging infrastructure

                    Could you expand on that? Are you saying that you don’t need a licensed electrician to connect a new EV charging terminal at installation time?

                    • rockemsockem 3 days ago

                      This thread is about engineers.

                      I am talking about engineers who design the EV charging terminal.

                • webmaven 3 days ago

                  It's not common anymore (like, in the past three decades), but "taking the lowest bid from some rando" is definitely still a thing.

              • Arainach 4 days ago

                Such delays are overwhelmingly political, not engineering. The local government demanding yet another environmental impact review is not an engineering cost - it is a scope change.

                • eacapeisfutuile 4 days ago

                  Scope change is really not a foreign concept in the field of software engineering, including politically driven

                • abtinf 3 days ago

                  Licensure injects politics into the heart of engineering.

    • rockemsockem 4 days ago

      Every time I see someone post this line of reasoning they talk like this, as if other engineering disciplines all have some cert that is the god-tier cert.

      While this is true for some engineering fields it's mostly not true and I think that's a good thing because credentialism is bad actually.

      Also, architects are not even engineers.

      • Arainach 4 days ago

        Credentialism is good. It provides both a trustworthy reference point and a method for punishment.

        If I want someone to do work, I want them to be licensed/certified. If they are flagrantly unsafe, I want a licensing board or similar to be able to strip that person of their ability to practice that profession. This raises public perception of the profession as a whole, avoids a market for lemons, and gives some baseline.

        There are too many decisions in life to be able to spend an hour (or more) researching every option. Credentials allow a framework of trust - I don't have to decide if I trust every single engineer; if they have passed their PE exam and not had their certification taken away that is a starting point.

        • rockemsockem 4 days ago

          Credentialism creates a false basis of trust and an arbitrary reference point.

          You're just arguing that you want to outsource your own decision making. You actually should interview your candidates before hiring them, whatever credential they have, because it actually is your job to ensure you work with high quality individuals.

          Credentialism basically allows the sort of low effort that you're describing and causes many places to rely solely on the credentials, which are obviously never sufficient to find high quality individuals.

          What are the jobs you're day dreaming about that require PE exams? I'd bet that requirement is much less common than you think.

          • Arainach 4 days ago

            Credentialism is for more than employers. When I need an electrician or other tradesperson to work on my house, credentials are beneficial. When plans are drawn up for a deck or extension to my house, credentials are beneficial when getting an engineering signoff. Knowing that the local medical facilties employ credentialed doctors is great when I need something done. Etc., etc.

            • rockemsockem 3 days ago

              Pretty sure engineers don't sign off on things like deck extensions, but I could be wrong.

              Credentials are insufficient for all of those. A credentialed plumber or electrician could flood or burn down your house and it might be hard to figure out the root-cause, so the credential slips. You still have to do due diligence to find competent people.

              I'll admit that for certain things which are easy enough that you can write down procedures for them that a credential can be valuable, but there are a reasonably small number of those things in the world and even when you do write the procedures down you're usually significantly constraining the type of project that individuals in that field can undertake. That constraining is a very important trade-off to consider when thinking about whether a credential is helpful.

            • tomasGiden 3 days ago

              I think complexity frameworks (like Cynefin) describe it pretty well. When the complexity is low, there are best practices (use a specific gauge of wire in an electrical installation in a house, or surgeons cleaning according to a specific process before a surgery), but as the complexity goes up, best practices are replaced with different good practices, and good practices with the exploration of different ideas. Certificates are very good when there are best practices, but the value diminishes as the complexity increases.

              So, how complex is software production? I'd say that there are seldom best practices but often good practices (for example DDD, clean code, and the testing pyramid) on the technical side. And then a lot of exploration on the business side (iterative development).

              So is a certificate of value? Maybe if you do Wordpress templates but not when you push the boundary of LLMs. And there’s a gray zone in between.

        • creer 4 days ago

          Isn't the reality of things that credentials are a low bar? Yes, even with the legal bar exam, or the PE license, etc. When you are hiring, are you really hiring JUST based on that low bar? No! That wouldn't make sense! For example, if you have a specific problem, most of the time you are looking for a lawyer who has already worked for a while in THAT specific field. The bar exam is not enough! I feel that's usually the case. And that makes sense. Why just specify "PE engineer" when there are lots of them who have at least some specialization in the direction you want?

        • eacapeisfutuile 4 days ago

          It would have zero value in every process of vetting someone. People don't care about years of verification in the form of degrees; who do you think will care about some "license to fucking code" given for reading some garbage pdf?

    • eacapeisfutuile 4 days ago

      > Every company I go to, the base of knowledge of all the engineers is a complete crapshoot

      Sounds like unfortunate companies to go to.

      > much less teach them about it on the job

      That is literally how and where people learn the job pretty much everywhere.

      > it needs to be up to date

      Yeah, it will never be.

      > we need to prove people have actually read it and understood it

      Why/how?

      • Jtsummers 4 days ago

        >> it needs to be up to date

        > Yeah, it will never be.

        And this particular document will never be up to date. SWEBOK gets updated on the order of every 5-10 years, so it's always going to be dated. This is one reason it's a poor document for its purpose. If they want it to be relevant it needs to be continuously developed and updated. Hire active editors for the different "knowledge areas" (consider even losing that notion, it's very CMMI which is not something to aspire to) and solicit contributions from practitioners to continuously add to, remove from, correct, and amend the different sections. Build out the document they actually want instead of wasting 5-10 years publishing an updated, but still out-of-date, document.

        • nradov 4 days ago

          I think you missed the purpose of the SWEBOK. It is intended to cover basic fundamentals which don't change much decade by decade. Not the latest JavaScript framework or whatever. Just about everything in the previous version from 2014 is still relevant today.

          • Jtsummers 4 days ago

            They took 10 years since v3 (over 20 if we count from the start) to include security in their discussions. This is my primary issue with the text: It should be a living document.

            Choosing a dead document or "mostly dead" if we're generous (with a new version every decade) for a body of knowledge that is constantly growing and developing makes no sense. If you want to publish it as a PDF that's ok, but it needs continuous updates, not decadal updates.

            In 2014 Agile barely got a passing mention in the book, 13 years after the term had come into existence and a decade after it had already made major waves in software engineering (many of the concepts that fell under Agile were also already published in the 1990s or earlier and barely mentioned or not mentioned). OOP gets a more substantive section in the 2024 version than the 2014 version when OOP languages had been out for decades before the 2014 one was published. In their added chapter on security they don't even have references for any of section 6.

            All of these are things that can be addressed by making it a living document. Update it more regularly so it's actually relevant instead of catching up on ideas from 2009 in 2024 (DevOps as a term dates at least back that far) or ideas from the 1960s and 1970s (OOP) in 2024.

            Practitioners are better off reading Wikipedia than this document. It's more comprehensive, more up to date, and has more references they can use to follow up on topics than this book does.

            • Kerrick 2 days ago

              > Choosing a dead document or "mostly dead" if we're generous for a body of knowledge that is constantly growing and developing makes no sense.

              As the document says, it is _not_ the body of knowledge. It is a guide to the body of knowledge. The body of knowledge exists elsewhere, in the published literature.

              • Jtsummers 2 days ago

                That's not actually any better, it's worse. A guide that's not updated for 5-10 years means that it's missing 5-10 years of material and newer references. Plus, as a guide it still fails: numerous sections are missing any references to follow. That is, it is a guide that often points you nowhere.

                As I pointed out before: It took over 20 years for them to add a section on security. That was a critical issue when it first came out, an even bigger issue in the 2010s when v3 came out, and they only now got around to it. And that chapter lacks references for an entire section. That is not a good guide.

                Make it a living document and keep it continuously updated with either expanded or deepened coverage, kept up to date with what's been learned in the industry. Then it will be useful. Otherwise, just go to Wikipedia and you'll get a better resource.

          • eacapeisfutuile 4 days ago

            I don’t think anyone mentioned JavaScript frameworks. Yes, everything is perpetually relevant if it is made generic enough, but then it is harder for it to be actionable and truly useful.

          • eacapeisfutuile 4 days ago

            That made me curious, is there a changelog somewhere on changes between editions?

    • yowlingcat 2 days ago

      > I feel strongly that we need explicit bodies of knowledge, along with certifications for having been trained on it.

      ...

      > That's not how engineering should work. If I hire an architect, I shouldn't have to quiz them to find out if they understand Young's Modulus

      Why do you feel strongly about it? Why isn't that how software engineering should work?

      While I don't disagree with your belief that improved software engineering skill foundations would be better for the industry as a whole, I find your conclusion unpersuasive because it seems to imply that "something is better than nothing." But as this sibling comments alludes to (an ACM rebuttal to SWEBOK):

      https://news.ycombinator.com/item?id=41907822

      https://web.archive.org/web/20000815071233/http://www.acm.or...

      I find the ACM's argument more persuasive:

      ```

      The SWEBOK effort uses the notion of “generally accepted knowledge” as a cornerstone, specifically excluding “practices used only for specific types of software.” We believe very strongly that, for software, this approach is highly likely to fail and that the opposite approach — primarily focusing on specific domains — is far more likely to succeed. The central reason is that software engineering addresses a much broader scope than traditional engineering disciplines. ...

      ```

      And I tend to agree with their conclusion:

      ```

      Overall, our assessment has led us to the conclusion that the SWEBOK effort is geared aggressively towards trying to define an overall level of professional practice for all of software engineering that would implicitly provide assurances to the public. Furthermore, we believe strongly that it will fail to lead to an achievable level of professional practice in a reasonable time and that, in failing, it may lead to a situation in which the public is provided with false assurances of the quality of software systems ...

      ```

      To conclude, I'll address a point you raised which I have a hunch could be the root premise of your argument:

      > Every company I go to, the base of knowledge of all the engineers is a complete crapshoot. Most of them lack fundamental knowledge...

      If this is what you've seen at every company you go to, it could be that the common thread is you. I've worked at a variety of companies over my career, and quite a few suffered from the issue you mention. But on the whole, at least half of them (and by proxy, the engineers and engineering orgs I've worked in) have been the exact opposite. The technical bar is very high, the body of knowledge is established, clearly defined, and rigorously enforced; in doing so, these organizations were able to ship durable, high quality code with high velocity and a low defect rate even as the business expanded dramatically and we experienced setbacks and false starts. While I don't think my experience is universal, I also don't think it's unique either.

      My unsolicited advice to you would be to try and find a company that has the technical bar you would like to see and work there for a while. Failure is a much poorer teacher than success. There is certainly room for software engineering to evolve as a practice, but for the aforementioned reasons (articulated by folks in far greater, well thought through detail than I have), I don't believe SWEBOK holds the keys.

  • pnathan 4 days ago

    SWEBOK is an attempt to look at the whole ox.

    Cook Ding was cutting up an ox for Lord Wenhui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee — zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Jingshou music.

    “Ah, this is marvelous!” said Lord Wenhui. “Imagine skill reaching such heights!”

    Cook Ding laid down his knife and replied, “What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now — now I go at it by spirit and don’t look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.

    “A good cook changes his knife once a year — because he cuts. A mediocre cook changes his knife once a month — because he hacks. I’ve had this knife of mine for nineteen years and I’ve cut up thousands of oxen with it, and yet the blade is as good as though it had just come from the grindstone. There are spaces between the joints, and the blade of the knife has really no thickness. If you insert what has no thickness into such spaces, then there’s plenty of room — more than enough for the blade to play about in. That’s why after nineteen years the blade of my knife is still as good as when it first came from the grindstone.

    “However, whenever I come to a complicated place, I size up the difficulties, tell myself to watch out and be careful, keep my eyes on what I’m doing, work very slowly, and move the knife with the greatest subtlety, until — flop! the whole thing comes apart like a clod of earth crumbling to the ground. I stand there holding the knife and look all around me, completely satisfied and reluctant to move on, and then I wipe off the knife and put it away.”

    “Excellent!” said Lord Wenhui. “I have heard the words of Cook Ding and learned how to care for life!”

    • mbivert 4 days ago

      I'm convinced slowly feeding students, and having them produce good low-level codebase(s) (e.g. OSs, compilers), is a great Way to "holistically" teach them CS, much better than what's happening usually. "C is a razor-sharp tool"!

    • numbsafari 4 days ago

      Even he admits, he had to start somewhere.

      • pnathan 4 days ago

        The Master might say something like this, if translated crudely -

        Software engineering is programming professionally, with a dialogue on quality. Everything else is details.

        The IEEE has been riding this horse for a very long time, in the face of very serious criticism (see the ACMs comments from a quarter century ago).

        The presentation of it is _not even wrong_. It reads like a mid-level manager at a very old enterprise firm wrote out what's important at their firm, and took no material care for other ways. The SWEBOK has been that way for as long as I can remember. (An aside: my experience of Software Engineering academia has been so deeply negative that I wrote the field off in 2013. Decoupled from reality, PM oriented, toy studies — irrelevant. The SWEBOK is an artifact of that world. I should dip back in... Maybe Google & MS Research have done the real work here...)

        There's a body of _practice_ that is mildly incidental. Most acronyms are fads. Lots of ephemeral technologies that only exist as painful grimaces: CORBA, SOAP, etc.

        Project management and quality management are also essentially contingent. One company does this. One that. Waterfall here. Agile there. Whirlpool the other.

        What you're left with as non contingent and timeless is in the area of compilers, algorithms, etc. Which is not SWE at all.

        If I were to write a swe body of knowledge, it would be in koan form, more than likely.

        • q7xvh97o2pDhNrh 4 days ago

          > The IEEE has been riding this horse for a very long time

          Well, there's your mistake right there. You're supposed to be riding an ox.

          All this talk of oxen and horses got me curious about the PDF, so I went and took a look. It's really far worse than you've described.

          I couldn't stomach it for too long, but here's some highlights:

          (1) The first ~65 pages are about "requirements gathering." Page 60 offers up this gem of insight:

              Priority = (Value * (1 - Risk)) / Cost
          
          (2) The next hundreds of pages go through topics in sequence, like "Architecture" and "Design" (who knew they were different?). Naturally, "Security" is slapped on several hundred pages later.

          I couldn't make it through the whole PDF, in all honesty. But I'm quite certain the soul of software engineering is nowhere to be found in there; they've eliminated it entirely and replaced it with stamp-collecting and checklists.
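
          For what it's worth, the quoted formula (reading the unbalanced parenthesis as a typo for `Priority = (Value * (1 - Risk)) / Cost`) reduces to a one-liner. A minimal sketch in Python, with names taken straight from the quote and the range checks being my own assumptions:

          ```python
          def priority(value: float, risk: float, cost: float) -> float:
              """Requirement prioritization as quoted from SWEBOK v4, page 60,
              read as Priority = (Value * (1 - Risk)) / Cost."""
              if cost <= 0:
                  raise ValueError("cost must be positive")
              if not 0.0 <= risk <= 1.0:
                  raise ValueError("risk is treated as a probability in [0, 1]")
              return value * (1.0 - risk) / cost

          # A valuable, low-risk, cheap requirement outranks a risky, costly one.
          assert priority(value=10, risk=0.1, cost=2) > priority(value=10, risk=0.8, cost=5)
          ```

          Whether collapsing value, risk, and cost into three scalars is a useful model of prioritization is, of course, exactly the kind of thing being questioned above.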

          • Kerrick 2 days ago

            > like "Architecture" and "Design" (who knew they were different?)

            O'Reilly, for one. ISBN 9781098134358's blurb says, "You'll learn the distinction between architecture and design..."

        • walterbell 4 days ago

          > If I were to write a swe body of knowledge, it would be in koan form, more than likely.

          Please do! You can continue with standalone HN comments, which can be upvoted to enlighten human and AI bot alike.

        • kragen 2 days ago

          I'm interested to hear what you think of my take on the problems with the SWEBOK and the academic "software engineering" field in https://news.ycombinator.com/item?id=41931491.

        • vundercind 4 days ago

          > If I were to write a swe body of knowledge, it would be in koan form, more than likely.

          http://www.thecodelesscode.com/contents

  • beryilma 4 days ago

    > The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide), published by the IEEE Computer Society (IEEE CS), represents the current state of generally accepted, consensus-based knowledge emanating from the interplay between software engineering theory and practice. Its objectives include the provision of guidance for learners, researchers, and practitioners to identify and share a common understanding of “generally accepted knowledge” in software engineering, defining the boundary between software engineering and related disciplines, and providing a foundation for certifications and educational curricula.

  • epolanski 4 days ago

    After seeing so much negativity and controversy around this book in the comments, I'm quite convinced to give it a read.

    I've seen so little "engineering" in the software world, regardless of the company and how many Ivy League devs it hires, that I'm fully convinced a work of encoding software engineering knowledge is worth the effort. Even attempts like this are valuable reads in such a gigantic vacuum, if only to start a discussion and to be able to disagree on definitions and practices.

    • rockemsockem 4 days ago

      I love the notion of having standard definitions and practices that are specifically not agreed on and will be argued about every time they come up.

      • epolanski 3 days ago

        To move forward you need a starting place.

  • tptacek 4 days ago

    SWEBOK 4 adds a dedicated section for security, but it's painfully 2012 (testing, for instance, centers on the old industry-driven "SAST" vs. "DAST" distinction). It also promotes stuff like Common Criteria and CVSS. The "domain-specific" security section could have been pulled out of the OWASP wiki from 2012 as well: "cloud", "IOT", "machine learning".

    • codetrotter 4 days ago

      Are there any freely available books you would recommend for 2024 security in software engineering?

      (Freely available in the same sense that the SWEBOK is I mean; you can read it free of charge without DRM and without having to resort to piracy. Doesn't have to be a fully free book that goes as far as to allow modification and redistribution although that is an extra nice bonus if any of your suggested books are like that.)

    • glwtta 2 days ago

      Apparently I am also stuck in 2012 - are we not doing cloud and machine learning anymore?

    • mixmastamyk 4 days ago

      Believe it is supposed to be slow-moving, keeping to settled matters. Have those fallen out of favor, or been shown to be wrong already?

      • tptacek 3 days ago

        I would not describe SWEBOK's breakdown of software security as "settled".

  • rhythane 4 days ago

    This book actually makes some sense to me as a software engineer. We’ve all learned this at school, but it was just scattered pieces of knowledge. This book offers a systematic organization of knowledge that's useful in production. The content is not really for learning, but for quick reference and review. The organization might not be perfect, but it really is a way of reflecting on our understanding of this field.

    • viraptor 4 days ago

      Did we all? I haven't done a lot of this during my master's and neither did any of my friends (that's across multiple countries/unis).

      But yeah, it's a really interesting way to organise the knowledge.

      • globular-toast 4 days ago

        What did you do?

        • viraptor 3 days ago

          Everything apart from design or software architecture as such. (Unless you include that one weird class on UML and Sybase) Math, electronics, algorithms, encryption, physics, legal side of things like gdpr, networking, compilers, UX, embedded systems, chip design, different CPU architectures, reading research papers, databases (about acid, not specific implementations), ...

          But the part about "how to manage the process of creating this" and related ideas you were basically supposed to figure out on your own, or actively ask the teachers about. It feels very sink-or-swim now in retrospect. I was never explicitly taught about testing, for example, but if you didn't write tests for your projects, I don't think you'd complete them.

  • abtinf 4 days ago

    A competing and altogether more entertaining and comprehensive body of knowledge: https://grugbrain.dev/

  • kazinator 4 days ago

    > Popular OOP languages include C++, C#, Cobol 2002, Java, Python, Lisp, Perl, Object Pascal, Ruby and Smalltalk.

    :)

  • miffy900 4 days ago

    So at the start of each chapter, the book has a table of abbreviations and their definitions, called 'Acronyms'. To whoever wrote or edited the book: please look up the definition of the word 'acronym': "an abbreviation formed from the initial letters of other words and pronounced as a word (e.g. NASA)." Not all of the abbreviations listed are acronyms! Most are just plain old initialisms.

    Since when has anyone ever tried to pronounce 'GPS' as anything other than G-P-S? Also "ECS" = "Ecosystem"? Maybe I'm just crazy but I've never heard or read anyone abbreviate ecosystem as 'ECS'. I've come across ECS as entity component system in video game engines, but never as just 'ecosystem'. Also it's defined exactly once in chapter 5 and then never used again in the book. Why even bother to mention it then?

    Oh, and they publish it as a PDF, but with no actual page numbers?

    • jcarrico 4 days ago

      From Merriam Webster:

      a word (such as NATO, radar, or laser) formed from the initial letter or letters of each of the successive parts or major parts of a compound term

      also : an abbreviation (such as FBI) formed from initial letters : initialism

      It appears the meaning of the word has changed over time.

      • globular-toast 4 days ago

        There's literally an acronym for that type of acronym that itself is not an acronym: TLA. Three-letter acronym. I get the GP's frustration. When a word that you know the definition of is lost it feels bad.

        The hardest for me to accept is the loss of alternate. It now means the same thing as alternative, but it used to refer to switching between possible states, usually in an oscillating manner. Nowadays I alternate between caring and not caring.

        • Kerrick 2 days ago

          At least alternate and alternate have different pronunciations.

      • floydnoel 4 days ago

        yeah, every time i see somebody call an initialism an acronym i just shrug because you can't argue with words how people use them, irregardless.

  • kragen 4 days ago

    It's so unfortunate that this effort is still alive. The ACM canceled its involvement for excellent reasons which are worth reading: https://web.archive.org/web/20000815071233/http://www.acm.or...

    It's probably also worth reading Dijkstra's assessment of the "software engineering" field (roughly coextensive with what the SWEBOK attempts to cover) from EWD1036, 36 years ago.

    > Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".

    https://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF

    The ACM's criticisms, however, are much harsher and much more closely focused on the ill-conceived SWEBOK project.

    The IEEE's continued involvement calls the IEEE's own credibility and integrity into question—as do its continued opposition to open-access publishing and its recent history of publishing embarrassingly incompetent technical misinformation in IEEE Spectrum (cf., e.g., https://news.ycombinator.com/item?id=41593788, though there are many other examples). What is going on at IEEE?

    • TrueDuality 4 days ago

      Wanted to call out the specific requirements for what the ACM wanted out of their participation in creating a core body of knowledge (from the linked reasoning):

          * It must reflect actual achievable good practice that ensures quality consistent with the stated interest; it is not that following such practices are guaranteed to produce perfect software systems, but rather that doing so can provide reasonably intuitive expectations of quality.
          * It must delineate roles among the participants in a software project.
          * It must identify the differential expertise of specialties within software engineering.
          * It must command the respect of the community.
          * It must embrace change in each and every dimension of its definition; that is, it must be associated with a robust process for ensuring that it is continually updated to account for the rapid change both in knowledge in software engineering and also in the underlying technologies.
      
      It then details exactly how SWEBOK fails to meet those (which all still seem to be relevant) and comes to the following scathing conclusion:

          Overall, it is clear that the SWEBOK effort is structurally unable to satisfy any substantial set of
      the requirements we identified for bodies of knowledge in software engineering, independent of its specific content.

      I haven't read the SWEBOK but some spot checking and a review of the ToC seems to indicate they have not meaningfully taken that criticism into account.

      • nradov 4 days ago

        The ACM's insistence on clearly delineated roles and specialties seems so bizarre and misguided. Having defined roles necessarily implies some rigidity in allowed process and organizational structure, which seems out of scope for an engineering body of knowledge.

        If you insist on defined roles then you end up with something like Scaled Agile Framework (SAFe) or Large Scale Scrum (LeSS). Which aren't necessarily bad methodologies if you're running a huge enterprise with a complex product portfolio and need to get productive work out of mediocre resources. But not good approaches for other types of organizations. The SWEBOK, for better or worse, largely steers clear of those issues.

        • kragen 3 days ago

          This is actually a general problem with the SWEBOK: it isn't an engineering body of knowledge. It's a set of management practices like the ones you describe. It doesn't steer clear of those issues, at all.

          • nradov 3 days ago

            Only a small fraction of the SWEBOK covers management practices and it doesn't dictate any particular methodology. Competent engineers might not need to do management but they have to understand at least the basics of the management context in which they operate.

            • kragen 3 days ago

              I agree with your second sentence, but your first sentence is pretty profoundly incorrect. Each of its 413 pages is divided into two columns. I generated a random sample of 10 page numbers associated with column numbers as follows:

                  >>> import random
                  >>> r = random.SystemRandom()
                  >>> [(r.randrange(1, 414), r.randrange(1, 3)) for i in range(10)]
                  [(299, 1), (164, 2), (292, 1), (246, 2), (205, 2), (113, 1), (167, 2), (393, 2), (16, 1), (129, 2)]
              
              Page 299/413 column 1 contains: part of a confused description of mathematical optimization in the sense of finding infima, incorrectly conflating it with space-time tradeoffs in software, which is at least a software engineering topic; and the beginning of a section about "multiple-attribute decision making", which is almost entirely about the kinds of decision-making done by corporate management. Though software design is given lip service, if you dig into the two particular "design" approaches they mention, they turn out to be about corporate management again, with concerns such as brainstorming sessions, identifying business cost drivers, staff headcount, presenting ideas to committees, etc. Conclusion: project management, not software engineering.

              Page 164/413 (6-12) column 2 is about corporate operations (for which telemetry can be used), corporate operational risk management, and automating operational tasks to improve corporate efficiency. Conclusion: project management, not software engineering.

              Page 292/413 (15-3) column 1 is about software engineering economics, specifically proposals and cash flow. Project management, not software engineering.

              Page 246/413 (11-12) is a table summarizing chapter 11, which contains both project management and software engineering elements. I'm going to eliminate this point from the sample as being too much work to summarize fully and too hard to avoid interpretation bias.

              Page 205/413 (9-13) column 2 is about software engineering management issues such as the difficulty of estimation, the project risks posed by the rate of change of the underlying technology, metrics for managing software, and what software organizational engineering managers should know. Project management, not software engineering.

              Page 113/413 (4-14) column 1 is about what a platform standard is, TDD, and DevOps. Mostly software engineering, not project management.

              Page 167/413 (6-15) is another summary table page similar to page 246, so I'm eliminating it too.

              Page 393/413 (A-5) column 2 is about the SWEBOK itself and the documents it draws from and contains no information about either project management or software engineering.

              Page 16/413 (xv) is part of the table of contents, so I'm eliminating it as well.

              Page 129/413 (5-12) column 2 is about random testing (software engineering), "evidence-based software engineering" (an utterly vapid section which contains no information about software engineering, project management, or anything else, as far as I can tell), and test cases that force exceptions to happen (software engineering). Conclusion: software engineering, not project management.

              So of the seven non-eliminated randomly sampled half-pages in the document, four are about project management, two are about software engineering, and the seventh is just about the SWEBOK. I guess my declaration that it's just a set of management practices was incorrect. It's only mostly a set of management practices. It's not at all only a small fraction.

              • nradov 3 days ago

                Your analysis doesn't support your claim. Just to point out one basic flaw, real engineering always has to account for financial realities including cash flow as a constraint or optimization parameter.

                I don't think you even understand what software engineering is. If we want to limit the discussion to just software development as a craft and take out the engineering aspects then you might have a point, but that's not what the SWEBOK is about.

                And in fairness, most real world software projects can produce good enough results without applying real engineering practices. If you're just building yet another CRUD web app then rigorous engineering is hardly required or even economically justified.

                • kragen 3 days ago

                  While I agree that "real engineering always has to account for financial realities including cash flow as a constraint or optimization parameter" and that, as I said, "Competent engineers (...) have to understand at least the basics of the management context in which they operate," that's no justification for replacing real engineering with project management in the curriculum, which is what the SWEBOK is attempting to do—as my analysis conclusively shows!

                  Contrast, for example, MIT's required courses for a degree in mechanical engineering (https://catalog.mit.edu/degree-charts/mechanical-engineering...): 13 required core subjects of which zero are project-management stuff; one course chosen from a menu of four of which one is "The Product Engineering Process" and another "Engineering Systems Design"; and two electives chosen from a menu of 22, of which three are project-management stuff. The core subjects are Mechanics and Materials (I and II), Dynamics and Control (I and II), Thermal-Fluids Engineering (I and II), Design and Manufacturing (I and II), Numerical Computation for Mechanical Engineers, Mechanical Engineering Tools, Measurement and Instrumentation, Differential Equations, and your undergraduate thesis.

                  Berkeley's equivalent is https://me.berkeley.edu/wp-content/uploads/2022/03/ME-Flowch..., with math courses, chemistry courses, physics courses, and engineering courses such as ENGIN 7 (Introduction to Computer Programming for Scientists and Engineers), ENGIN 26 (Three-Dimensional Modeling for Design), ENGIN 29 (Manufacturing and Design Communication, which might sound like a project management course but is actually about things like manufacturing process tolerances and dimensioning), MEC ENG 40 (Thermodynamics), and MEC ENG 132 (Dynamic Systems and Feedback). Again, as far as I can tell, there's virtually no project-management material in here. Project management stuff doesn't constitute one tenth of the curriculum, much less two thirds of it.

                  The software equivalent of Thermal-Fluids Engineering II, Differential Equations, or Thermodynamics is not, I'm sorry, proposals and cash flow, nor is it multiple-attribute decision making, nor is it corporate operational risk management.

                  The same holds true of chemical engineering (https://catalog.mit.edu/degree-charts/chemical-engineering-c...) or electrical engineering (https://catalog.mit.edu/degree-charts/electrical-engineering...) or basically any other engineering field except "systems engineering". In all of these courses you spend basically all of your time studying the thing your engineering is nominally focused on and the science you use, such as chemical reactions, thermodynamics, fluid mechanics, separation processes, algorithms, electric circuits, and the theory of dynamical systems, and very little time on HR, accounting, and project management.

                  That's because HR, accounting, and project management aren't real engineering, much as the SWEBOK tries to pretend they are.

                  Real engineering is a craft based on science, navigating tradeoffs to solve problems despite great intellectual difficulty, and that's just as true of software—even yet another CRUD web app—as of gears, hydraulic cylinders, electric circuits, or chemical plants.

                  See https://news.ycombinator.com/item?id=41918787 for my thoughts on what a real-engineering curriculum about software would include.

    • bigiain 4 days ago

      On a tangent here, but...

      > The ACM canceled its involvement for excellent reasons which are worth reading: https://web.archive.org/web/20000815071233/http://www.acm.or...

      This jumped out at me from the first para there:

      " ... also stating its opposition to licensing software engineers, on the grounds that licensing is premature ... "

      I wonder what ACM's current thinking on licensing software engineers is almost 25 years further on?

    • beryilma 4 days ago

      As much as I like Dijkstra and this particular article of his (it is an assigned reading in my "Software Engineering" class), developing any large scale software that we have today starting from formal methods is just a fantasy.

      I understand the importance of learning formal methods (discrete math, logic, algorithms, etc.), but they are not nearly enough to help someone get started with a software project and succeed at it.

      So, if not "software engineering", then what should we teach to a student who is going to be thrown into the software world as it exists in its current form?

      • nradov 4 days ago

        Formal methods have advanced enough now that any competent software engineer should know that it is at least an option. It's obviously not practical or necessary to apply everywhere but in any sufficiently large piece of software there are likely a few modules where applying formal methods would allow for faster, higher quality delivery at lower cost. Making those trade-offs and selecting appropriate approaches is the fundamental essence of engineering as a profession.

        • kragen 2 days ago

          I mostly agree, although I differ on your last point.

      • kragen 3 days ago

        Maybe if developing large-scale software starting with the formal methods we have today is just a fantasy—and that's plausible—we shouldn't be trying to formalize "software engineering". Imagine trying to formalize medicine before Pasteur, motor engineering before Carnot, mechanical engineering before Reuleaux, or structural engineering before Galileo. Today, we do have relevant bodies of formal knowledge that are enough to help someone get started with a project in those areas and succeed at it. As you say, that knowledge doesn't exist yet for software.

        So what would you teach an architect in 01530 or a mechanical engineer in 01850? In addition to the relatively sparse formal knowledge that did exist, you'd make them study designs that are known to have worked, you'd apprentice them to currently successful master architects or mechanical engineers, and you'd arrange for their apprenticeship to give them experience doing the things people already do know how to do.

      • numbsafari 4 days ago

        Since we’re talking Dijkstra, perhaps “structured programming” is a starting place.

    • Rochus 4 days ago

      > The ACM canceled its involvement for excellent reasons which are worth reading

      Interesting, thanks for the hint; the paper is from 2000, though, and it seems it would need an update. I just checked e.g. the "roles" point, and there appear to have been significant changes since then. I also think the ACM has rather different goals than the IEEE.

      > It's probably also worth reading Dijkstra's assessment of the "software engineering" field

      Well, was there anything or anyone that Dijkstra didn't rant about ;-)

    • michaelsbradley 4 days ago

      Any suggestion for a handbook or compendium that you consider to be a worthy alternative?

      • lifeisstillgood 4 days ago

        The thing here is, this reads like a prissy textbook that no-one can really disagree with but is still not gripping the reality. More HR handbook than blood-red manual.

        For example, project management. The book covers this, but in the usual wrong-headed way of imagining there are executives with clear-eyed Vision who lay down directives.

        This is of course not how most projects in most companies are started. It’s a mess - reality impinges on the organisation, and pain, loss and frustration result in people making fixes and adjustments. Some tactical fixes are put in place, covered by “business as usual”; usually more than one enthusiastic manager thinks their solution will be the best, and a mixture of politics and pragmatism results in a competition to be the one project that will solve the problem and get the blessed budget. By the time there is an official project plan, two implementations already exist and enough lessons have been learnt that the problem is easily solved, but with sufficient funding all of that will be abandoned and rebuilt from scratch, at a pace so furious, to meet such unrealistic expectations, that corners will be cut, leading …

        That manual needs to be written.

        • epolanski 4 days ago

          You know that your post could just as well be about mining operations or highway construction rather than software, and everything would apply the same?

          I really don't see the argument against the book here in your comment.

          • kragen 3 days ago

            There are three absolutely key differences here.

            The first is that, if you get a four-year college degree in mining or civil engineering, you will not spend much of those four years studying management practices; you will spend it studying geology, the mechanical properties of rocks and soil, hydrology (how water flows underground), and existing designs that are known to work well. You probably will not build a mine or a highway, but you will design many of them, and your designs will be evaluated by people who have built mines and highways.

            The second is related to why you will not build a mine or highway in those four years: those are inherently large projects that require a lot of capital, a lot of people, and at least months and often decades. Mining companies don't have to worry about getting outcompeted by someone digging a mine in their basement; even for-profit toll highway operators similarly don't have to worry about some midnight engineer beating them to market with a hobby highway he built on the weekends. Consequently, it never happens that the company has built two highways already by the time there is an official project plan, and I am reliably informed that it doesn't happen much with mines either.

            The third is that the value produced by mining operations and highways is relatively predictable, as measured by revenue, even if profits are not guaranteed to exist at all. I don't want to overstate this; it's common for mineral commodity prices and traffic patterns to vary by factors of three or more by the time you are in production. By contrast, much software is a winner-take-all hits-driven business, like Hollywood movies. There's generally no way that adding an extra offramp to a highway or an extra excavator to a mine will increase revenue by two orders of magnitude, while that kind of thing is commonplace in software. That means that you win at building highways and mining largely by controlling costs, which is a matter of decreasing variance, while you win at software by "hitting the high notes", which is a matter of increasing variance.

            So trying to run a software project like a coal mine or a highway construction project is a recipe for failure.

            • lifeisstillgood 3 days ago

              And as a side note, this is why LLMs are such a huge sugar rush for large companies. The performance of LLMs is directly correlated to capital investment (in building the model and having millions of GPUs to process requests).

              Software rarely has a system that someone cannot undercut from their bedroom. LLMs are one such (whereas computer vision was all about clever edge-finding algorithms, LLMs are brute force, for the moment).

              Imagine being able to turn to your investors and say “the laws of physics mean I can take your money and some open-source nerd absolutely cannot ruin us all next month.”

              • kragen 3 days ago

                That's an interesting thought, yeah. But it also limits the possible return on that capital, I think.

        • fragmede 4 days ago

          You seem to have quite a bit of lived experience with that particular version of project management. Why not write it yourself?

      • kragen 3 days ago

        Although any random bathroom-wall graffiti is better than the SWEBOK, I don't know what to recommend that's actually good. Part of the problem is that people still suck at programming.

        “How to report bugs effectively” <https://www.chiark.greenend.org.uk/~sgtatham/bugs.html> is probably the highest-bang-for-buck reading on software engineering.

        Not having read it, I hear The Pragmatic Programmer is pretty good. Code Complete was pretty great at the time. The Practice of Programming covers most of the same material but is much more compact and higher in quality; The C Programming Language, by one of the same authors, also teaches significant things. The Architecture of Open-Source Applications series isn't a handbook, but offers some pretty good ideas: https://aosabook.org/en/

        Here are some key topics such a handbook or compendium ought to cover:

        - How to think logically. This is crucial not only for debugging but also for formulating problems in such a way that you can program them into a computer. Programming problems that are small enough to fit into a programming interview can usually be solved, though badly, simply by rephrasing them in predicate logic (with some math, but usually not much) and mechanically transforming it into structured control flow. Real-world programming problems usually can't, but do have numerous such subproblems. I don't know how to teach this, but that's just my own incompetence at teaching.

        - Debugging. You'll spend a lot of your time debugging, and there's more to debugging than just thinking logically. You also need to formulate good hypotheses (out of the whole set of logically possible ones) and run controlled experiments to validate them. There's a whole panoply of techniques available here, including testing, logging, input record and replay, delta debugging, stack trace analysis, breakpoint debuggers, metrics anomaly detection, and membrane interposition with things like strace.

        - Testing. Though I mentioned this as a debugging technique, testing has a lot more applications than just debugging. Automated tests are crucial for finding and diagnosing bugs, and can also be used for design, performance profiling, and interface documentation. Manual tests are also crucial for finding and diagnosing bugs, and can also tell you about usability and reliability. There are a lot of techniques to learn here too, including unit testing, fuzzing, property-based testing, various kinds of test doubles (including mock objects), etc.

        - Version tracking. Git is a huge improvement over CVS, but CVS is a huge improvement over Jupyter notebooks. Version control facilitates delta debugging, of course, but also protects against accidental typo insertion, overwriting new code with old code, losing your source code without backups, not being able to tell what your coworkers did, etc. And GitLab, Gitea, GitHub, etc., are useful in lots of ways.

        - Reproducibility more generally. Debugging irreproducible problems is much more difficult, and source-code version tracking is only the start. It's very helpful to be able to reproduce your deployment environment(s), whether with Docker or with something else. When you can reproduce computational results, you can cache them safely, which is important for optimization.

        - Stack Overflow. It's pretty common that you can find solutions to your problems easily on Stack Overflow and similar fora; twin pitfalls are blindly copying and pasting code from it without understanding it, and failing to take advantage of it even when it would greatly accelerate your progress.

        - ChatGPT. We're still figuring out how to use large language models. Some promising approaches seem to be asking ChatGPT what some code does, how to use an unfamiliar API to accomplish some task that requires several calls, or how to implement an unfamiliar algorithm; and using ChatGPT as a simulated user for user testing. This has twin pitfalls similar to Stack Overflow. Asking it to write production-quality code for you tends to waste more time debugging its many carefully concealed bugs than it would take you to just write the code, but sometimes it may come up with a fresh approach you wouldn't have thought of.

        - Using documentation in general. It's common for novice programmers to use poor-quality sites like w3schools instead of authoritative sites like python.org or MDN, and to be unfamiliar with the text of the standards they're nominally programming to. It's as if they think that any website that ranks well on Google is trustworthy! I've often found it very helpful to be able to look up the official definitions of things, and often official documentation has better ways to do things than outdated third-party answers. Writing documentation is actually a key part of this skill.

        - Databases. There are a lot of times when storing your data in a transactional SQL database will save you an enormous amount of development effort, for several reasons: normalization makes invalid states unrepresentable; SQL, though verbose, can commonly express things in a fairly readable line or two that would take a page or more of nested loops, and many ORMs are about as good as SQL for many queries; transactions greatly simplify concurrency; and often it's easier to horizontally scale a SQL database than simpler alternatives. Not every application benefits from SQL, but applications that suffer from not using it are commonplace. Lacking data normalization, they suffer many easily avoidable bugs, and using procedural code where they could use SQL, they suffer not only more bugs but also difficulty in understanding and modification.

        - Algorithms and data structures. SQL doesn't solve all your data storage and querying problems. As Zachary Vance said, "Usually you should do everything the simplest possible way, and if that fails, by brute force." But sometimes that doesn't work either. Writing a ray tracer, a Sudoku solver, a maze generator, or an NPC pathfinding algorithm doesn't get especially easier when you add SQL to the equation, and brute force will get you only so far. The study of algorithms can convert impossible programming problems into easy programming problems, and I think it may also be helpful for learning to think logically. The pitfall here is that it's easy to confuse the study of existing data structures and algorithms with software engineering as a whole.

        - Design. It's always easy to add functionality to a small program, but hard to add functionality to a large program. But the order of growth of this difficulty depends on something we call "design". Well-designed large software can't be as easy to add functionality to as small software, but it can be much, much easier than poorly-designed large software. This, more than manpower or anything else, is what ultimately limits the functionality of software. It has more to do with how the pieces of the software are connected together than with how each one of them is written. Ultimately it has a profound impact on how each one of them is written. This is kind of a self-similar or fractal concern, applying at every level of composition that's bigger than a statement, and it's easy to have good high-level design and bad low-level design or vice versa. The best design is simple, but simplicity is not sufficient. Hierarchical decomposition is a central feature of good designs, but a hierarchical design is not necessarily a good design.

        - Optimization. Sometimes the simplest possible way is too slow, and faster software is always better. So sometimes it's worthwhile to spend effort making software faster, though never actually optimal. Picking a better algorithm is generally the highest-impact thing you can do here when you can, but once you've done that, there are still a lot of other things you can do to make your software faster, at many different levels of composition.

        - Code reviews. Two people can build software much more than twice as fast as one person. One of the reasons is that many bugs that are subtle to their author and hard to find by testing are obvious to someone else. Another is that often they can improve each other's designs.

        - Regular expressions. Leaving aside the merits of understanding the automata-theory background, like SQL, regular expressions are in the category of things that can reduce a complicated page of code to a simple line of code, even if the most common syntax isn't very readable.

        - Compilers, interpreters, and domain-specific languages. Regular expressions are a domain-specific language, and it's very common to have a problem domain that could be similarly simplified if you had a good domain-specific language for it, but you don't. Writing a compiler or interpreter for such a domain-specific language is one of the most powerful techniques for improving your system's design. Often you can use a so-called "embedded domain-specific language" that's really just a library for whatever language you're already using; this has advantages and disadvantages.

        - Free-software licensing. If it works, using code somebody else wrote is very, very often faster than writing the code yourself. Unfortunately we have to concern ourselves with copyright law here; free-software licensing is what makes it legal to use other people's code most of the time, but you need to understand what the common licenses permit and how they can and cannot be combined.

        - Specific software recommendations. There are certain pieces of software that are so commonly useful that you should just know about them, though this information has a shorter shelf life and is somewhat more domain-specific than the stuff above. But the handbook should list the currently popular libraries and analogous tools applicable to building software.
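          As a tiny illustration of the regular-expressions point above (a made-up log-parsing task, not anything from the SWEBOK): a one-line pattern replaces the page of index-bookkeeping string code you'd otherwise write by hand.

          ```python
          import re

          # Pull "key=value" pairs out of a log line. Hand-rolled, this is a loop
          # with split/index bookkeeping; as a regex it's one compiled pattern.
          PAIR = re.compile(r"(\w+)=(\S+)")

          def parse_pairs(line):
              """Return a dict of the key=value pairs found anywhere in the line."""
              return dict(PAIR.findall(line))

          print(parse_pairs("GET /home user=alice status=200 ms=12"))
          # {'user': 'alice', 'status': '200', 'ms': '12'}
          ```

          Tokens without an "=" (like "GET" and "/home") simply don't match, which is exactly the kind of edge case the hand-rolled loop version tends to get wrong.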

        • kragen 3 days ago

          There are some people (such as the SWEBOK team) who would claim that software engineering shouldn't concern itself much with considerations like my list above. Quoting its chapter 16:

          > Software engineers must understand and internalize the differences between their role and that of a computer programmer. A typical programmer converts a given algorithm into a set of computer instructions, compiles the code, creates links with relevant libraries, binds†, loads the program into the desired system, executes the program, and generates output.

          > On the other hand, a software engineer studies the requirements, architects and designs major system blocks, and identifies optimal algorithms, communication mechanisms, performance criteria, test and acceptance plans, maintenance methodologies, engineering processes and methods appropriate to the applications and so on.

          The division of labor proposed here has in fact been tried; it was commonplace 50 or 60 years ago.‡ It turns out that to do a good job at the second of these roles, you need to be good at the stuff I described above; you can't delegate it to a "typical programmer" who just implements the algorithms she's given. To do either of these roles well, you need to be doing the other one too. So the companies that used that division of labor have been driven out of most markets.

          More generally, I question the SWEBOK's attempt to make software engineering so different from other engineering professions, by focusing on project-management knowledge to the virtual exclusion of software knowledge; the comparison is in https://news.ycombinator.com/item?id=41918011.

          ______

          † "Binds" is an obsolete synonym for "links with relevant libraries", but the authors of the SWEBOK were too incompetent to know this. Some nincompoop on the committee apparently also replaced the correct "links with relevant libraries" with the typographical error "creates links with relevant libraries".

          ‡ As a minor point, in the form described, it implies that there are no end users, only programmers, which was true at the time.

        • kragen 3 days ago

          I wrote:

          > Code Complete was pretty great at the time.

          Unfortunately it seems that Steve McConnell has signed the IEEE's garbage fire of a document. Maybe if you decide to read Code Complete, stick with the first edition.

    • Niksko 4 days ago

      Very interesting. Particularly their notion (paraphrasing) that SWEBOK attempts to record generally recognised knowledge in software engineering while excluding knowledge about more specific subdomains of software.

      That over-deference towards general knowledge coupled with some sort of tie to a similar Australian effort probably explains why the software engineering degree I began in Australia felt like a total waste of time. I remember SWEBOK being mentioned frequently. I can't say I've gotten terribly much value out of that learning in my career.

      • kanbankaren 4 days ago

        I am guessing that you didn't get value out of it because you didn't work in avionics, medicine, defense, etc.? Those are industries where a software fault is unacceptable and systems have to work for decades.

        In some industries like avionics and medical instruments, the programmer might be personally held responsible for any loss of life/injury if it could be proven.

        Having read Software Engineering and Formal Methods 25 years ago, I would say that the IEEE leans heavily towards treating SE as a profession.

        It is not going to be appealing to the crowd of enterprise developers who do Python, JavaScript, web development, etc.

        • Jtsummers 4 days ago

          > In some industries like avionics and medical instruments, the programmer might be personally held responsible for any loss of life/injury if it could be proven.

          If you aren't a PE, it's hard to hold you personally responsible unless they can show something close to willful, deliberate misbehavior in the development or testing of a system, even in avionics. Just being a bad programmer won't be enough to hold you responsible.

          • 9659 3 days ago

            If your software kills someone (by mistake), personal guilt is a punishment one never completes.

        • kragen 3 days ago

          The SWEBOK will not reduce the number or severity of software faults; it probably increases both.

    • shiroiushi 4 days ago

      >What is going on at IEEE?

      The IEEE has been a worn-out, irrelevant relic of the past for at least 2 decades now.

    • BJones12 4 days ago

      > software engineering has accepted as its charter "How to program if you cannot.".

      Is that supposed to be a negative? Isn't that the point of any profession? Like are any of these analogs negative?:

      Medicine has accepted as its charter "How to cure disease if you cannot."

      Accounting has accepted as its charter "How to track money if you cannot."

      Flight schools has accepted as its charter "How to fly if you cannot."

      • kragen 3 days ago

        Yes, because those would describe, respectively, faith healing, spending money whenever you happen to have bills in your pocket, and levitation through Transcendental Meditation, rather than what we currently call "medicine", "accounting", and "flight schools".

        "Software engineering" as currently practiced, and as promoted by the SWEBOK, is an attempt to use management practices to compensate for lacking the requisite technical knowledge to write working software. Analogs in other fields include the Great Leap Forward in agriculture and steelmaking, the Roman Inquisition in astronomy, dowsing in petroleum exploration, Project Huemul in nuclear energy, and in some cases your example of faith healing in medicine.

      • Exoristos 4 days ago

        I really don't think he means "cannot" in the sense of "presently don't know how," but more categorically--along the lines of chiropractic being the profession for those who cannot cure the way an MD can. I think it's an indictment of hackery.

  • jdlyga 4 days ago

    Yes, there’s been a lot of negativity toward earlier versions of this document. It’s been around for a while and represents a formal, structured, and rigid approach to software development. Historically, it reflects the 1990s era, when waterfall was the preferred method, and there was a push to make software engineering a licensed profession. At the time, we were also in a “software crisis” where large, expensive projects with extensive documentation and formalized planning often failed. Today, we have quicker releases, faster feedback, and more direct user communication (what we now call extreme programming or agile). However, this has also led to too much cowboy programming. So it’s important to maintain some level of standards. Might be worth revisiting?

    • nradov 4 days ago

      There was literally nothing in the SWEBOK which dictated a particular approach to software development, formal or otherwise. It has always been just a high-level list of knowledge areas and common definitions.

    • eacapeisfutuile 4 days ago

      Well also now entire companies fail quicker instead.

      It is useless because no one will read it or use it as any kind of benchmark, probably rightly so here. Every company has its own, more relevant, version of this, which of course also goes unread.

  • osigurdson 4 days ago

    For me "BOK" is correlated with the creation of false industry and certification.

  • TZubiri 4 days ago

    Taking a look at the index, a couple of issues which would make it not in my interest:

    1- Very heavy on process management. I'm not against studying it, but 70% is a lot. Also, it isn't a very objective area of knowledge; there are hundreds of ways to skin the pig, which probably explains why it takes up so much space.

    2- A whole chapter about architecture, which isn't very well regarded as an independent branch of computer science/systems engineering knowledge.

    3- Lastly, in the technical section, databases share a section with fundamentals like processes and operating systems?

    All in all it looks like a dictionary/checkbox built by a committee rather than a consistent perspective on software.

    And as a textbook it's got problems too, as mentioned: a doubtful taxonomy and too much weight on process engineering.

    Tl;dr: I'm getting Orange Eating 101 vibes https://youtu.be/pR6z-gm5_cY?t=38&si=F7knCRNcFET7nki7

    Anyways, my 2 cents, won't be reading it.

  • jongjong 4 days ago

    I've noticed that a lot of engineering books cover concepts which are useful in the hands of senior developers but are harmful in the hands of junior developers because they misunderstand the nuance due to lack of experience.

    It reminds me of the phrase "If all you have is a hammer, you tend to see every problem as a nail." But in the case of a thick design patterns book, it amounts to giving the junior dev a whole toolbox... The junior dev tends to assume that every problem they'll ever face requires them to use one of the tools from that toolbox... But reality isn't so neat; you rarely use a specific tool in its native form as it's described. They are conceptual tools which need to be adapted to various situations, and they are only really useful in highly complex scenarios which can often be avoided entirely.

    The one piece of advice which is closest to 'universal' which I can give about software development is this:

    "You should be able to walk through and describe any feature provided by your code using plain English, in such a way that a non-technical listener would gain a complete understanding of what the code is doing but without having to dive into functions whose implementation details would exceed the listener's mental capabilities."

    When someone talks me through some feature they developed and they start jumping around 10 different files 80% of which have technical names which they invented and they keep having to define new concepts just to explain what the code is doing at a high level; this is a red flag. It shows that their code is over-engineered and that they haven't fully made sense of what they're doing in their own mind.

    My philosophy is basically a variant of "Rubber duck debugging" as proposed in the book "The Pragmatic Programmer" by Andrew Hunt and David Thomas... But you imagine the duck to have the intellect and knowledge of a non-technical middle-manager, and you use the approach during design and development; not just debugging.

  • rockemsockem 4 days ago

    I would love to hear someone that actually works in a place that requires credentials like the PE comment on having a course based on this as a credential.

    There are way too many software engineers with lofty ideas about how physical engineers can magically know all the answers to all the problems they could ever have.

    • wheelinsupial 3 days ago

      > having a course based on this as a credential.

      I'm assuming you mean a single course? If so, this material would not be a standalone course. It would be baked into the entire bachelor's degree program. Some of the topics would maybe be more advanced or something that need to be demonstrated by being an EIT or writing the appropriate exams. For example, chapter 15 on engineering economics is a single class, but chapter 17 on mathematical foundations would cover at least 4 classes (discrete math, differential calculus, integral calculus, probability).

      The US of A did have a software engineering principles and practices of engineering (PE) exam, but it's been discontinued, and I haven't managed to find an archived snapshot of the exam spec. I'm not American, but I think there is a common fundamentals of engineering (FE) exam [1] that has to be written to register as an EIT and then the PE [2] has to be written to be licensed and given the PE.

      I'm not familiar with which American schools were ABET accredited in software engineering, but in Canada, several schools do have accredited software engineering majors. You can review the curriculum and see a fair amount of alignment to the SWEBOK topics. Again, some of these chapters could be split across multiple courses, but some chapters look more like a couple of weeks in one class.

      For comparison, there is a 61 page industrial and systems engineering body of knowledge [3] available from the IISE (Institute of Industrial and Systems Engineers), which is really just a couple short paragraphs on each topic, a list of key areas within each, and a list of reference books. At a quick glance, all of the areas correspond to sections in the industrial FE [4] and the industrial PE [5].

      > There are way too many software engineers with lofty ideas about how physical engineers can magically know all the answers to all the problems they could ever have.

      I'm not an engineer. I did an associates in engineering technology in Canada, so I'm a "pretengineer" at best. As far as I know, engineers in Canada have a discipline and then areas of practice. For industrial, there are 9 different areas of practice, but people are generally licensed to practice in 1 to 3.

      In my region, software is not even broken out into its own areas of practice. Software is an area within computer engineering. I think software is way too vast right now and the expectations are much too big. So, the traditional engineers have much more limited scope problems. But I could be limited by my perspective and lack of license.

      [1] https://ncees.org/exams/fe-exam/

      [2] https://ncees.org/exams/pe-exam/

      [3] https://www.iise.org/Details.aspx?id=43631

      Links to PDFs

      [4] https://ncees.org/wp-content/uploads/2022/09/FE-Industrial-a...

      [5] https://ncees.org/wp-content/uploads/2024/10/PE-Ind-Oct-2020...

      • rockemsockem 2 days ago

        In the US most engineers do not have the PE and many (might be most, but I couldn't find the numbers) do not take the FE exam.

        In the States most people working on software have computer science degrees, not software engineering degrees.

        In general there is not a huge amount of certification overhead for engineers in the US, despite what some people seem to think, and I'm quite happy that's the way it is.

        I was mostly looking for perspectives on how requiring credentials like these impacted actual work and workplaces.

  • abtinf 4 days ago

    300 pages* is perhaps not quite the length one would expect for Version 4.0 of such an ambitious undertaking.

    * Actual page count less front/back matter and a rough guess at pages of matrices and references.

  • elseweather 3 days ago

    this is a book for people who think the wizard on the cover of SICP is an actual wizard and get scared

  • mixmastamyk 3 days ago

    A lot of scathing critiques in here, but they remind me of the parable of the blind folks and the elephant. https://en.wikipedia.org/wiki/Blind_men_and_an_elephant

        The parable of the blind men and an elephant is a story of a group of blind
        men who have never come across an elephant before and who learn and imagine
        what the elephant is like by touching it. Each blind man feels a different
        part of the animal's body, but only one part, such as the side or the tusk.
        They then describe the animal based on their limited experience and their
        descriptions of the elephant are different from each other. In some
        versions, they come to suspect that the other person is dishonest and they
        come to blows. The moral of the parable is that humans have a tendency to
        claim absolute truth based on their limited, subjective experience as they
        ignore other people's limited, subjective experiences which may be equally
        true.[1][2] The parable originated in the ancient Indian subcontinent, from
        where it has been widely diffused.
    
    
    So section 2 does not jibe with enterprise development, section 3 does not agree with embedded development, and section 8 does not fit well with web startups? These three industries have different requirements, not just from each other but from controllers for skyscrapers and space probes too.

    This document is trying to find common ground, and so will offend folks in camp X, Y, or Z that their case is not handled well enough. Please be mindful that other sets of requirements exist.

  • JanSt 4 days ago

    If you really want someone interested in software development to run away, hand them books like this one.

    • epolanski 4 days ago

      This is meant for engineers, a certification-like body of the core knowledge of the field.

    • NotGMan 4 days ago

      This was my first thought.

      If this ever starts to get taught in CS university courses, the number of devs would dramatically shrink due to trauma.

      • epolanski 4 days ago

        Software quality may increase though, because there's a desperate lack of solid engineering practices across the industry.

  • ofou 4 days ago

  • ctz 4 days ago

      Runtime errors surface when a program
      runs into an unexpected condition or situation
      such as dividing by zero, memory overflow, or
      addressing a wrong or unauthorized memory
      location or device, or when a program tries to
      perform an illegitimate or unauthorized operation
      or tries to access a library, for example.
      The programs must be thoroughly tested for
      various types of inputs (valid data sets, invalid
      data sets and boundary value data sets) and
      conditions to identify these errors. Once identified,
      runtime errors are easy to fix.
    
    Embarrassing horseshit.

    • dustfinger 4 days ago

      > or tries to access a library

      I had to open the PDF and find this line to confirm. It really says that. It reads as if claiming that any program that accesses a library will then have a runtime error. That is obviously not what they intended, but I have read it over a few times now and cannot see another interpretation.

      • TrueDuality 4 days ago

        That line is referring to shared libraries linked to a dynamic executable. If a shared library isn't installed or available you will receive an error similar to the following:

            $ ./main
            ./main: error while loading shared libraries: librandom.so: cannot open shared object file: No such file or directory 
        
        Which is indeed a runtime error.

        There is also the common use case of hot-reloading compiled code which dynamically loads and unloads shared libraries without restarting the entire application. It needs to be done carefully but you've likely used an application that does this. Failure to load a library, or loading an incompatible one will also create a runtime error.
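        The failed-load case is easy to reproduce from inside a running program too. A minimal Python sketch (the library name is made up and assumed absent):

```python
import ctypes

# Loading a shared library that isn't installed is a genuine runtime
# error: the OS loader fails and ctypes raises OSError.
# "libdoesnotexist.so" is a hypothetical name, assumed not to exist.
try:
    ctypes.CDLL("libdoesnotexist.so")
except OSError as err:
    print("runtime error loading library:", err)
```

        On a system where the name happens to resolve, the call would of course succeed, which is exactly why the wording needs a "fails to" qualifier.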

        It looks like there are a lot of bad generalizations in here, but they are technically correct on this one.

        • dustfinger 4 days ago

          I understand that is what they meant, but it is not what they wrote. They should have qualified the statement as you did, with the conditions that yield a runtime error. Even if they generalized those conditions that would have been fine.

          for example:

            or tries to access a library that it is unable to, for some reason, without handling the resulting error.
          • wakawaka28 4 days ago

            Ah, but the existence of a suitable library in the environment is assumed. So not having it is an unexpected condition.

          • eacapeisfutuile 4 days ago

            So… most likely it is a “typo” or edit-mistake or similar. If you understand what they meant then all is good, no?

        • stonemetal12 4 days ago

          Perhaps if they said "tries and fails to access a library". Merely attempting to access (possibly successfully) a library is not an error.

    • wakawaka28 4 days ago

      What exactly is wrong with it? That is a definition fit for someone who does not have prior knowledge of what a runtime error is. It might be boring to us, and I might word it a little differently, but it's fine.

      • miffy900 4 days ago

        For me it's the last sentence: "Once identified, runtime errors are easy to fix." Well, no, not really - it depends on the issue; sometimes a solution isn't 'fixing' it, and sometimes choosing what to do next after identifying the root cause is its own task. Maybe it's working around it, like amending the return value with corrected data, or patching the API itself by wrapping it in an intermediate API and swallowing exceptions. Guidance on addressing runtime errors should never presuppose that 'fixing' it will be easy - it will always depend on the context. Just get rid of that last sentence and it's a better statement.

        Imagine instead it's for criminal defense lawyers: re-word it to be advice for attorneys defending their client against a prosecution. "Once [all exculpatory evidence against your client] is identified, defending them against a guilty verdict is easy to do"

        It sounds like it's written by someone who has never practiced in the real world.

        • wakawaka28 4 days ago

          I think it should say runtime errors "are usually easy to fix" because they are usually a result of simple logical errors or technical issues. But you're right, it does not account for all possibilities.

      • kragen 3 days ago

        A lot of things are wrong with it.

        > Runtime errors surface when a program runs into an unexpected condition or situation such as dividing by zero,

        This implicitly asserts that dividing by zero is always unexpected, that all runtime errors are unexpected, and that dividing by zero always causes a runtime error. None of these are true. Dividing by zero is very frequently not unexpected (it's utterly commonplace in 3-D graphics, for example, when a vertex happens to be in the camera's focal plane) and in floating-point math (standardized by, ironically, the IEEE) it doesn't generate runtime errors by default. Moreover, there are certain applications where runtime errors are also expected, such as in fuzzers. Some languages frequently use their runtime error mechanism to convey perfectly mundane conditions such as not finding a key in a hash table or coming to the end of a sequence. And it's fairly often a useful mechanism for terminating a recursive search procedure.
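        That last point is trivial to demonstrate. A quick Python sketch (Python being one of the languages that conveys these mundane conditions through its runtime-error mechanism):

```python
# A missing hash-table key and the end of a sequence are both conveyed
# via exceptions, and both are routinely caught as perfectly ordinary
# control flow rather than treated as bugs.
d = {}
try:
    d["absent"]
except KeyError:
    value = None          # key not found: expected and handled

try:
    next(iter([]))
except StopIteration:
    done = True           # end of sequence, not an error in any real sense
```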

        > memory overflow,

        There's no such thing as a "memory overflow" on any platform I'm familiar with. This might have been intended to reference any of five things: stack overflows, buffer overflows, heap allocation failures, resource limit exhaustion, or OOM kills.

        The stack is an area of memory, and a "stack overflow" is an attempt to put too much data in it, usually due to an infinite recursive loop. Stack overflows do often get detected as runtime errors, but many platforms fail to detect them, causing incorrect program behavior instead.

        A buffer is also an area of memory (an array) and a "buffer overflow" is an attempt to put too much data in it, usually due to an omitted check on input data size. Buffer overflows do sometimes get detected as runtime errors, but often do not, because the most popular implementations of many popular programming languages (notably C, C++, and Pascal) do not do automatic bounds checking.

        Heap allocation failures are attempts to dynamically allocate memory that are denied, often because too much memory is already allocated, so in a sense your heap memory has "overflowed". Some programming languages handle this as a runtime error, but the most popular implementations of many popular programming languages (notably C, C++, and Pascal) do not.

        Resource limit exhaustion is a condition where a program exceeds operating-system-imposed limits on its usage of resources such as CPU time, output file size, or memory usage, in response to which the OS raises a runtime error. On Unix this manifests as killing the process with a signal.

        OOM kills are a Linux thing where the OS starts killing resource-intensive processes to attempt to recover from a memory overload condition that is degrading service badly. In a sense the computer's memory has "overflowed", so Unix's runtime error mechanism is pressed into service. (Something very similar happens if you press ^C, though, so describing this as a "runtime error" is maybe not defensible.)

        > or addressing a wrong or unauthorized memory location or device,

        The standard verb here would be "accessing", not "addressing", but I don't want to make too much of the use of nonstandard terminology; it's not always an indicator of unfamiliarity with the field. And there are indeed machines where there are kinds of "wrong" other than "unauthorized"—that's why we have SIGBUS, for unaligned access exceptions on CPUs that don't support unaligned access.

        But "device" is, as far as I know, wrong; I don't know of any platform which raises a runtime error when you try to access the wrong device. There are a number of platforms that have I/O instructions which will raise a runtime error if you try to use them in unprivileged code, but that doesn't depend on which device you're accessing. Usually in small microcontrollers there isn't any kind of runtime-error mechanism, and when there is, it generally isn't invoked for I/O instructions. Sometimes you can set up the MPU so that certain code can't access devices—when they're memory-mapped, in which case "memory location" would have covered the case.

        Even if there is some platform where it's possible to get a runtime error accessing the wrong device, it's far from the kind of common case you'd want to include in such an abbreviated list of typical causes of runtime errors "for someone who does not have prior knowledge of what a runtime error is". Much more typical would be illegal instructions, attempts to access privileged state from unprivileged code, null pointer dereferences, runtime type errors such as ClassCastException, and array bounds violations, none of which are mentioned.
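        For contrast, the actually-typical causes are easy to exhibit. A quick Python sketch using that language's analogues (AttributeError for a null-pointer-style dereference, TypeError, IndexError):

```python
# Python analogues of the common runtime errors listed above.
try:
    None.read()        # null-pointer-style dereference
except AttributeError:
    pass

try:
    "3" + 4            # runtime type error
except TypeError:
    pass

try:
    [1, 2, 3][99]      # array bounds violation
except IndexError:
    pass
```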

        It seems like someone who doesn't know anything about software was trying to imagine what kinds of things might, in theory, cause runtime errors, but the kinds of protection mechanisms they imagined were completely different from the ones used in actual existing computers.

        > or when a program tries to perform an illegitimate or unauthorized operation

        As a simple matter of logic, this subsumes the previous item, and "illegitimate" in this context means the same thing as "unauthorized". So this list is not just comprehensively wrong at a technical level, but also so badly written as to be incoherent.

        > or tries to access a library, for example.

        This is completely wrong. Of course a program can get a runtime error when it tries to load a library, just as it can get a runtime error when it undertakes any other task, but there is no platform that raises a runtime error whenever you try to access a library.

        > The programs must be thoroughly tested for various types of inputs (valid data sets, invalid data sets and boundary value data sets) and conditions to identify these errors.

        This statement, in itself, only contains a couple of slight factual errors.

        First, it says that the inputs to programs are data sets. This is probably something that someone who doesn't know anything about software copied from a 60-year-old textbook, because at that time, the inputs to programs were data sets (now more commonly known as files). In fact, the more important inputs to programs nowadays are things such as user interactions, data received over a network, and data received in real time from sensors such as microphones, oxygen sensors, and accelerometers. But this statement is stuck in punched-card land, where programs punched into card decks ran in batch mode to process some input data sets and produce some output data sets.

        The second factual error is that it implies that runtime errors result primarily from inputs and how the program responds to them. And, of course, there are runtime errors that are determined by the program's handling of inputs; typically type errors, null pointer exceptions, array bounds violations, etc., can be reproduced if you run the program again with the same input. But large classes of runtime errors do not fit this profile. Some are errors that depend on internal nondeterminism in the program, such as nondeterministic interactions between different threads (we would say "scheduler interleaving order" back in the single-core days), or nondeterministic timing. Others depend on runtime conditions like how much memory is available to allocate or how full the disk is (writing to a full disk in Python raises an exception, though not in many other languages). Others result from hardware problems such as overheating. The statement does, in passing, say "and conditions", but these are given short shrift.

        The much bigger problem with this statement, though, is that it's in the middle of a paragraph about runtime errors. But everything it says is equally applicable to logic errors, the subject of the next paragraph; indeed, much of it is more applicable to logic errors. Putting it in the middle of this paragraph misleadingly implies that things like testing with invalid input files (or the other, more important, invalid inputs it failed to mention) are uniquely or primarily applicable to runtime errors.

        > Once identified, runtime errors are easy to fix.

        This statement is also wrong in a multidimensional way.

        First, although this is in the "debugging" section, many runtime errors aren't bugs at all. A program crashing with a MemoryError or OutOfMemoryError when it is run with insufficient memory available to allocate is, in general, desired behavior. It's also often the desired behavior when it's applied to a too-large input, though obviously that's not acceptable for games or avionics software. The same is true of permission errors, out-of-disk-space errors, etc. These are specifically the kind of runtime errors that most commonly result from the "and conditions" in the previous sentence.

        Second, while it's usually easier to debug a segfault or exception than silently incorrect output, it is often far from trivial to track down the runtime error to its ultimate cause.

        Third, even when you have figured out what the ultimate cause is, fixing it is sometimes not easy, for example because other people's deployed software depends on interfaces you have to change, or more generally because it is difficult to understand what other problems could be caused by the possible fixes you are considering.

        By my count, that's 12 significant factual errors in 86 words, an average of 7.2 words per error. This is an error density that would be impressive in any context, although I've occasionally seen expert liars exceed it somewhat. In the context of an official publication purporting to establish standards for professional competence, it's utterly horrifying.

        So this is not a definition fit for anyone, least of all someone who does not have prior knowledge of what a runtime error is; it is very far from being fine. It's embarrassing horseshit.

        And it's not just this one paragraph. Virtually the entire document is like this. Occasionally you'll find a paragraph that's mostly correct from beginning to end, but then it'll be followed by the cringiest idiocy you can possibly imagine in the very next paragraph.

        With respect to this paragraph in particular, an even worse problem than its unreliability is emphasis. This is from §4.6, "Debugging", which is 189 words, about a quarter of p. 16-13 (p. 326/413). Debugging is arguably the central activity of software engineering; about a quarter of the effort† spent on software engineering is spent on debugging, an amount which would be larger if we didn't spend so much effort on avoiding debugging (for example, with type systems, unit tests, version control, and documentation). Any actual software engineering body of knowledge would be largely concerned with knowledge about debugging. But the IEEE relegates it to 0.2% of their document.

        ______

        † in https://www.quora.com/In-a-typical-software-engineering-comp... Raja Nagendra Kumar says 80% in IT services and typically 20–50% in "product companies", and Gene Sewell says ⅓ of the time; in https://www.quora.com/How-much-time-does-a-programmer-spend-... Ben Gamble says 5–70% of each day and Stephen Bear says ideally about 10% if you're doing everything else right; https://old.reddit.com/r/learnprogramming/comments/1eclw4l/i... complains about spending 60–70% of their time debugging; https://old.reddit.com/r/computerscience/comments/lwemi5/is_... quotes the Embedded Market Survey as saying 20%, while other people report numbers as high as 80%; https://craftbettersoftware.com/p/debugging-the-biggest-wast... says 50%, referencing https://www.researchgate.net/publication/345843594_Reversibl.... So I think "a quarter" is a pretty reasonable ballpark.

        • mixmastamyk 3 days ago

          Crikey. Nitpicking a single sentence (in a document of hundreds of pages) to the degree you just did is not nearly as useful as you seem to think.

          Sometimes a sentence is a little too general, or too specific. It happens often and we all know that. Indeed, in this short post I've probably already committed that sin. 100% guaranteed you made several errors (especially of degree) in the post above.

          • kragen 3 days ago

            There's an enormous difference between "sometimes a sentence is a little too general, or too specific" and making 12 serious factual errors in 86 words. My comment is 1944 words, at least according to Emacs; if it had the same error density, it would have not just several errors (though I note you were unable to find any!) but 271 serious ones.

            You did make one error in your comment, though; when you said "Nitpicking a single sentence", you implied that my comment only dissected the errors in a single sentence, rather than an entire paragraph.

            When text is carefully drafted by competent people, it is impossible to "nitpick" it to the degree I just did. It is very rare to find something either as error-filled or as badly written as this paragraph. The fact that the document is hundreds of pages long makes the situation far worse, not better; all of those hundreds of pages seem to be of the same appalling quality.

            • mixmastamyk 3 days ago

              Only skimmed your comment. Because swebok and your scathing critique are not important enough. "The lady doth protest too much, methinks."

              • kragen 3 days ago

                What, you think I wrote the SWEBOK? Or maybe you just don't understand the Shakespeare you're quoting any more than you understood what I was saying in the first place.

          • kragen 2 days ago

            In https://news.ycombinator.com/item?id=41918401 you admit that you only skimmed my comment; that's why you mistook it for nitpicking. Consequently, your response is wholly incorrect.

        • wakawaka28 3 days ago

          You're reading WAY too much into it. Sometimes things are meant to be simplified, and this is one of those cases. We are essentially talking about a brief description of something in a textbook.

          >This implicitly asserts that dividing by zero is always unexpected, that all runtime errors are unexpected, and that dividing by zero always causes a runtime error. None of these are true.

          You are assuming basically all of that stuff. Dividing by zero is almost always an error, is usually unexpected, and is a suitable example of a runtime error. Just because it may not be unexpected sometimes does not make it an unsuitable example. If I said "Taxable events surface when a person conducts a taxable transaction such as sale of goods or labor," that would not in any way imply that all sales of goods or labor are taxable. It is implied in the sentence that the examples refer to common errors.

          If you talked to 100 programmers and asked them to give an example of a common runtime error, I think conservatively at least 95 of them would spit out "division by zero" or "stack overflow". The audience of this book may conceivably not know what a stack is yet, so "memory overflow" conveys that you have used more memory than expected.

          I don't have enough patience right now to shred the rest of your "analysis" but I can tell that you're the type of person who used to get pissed off at textbooks in school because they never mention air resistance. You probably got pissed off at that sentence because you once saw a textbook that did say "ignore air resistance" lol

          • kragen 3 days ago

            Well, you asked what was wrong with it, and I answered. If you don't bother to read the answer to the question you asked, that's on you.

            I do agree that a division-by-zero exception is a good example of a runtime error, and—as I said in the comment you are purportedly responding to!—that the examples are intended to refer to common errors. (However, most of them fail to do so, evidently due to the ignorance of the authors.)

            I would get pissed off at physics textbooks that said air resistance didn't exist, but I haven't ever seen such a bad physics textbook—and I've seen some pretty bad physics textbooks!

            The rest of your comment is completely incorrect, and the personal attacks in your comment do not rise to the level of discourse desired on this site. It seems like you missed the main points of my comment, in several cases responding to my reasoned arguments with simple contradiction, and are unaware of the intended audiences of the SWEBOK.

            Simplification is not only fine but a sine qua non for high-level summaries of a field. And simplifications are always in some sense erroneous. But the objective of simplification in a summary is to lead the reader toward the truth, even if you can't quite reach it—you can formulate the ballistics ODEs including air resistance much more easily once you've learned to handle the simplified version without. However, when someone doesn't understand the field, they often produce a "simplification" that includes lots of incorrect unnecessary detail and which is broadly misleading, and that is what happened in this case, as I explained in detail in my comment above. Someone who knew nothing about runtime errors would know less than nothing about them after reading that "simplification".

            And the document goes on for hundreds of pages at this atrocious quality level.

            • wakawaka28 2 days ago

              I didn't miss anything you said. I did read your comments and frankly I don't have the time to address the level of banality therein. My position is that you are erecting and burning an elaborate straw man based on the least reasonable reading of a general statement, and you don't understand who the audience is. If someone has to be told what a runtime error is, that means they don't necessarily know it. That definition is totally adequate for the purpose of conveying the concept.

              You accuse me of being personal but your comments are some of the most pompous crap I've read in ages.

              >And the document goes on for hundreds of pages at this atrocious quality level.

              Oh so you read the whole thing? Lol

              • kragen 2 days ago

                It sounds like you aren't very clear on what a "runtime error" is yourself, or what role they play in software development, which I guess is why you didn't see anything wrong with the ostensible definition in the SWEBOK. You could probably learn a lot from reading my longer comment carefully enough to understand it.

                The audience of the SWEBOK is not only, or even primarily, learners. This is explained in the preface.

                Thinking my comments are pompous—because they're written in a more formal register than you're accustomed to reading, I suppose—is no excuse for your invective.

                I didn't read the whole document, but, as you can see from https://news.ycombinator.com/item?id=41916612, I did generate a fair random sample of 10 half-pages of it and read and summarize them. All of them were similarly incoherent and full of serious errors of fact that can only be attributed to profound incompetence. That's an adequate basis on which to conclude that the whole thing is bad. Moreover, everyone else in this thread commenting on specific parts of the document that they've read has similarly damning comments. (Except you, but at this point I have, I think, ample reason to discount your opinion.)

    • booleandilemma 3 days ago

      Software development is one of those things that's easy until you have to actually sit down and do it.

    • prewett 4 days ago

      I suppose, yeah, when I figure out the exact reason why the error occurred, fixing it is often easy. The finding, however, can often take a long time, which leaves me feeling "once you have driven across the continent, finding your desired endpoint is a quick process". "Once constructed, installing the door on your new house is a simple and easy task". Well, sure, if you take out the time-consuming parts, it's quick and easy.

      Except for all those runtime errors where the fix required major refactoring, or the problem is in a third party library, the OS, or some other unchangeable component.

    • TZubiri 4 days ago

      Reads like chatgpt, or those insufferable linkedin autogenerated ai questions:

      "How would you secure a system?"

      • miffy900 4 days ago

        This is the embarrassing bit; this is version 4 of a book that predates ChatGPT by years; knowing that probably most of it is written by humans makes me cringe.

  • aste123 4 days ago

    Yes