The Tears of Donald Knuth (2015)

(cacm.acm.org)

59 points | by todsacerdoti 10 hours ago

40 comments

  • oh_my_goodness 5 hours ago

    What a weird, bitchy article. Knuth might be wrong but I gave up.

    • the-grump 4 hours ago

      Same reaction. I can't even say whether the author is right or wrong because I couldn't get through it.

      • benreesman 3 hours ago

        Then say nothing. If you know nothing, say nothing.

    • musicale 3 hours ago

      Here is my TL;DR and interpretation:

      1. Knuth laments the lack of technical ("internal") history of computing, which traces the evolution of technology and ideas, and should be of great interest and benefit to practitioners.

      2. Historians typically focus on their domains of expertise - social history, culture, economics, politics, personalities, etc. - and tend to write non-technical ("external") history of computing.

      3. The people who have the relevant technical expertise - practitioners, researchers, and scholars within the computing field - are qualified (in terms of technical understanding at least) to write this technical history, but have basically zero economic incentive to do so. There is no reward for industry practitioners to write the technical history of computing, and there is little to no reward for computing researchers or scholars either. And of course if one is (or becomes) an expert in computing, there is no economic incentive to become (or remain) a historian.

      4. Nonetheless, there is in fact a small (and hopefully growing) group of scholars who seem to be interested in investigating the technical history of computing (and, according to the author, "holistic" history that includes multiple aspects).

      • musicale 3 hours ago

        I tend to agree with Knuth - technical history is extremely valuable to both practitioners and researchers in computing, and there isn't enough of it.

        While it is understandable that computing practitioners and researchers want to look forward to the next "new" thing rather than backward to "old" things, ignoring computing history means that we are often reinventing the wheel, repeating old mistakes, etc., all while lacking an understanding of how and why things are the way they are today. And perhaps missing out on a great deal of fun and intellectual engagement as well.

        Fortunately there is some activity in writing up and analyzing the technical history of computing, and I certainly appreciate the work of the CHM, journals like the Annals of the History of Computing, the work of retrocomputing hobbyists, and the work of the scholars mentioned in the article. But (as the article notes) there are few economic and career incentives - in history or in computing - to produce this important work.

        The article validates Knuth with these statements:

        > For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.

        > Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists

        > Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it.

        The second part of this last sentence isn't wrong, but sidesteps the first point. One might similarly criticize history departments for failing to reward or commission technological literacy.

    • benreesman 3 hours ago

      I consider it a public service to call you a fucking idiot with six-sigma falsification failure.

      https://github.com/b7r6/cassandra-dissertation

  • atomic128 6 hours ago

    Imagine Knuth's heartbreak when he sees how LLMs have perverted the practical application of the art of computer programming. ("The LLM understands so I don't have to.") It's sad it happened during his lifetime. Has he commented on the topic? Anyone have a link?

    • simonw 6 hours ago

      https://cs.stanford.edu/~knuth/chatGPT20.txt is a conversation between Knuth and Wolfram about GPT-4.

      > I find it fascinating that novelists galore have written for decades about scenarios that might occur after a "singularity" in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost.

      > I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same.

      • ghssds 5 hours ago

        > not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost.

        Aren't Asimov's Multivac stories basically this? Humans build a powerful computer with a conversational interface helping them do all kinds of science and stuff, then before they know it they become Multivac's pets.

      • sethev 5 hours ago

        I don't know why, but it makes me smile that he did this experiment by having a grad student type the questions into ChatGPT and copy the results.

      • atomic128 5 hours ago

        That's related. Thank you for posting it.

        But what does Knuth think of "vibe coding" or "agentic coding"?

        What does he think of "The Dawn of the Dark Ages of Computer Programming"?

        • jacquesm 5 hours ago

          I don't think Knuth needs to stoop that low. He actually knows what he's doing.

      • rramadass 4 hours ago

        That link is great!

        Knuth has a beautiful way of writing systematically (as can be expected of the inventor of "Literate Programming").

    • johngunderman 6 hours ago

      While I can't speak for Knuth, I have been reflecting on the fact that developing with a modern LLM seems to be an evolution of the concept of Literate Programming that Knuth has long been a proponent of.

      What is the rationale behind the assertion that Knuth would be so fundamentally opposed to the use of LLMs in development?

      • atomic128 6 hours ago

        I don't see the connection.

        In literate programming you meticulously write code (as usual) but present it to a human reader as an essay: as a web of code chunks connected together in a well-defined manner with plenty of informal comments describing your thinking process and the "story" of the program. You write your program but also structure it for other humans to read and to understand.
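        For a concrete (invented) illustration, a noweb-style chunk in the woven essay might look like this, with the prose and the code chunk interleaved (the chunk names and code here are made up):

            @ To find the largest item we scan the list once, carrying
            the best value seen so far. The caller guarantees the list
            is non-empty (see <<validate the input>>).

            <<find the maximum>>=
            best = items[0]
            for x in items[1:]:
                if x > best:
                    best = x

        The "tangle" step extracts ordinary source code from the chunks; the "weave" step produces the essay, in which each chunk appears where the narrative, not the compiler, needs it.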

        LLM software development tends to abandon human understanding. It tends to abandon tight abstractions that manage complexity.

        • rixed 3 hours ago

          Have you ever tried literate programming? In literate programming you do not write the code then present it to a human reader. You describe your goal, assess various ideas and justify the chosen plan (and oftentimes change your mind in the process), and only after, once the plan is clear, you start to write any code.

          Thus the similarity with using LLMs. Working with LLMs is quicker though, not only because you do not write the code, but also because you don't care much about the style of the prose. On the other hand, the code has to be reviewed, debugged, and polished. So, YMMV.

          • phba an hour ago

            > In literate programming you do not write the code then present it to a human reader. You describe your goal, assess various ideas and justify the chosen plan (and oftentimes change your mind in the process), and only after, once the plan is clear, you start to write any code.

            This is not literate programming. The main idea behind literate programming is to explain to a human what you want a computer to do. Code and literate explanations are developed side by side. You certainly don't change your mind in the process (lol).

            > Working with LLMs is quicker though

            Yes, because you neither invest time into understanding the problem nor conveying your understanding to other humans, which is the whole point of literate programming.

            But don't take my word, just read the original.[1]

            [1] https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...

      • jacquesm 5 hours ago

        It couldn't be further away from Literate programming. If anything we should call it illiterate programming.

      • nz 4 hours ago

        The irony is that if we had been writing literate programs instead of "normal" programs, from 1984 to 2026, then LLMs may actually have been much better at programming in 2026 than they turned out to be. Literate programs entwine the program code with prose explanations of that code, while also cross-referencing all dependent code of each chunk. In some sense they make fancy IDEs and editors and LSPs unnecessary, because it is all there in the PDF. They also separate the code from the presentation of the code, meaning that you don't really have to worry about the small layout details of your code. They even have aspects of version control (Knuth advocates keeping old code inside the literate program, and explaining why you thought it would work and why it does not, and what you replaced it with).

        LLMs do not bring us closer to literate programming any more than version-control-systems or IDEs or code-comments do. All of these support-technologies exist because the software industry simply couldn't be disciplined enough to learn how to program in the literate style. And it is hard to want to follow this discipline when 95% of the code that you write, is going to be thrown away, or is otherwise built on a shaky foundation.

        Another "problem" with literate programming is that it does not scale by number of contributors. It really is designed for a lone programmer who is setting out to solve an interesting yet difficult problem, and who then needs to explain that solution to colleagues, instead of trying to sell it in the marketplace.

        And even if literate programming _did_ scale by number of contributors, very few contributors are good at both programming _and_ writing (even the plain academic writing of computer scientists). In fact Bentley told Knuth (in the 80s) that "2% of people are good at programming, and 2% of people are good at writing -- literate programming requires a person to be good at both" (so only about 0.04% of the adult population would be capable of doing it).

        By the way, Knuth said in a book (Coders at Work, I believe): "If I can program it, then I can understand it." The literate paradigm is about understanding. If you do not program it, and if _you_ do not explain the _choices_ that _you_ made during the programming, then you are not understanding it -- you are just making a computer do _something_, that may or may not be the thing that you want (which is fine, most people use computers in this way: but that makes you a user and not a programmer).

        When LLMs write large amounts of code for you, you are not programming. And when LLMs explain code for you, you are not programming. You are struggling to not drown in a constantly churning code-base that is being modified a dozen times per day by a bunch of people, some of whom you do not know, many of whom are checked out and are trying to get through their day, and all of whom know that it does not matter because they will hop jobs in one or two or three years, and all their bad decisions become someone else's problem.

        Just because LLMs can translate one string of tokens into a different string of tokens while you are programming does not make them "literate". When I read a Knuthian literate program, I see not a description of what the code does, but a description of what it is supposed to do (and why that is interesting), and how a person reasoned his/her way to a solution, blind alleys and all. The writer of a literate program anticipates the next question before I even have it, anticipates what might be confusing, and phrases it in a few ways.

        As the creator of the Axiom math software said: the goal of Literate Programming is to be able to hire an engineer, give him a 500-page book that contains the entire literate program, send him on a 2-week vacation to Hawaii, and have him come back with the whole program in his head. If anything, LLMs are making this _less_ of a possibility.

        In an industry dominated by deadline-obsessed pseudo-programmers creating for a demo-obsessed audience of pseudo-customers, we cannot possibly create software in a high-quality literate style (no, not even with LLMs, even if they got 10x better _and_ 10x cheaper).

        Lamport (of Paxos, Byzantine Generals, Bakery Algo, TLA+), made LaTeX and TLA+, with the intent that they be used together, in the same way that CWEB literate programs are. All of these tools (CWEB, TeX, LaTeX, TLA+), are meant to encourage clear and precise thinking at the level of _code_ and the level of _intent_. This is what makes literate programs (and TLA+ specs) conceptually crisp and easily communicable. Just look at the TLA+ spec for OpenRTOS. Their real time OS is a fraction of the size that it would have been if they had implemented it in the industry-standard way, and it has the nice property of being correct.

        Literate Programming, by design, is for creating something that _lasts_, and that has value when executed on the machine and in the mind. LLMs (which are being slowly co-opted by the Agile consulting crowd), are (currently) for the exact opposite: they are for creating something that is going to be worthless after the demo.

        • mcswell 3 hours ago

          > LLMs do not bring us closer to literate programming...

          Without saying that I agree with the person you're responding to, and without claiming to really know what he was saying, I'll say what I think he was suggesting: That a human could do the literate part of literate programming, and the LLM could do the computing part. When (inevitably) the LLM doesn't write bug-free code snippets, the human revises the literate part, followed by the LLM revising the code part.

          And of course there would be a version control part of this, too, wherein both the changes to the literate part and the changes to the code parts are there side-by-side, as documentation of how the program evolved.

        • WD-42 an hour ago

          This is meta so sorry about not actually responding, but thank you for a very well written comment. In this time of slop and rage it's really refreshing to see someone take the time to write (long form for a comment) about something they are clearly knowledgeable and passionate about.

    • asddubs 5 hours ago

      You might enjoy this video:

      https://www.youtube.com/watch?v=Y65FRxE7uMc

      The connection to Knuth is tangential to the actual video subject, but it does contrast Knuth to LLMs as a framing device.

    • seanmcdirmid 6 hours ago

      He is still alive (I think?), so you could just ask him. I doubt he is sad as much as he is excited. Computer scientists are not SWEs worried about losing their careers.

      • linguae 6 hours ago

        He’s still here. In fact, in December he gave his annual Christmas lecture, and last month he was a guest at a Computer History Museum event.

      • atomic128 6 hours ago

        Excited? I doubt that. I'm guessing you haven't read his books.

        • CharlesW 6 hours ago

          He seems pretty fascinated with the possibilities.

          https://cs.stanford.edu/~knuth/chatGPT20.txt

          • atomic128 6 hours ago

            "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same. Best regards, Don"

            • CharlesW 5 hours ago

              There's more than one cherry to pick if one needs Mr. Knuth to have a purely-negative opinion about LLMs, but naturally any fascination is offset by the same concerns that any sane technologist has. In any case, it's all in his post.

        • seanmcdirmid 6 hours ago

          The techno pessimists on HN are probably not PhDs in computer science. I don’t think they understand what it takes to get there, and how it shapes your thinking afterwards.

          • defrost 5 hours ago

            Neither Wolfram nor Knuth has a PhD in Computer Science, yet many would agree that both understand "what it takes to get there", as do many others who live sans a PhD in Comp. Sci.

            • robotresearcher 4 hours ago

              Needlessly pedantic.

              Knuth's PhD is in mathematics, like Alan Turing, and many other significant computer scientists.

              • defrost 2 hours ago

                > Needlessly pedantic.

                You don't have to forewarn readers about your comments here; we're all needlessly pedantic.

                That aside, the guts of this sub-branch is the correlation between {techno pessimists on HN} and {people qualified to understand LLMs (workings and implications)}.

                Personally I wouldn't limit set two to "PhDs in computer science", or even accept that {all PhDs in Comp Sci} is a subset of set two, as I made clear with my comment; nor would I argue a lack of overlap between sets one and two.

                I'm interested to hear where you stand.

      • add-sub-mul-div 6 hours ago

        Hopefully some are visionary enough to be dismayed that the endgame of their field is the acceleration of slop and fraud, the end of customer service, and the end of the reading of full, original documents.

        I can't imagine being excited about any of that unless I was trying to make money from it.

        • bigstrat2003 4 hours ago

          > the end of the reading of full, original documents

          That's one that always gets me: people who use LLMs to summarize everything. It's like, bro, how lazy are you that you can't be bothered to read a handful of paragraphs of text? That takes all of 30 seconds. I can understand trying to get a computer to summarize a document which is dozens of pages long (though I would be concerned about hallucinations), but a lot of the tasks people use LLMs for are really easy already.

    • Razengan 5 hours ago

      > …LLMs have perverted the practical application of the art of computer programming. ("The LLM understands so I don't have to.") It's sad it happened during his lifetime.

      If you look at magazine articles or TV shows and ads from the 1980s (a fun rabbit hole on YouTube, like the BBC Archive), the general promise was that "Computers can do anything, if you just program them."

      Well, nobody could figure out how to program them. (except the outcasts like us who went on to suffer for the rest of our lives for it :')

      OS makers like Microsoft/Apple/etc all had their own ideas about how we should make apps and none of them wanted to work together and still don't.

      With phones & "AI" everywhere we are actually closer to that original promise of everyone having a computer and being able to do anything with it, that isn't solely dictated by corporations and their prepackaged apps:

      Ideally ChatGPT etc should be able to create interactive apps on the fly on your iPhone etc. Imagine having a specific need and just being able to say it and get a custom app right away just for you on your device.

      • atomic128 5 hours ago

        Past progress in software engineering is a tower of well-defined abstractions.

        Compilers for languages that make specific guarantees about the semantics of their translation to machine code.

        Libraries with well-defined interfaces that let you stand on the shoulders of others by understanding said interfaces and ignoring the internals.

        This is how concrete progress is made. You build on solid blocks.

        That era is ending.

        • rixed 3 hours ago

          That era ended 20 years ago. It's called "industrialization", a process that has happened to many other crafts in the past. AI is just the latest blow.

  • kazinator 4 hours ago

    LinkedIn version of the history of CS:

    In the beginning was assembly language, then we got C, followed by C++ bringing in OOP and Java making it safe, ...

  • gnabgib 3 hours ago

    (2015) At the time (180 points, 55 comments) https://news.ycombinator.com/item?id=8796212

  • smitty1e 6 hours ago

    Title should note that this is a 2015 post.