Wirth's Revenge

(jmoiron.net)

84 points | by signa11 9 hours ago

23 comments

  • satvikpendem 4 hours ago

    While the author says that much of it can be attributed to the layers of software added in between to make things more accessible to people, in my experience most cases come down to developers being lazy when building their applications.

    For example, there was the case of Claude Code using React to decide what to render in the terminal, which itself adds latency, and of its devs lamenting that they have "only" 16.7 ms to hit 60 FPS. On a terminal, which has been capable of far more than that since its inception. Primeagen shows an example [0] of how even the most change-heavy terminal applications run so much faster that there is no need to diff anything, just display the new change! (A rough timing sketch follows below the link.)

    [0] https://youtu.be/LvW1HTSLPEk
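
    As a rough illustration (my sketch, not from the linked video), here is a tiny Python script that times undiffed full-screen redraws; the screen size and frame count are arbitrary assumptions, and the numbers depend entirely on the terminal emulator:

      # Rough timing sketch: how fast will a terminal accept full-screen
      # redraws with no diffing at all? ROWS/COLS and N are arbitrary choices.
      import sys
      import time

      ROWS, COLS = 40, 120
      # "\x1b[H" homes the cursor; then we repaint every cell of the frame.
      frame = "\x1b[H" + "\n".join("#" * COLS for _ in range(ROWS))

      N = 500
      start = time.perf_counter()
      for _ in range(N):
          sys.stdout.write(frame)
          sys.stdout.flush()
      elapsed = time.perf_counter() - start

      # Clear the screen, then report redraws per second and time per frame.
      sys.stdout.write("\x1b[2J\x1b[H")
      print(f"{N / elapsed:.0f} full redraws/sec, {1000 * elapsed / N:.2f} ms per frame")

    On most modern emulators this tends to come in well under the 16.7 ms per-frame budget, which is the point: brute-force repainting is already cheap before any diffing cleverness.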

    • Cthulhu_ 3 hours ago

      It makes me wish more graphics programmers would jump over to application development - 16.7ms is a huge amount of time for them, and 60 frames per second is such a low target. 144 or bust.

      • moring 2 hours ago

        I don't think graphics devs changing over would change much. They would probably not lament over 16ms, but they would quickly learn that performance does not matter much in application development, and start building their own abstraction layer cake.

        It's not even that performance is unimportant in absolute terms, but rather that the general state of software is so abysmal that performance is the least of your problems as a user, so you're not going to get excited over it.

      • jacquesm 3 hours ago

        And embedded too. But then again, they do what they do precisely because in that environment those skills are appreciated, and elsewhere they are not.

    • elliotec 3 hours ago

      Yeah, I think a lot of this can be attributed to institutional and infrastructural inertia, abstraction debt, second+-order ignorance, and narrowing of specialty. The people now building these things are probably good enough at React etc. to do whatever needs to be done with it almost anywhere, but their focus needs to be on ML.

      The people who could make terminal stuff super fast at a low level are retired on an island, dead, or lack the other specialties required by companies like this. And users don't care that much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.

  • nickm12 3 hours ago

    I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs to do what we want, rather than having the agents do that work themselves.

    Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather have an AI model sort 5000 numbers "in its head", or have it write a program to do the sort and execute that program? (See the toy sketch below.)

    I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are a GenAI adopter.
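
    To make the bet concrete, a toy sketch (mine, not from the article) of the "write a program" branch; it is a few deterministic lines whose output is trivially checkable:

      # Toy sketch: sort 5000 numbers with a program instead of "in the model's head".
      import random

      nums = [random.randint(0, 10**6) for _ in range(5000)]
      result = sorted(nums)

      # The outcome is mechanically verifiable, which is the whole point of the bet.
      assert all(a <= b for a, b in zip(result, result[1:]))
      print(result[:3], "...", result[-3:])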

  • emsign 2 hours ago

    LLMs are a very cool and powerful tool when you've learned how to use them effectively. But most people probably haven't, and thus use them in a way that produces unsatisfying results while maximizing resource and token use.

    The cause of that is that the companies with the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.

  • firmretention an hour ago

    The Reiser footnote was on point. I couldn't resist clicking it to find out if it was the same Reiser I was thinking about.

  • cadamsdotcom 2 hours ago

    The actual constraint is how long people are willing to wait for results.

    If the results are expected to be really good, people will wait a seriously long time.

    That’s why engineers move on to the next feature as soon as the thing is working - people simply don’t care if it could be faster, as long as it’s not too slow.

    It doesn't matter what's technically possible; in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.

    • deepserket an hour ago

      > It doesn't matter what's technically possible; in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.

      Recently I've found myself falling for this preconception when an LLM starts spitting out text just a couple of seconds after a complex request.

  • xnorswap 3 hours ago

    An interesting article and it was refreshing to read something that had absolutely no hallmarks of LLM retouching or writing.

    It contains a helpful insight that there are multiple modes in which to approach LLMs, and that helps explain the massive disparity of outcomes using them.

    Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?

  • iberator 3 hours ago

    Dull article with no point, numbers, or anything of value. Just some quasi-philosophical mumbling. Wasted like 10 minutes and I'm still not sure what the point of the article was.

    • pseudony 3 hours ago

      Have you considered that the article might be fine, but it's more a case of you not getting the point?

  • dist-epoch an hour ago

    Wirth was complaining about the bloated text editors of the time, which used unfathomable amounts of memory: 4 MB.

    Today the same argument is rehashed: it's outrageous that VS Code uses 1 GB of RAM when Sublime Text works perfectly in a tiny 128 MB.

    But notice that today's tiny/optimized/well-behaved figure, 128 MB, is roughly 30 times larger than the outrageously decadent amount from Wirth's time.

    If you told Wirth "hold my beer, my text editor needs 128 MB", he would simply not comprehend such a concept; it would seem like you have no idea what numbers mean in programming.

    I can't wait for the day, 20 years from now, when programmers talk about the amazingly optimized editors of today: VS Code, which lived in a tiny 1 GB of RAM.
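
    For the arithmetic, a quick aside of my own restating the numbers in this comment:

      # Quick aside: each generation's "bloat" becomes the next generation's "lean" baseline.
      wirth_bloat_mb = 4     # the editor size Wirth complained about
      sublime_mb = 128       # today's "lean" example
      vscode_mb = 1024       # today's "bloated" example (1 GB)

      print(sublime_mb / wirth_bloat_mb)  # 32.0: today's lean vs. Wirth's outrage
      print(vscode_mb / sublime_mb)       # 8.0: today's bloat vs. today's lean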

    • weinzierl 33 minutes ago

      This will probably not happen, because of physics.

      Both compute and memory are getting closer to fundamental physical limits, and it is unlikely that the next 60 years will be anything like the last 60.

      While the argument for compute is relatively simple, it is a bit harder to see for memory. We are not near any limit on the sheer size of our memory; the limiting factor is how much storage we can bring how close to our computing units.

      Now, there is still headway to be made and low-hanging fruit to pick, but I think we will eventually see a renaissance of appreciation for efficient programs in our lifetimes.

  • gostsamo 36 minutes ago

    I suspect that the next generation of agentically trained LLMs will have a mode where they first consider solving the problem by writing a program before doing stuff by hand. At least, it would be interesting if in a few months the LLM greets me with "Keep in mind that I run best on Ubuntu with uv already installed!"

  • casey2 5 minutes ago

    It's inevitable even if it's unnecessary. Capitalism necessitates 6% growth year on year. Since IT services are the growth sector, of course 25% of power will go to data centers in 2040.

    The EU should do a radical social restructuring betting on no growth. Perhaps even banning all American tech. A modern Tokugawa.

  • jokoon 3 hours ago

    Hardware is cheaper than programmers

    Maybe one day that will change

    • tgv an hour ago

      Idk. What will these programmers do afterwards? Build more shoddy code? Perhaps it's a better idea to focus on what's necessary and not run from feature to feature at top speed. This might require some rethinking in the finance department, though.

    • srean 2 hours ago

      Thanks to AI-driven scarcity of hardware, it's already coming true.

  • jongjong 36 minutes ago

    We haven't yet lost the war against complexity. We would know if we had, because all software would grind to a halt due to errors. We're getting close, though; some aspects of software feel deeply dysfunctional, like 2FA and CAPTCHAs. They're perfect examples of trying to improve something (security) by adding complexity... and it fails spectacularly... It fails especially hard because the people who decided to force these additional hurdles on users are still convinced they're useful, because they have a severely distorted view of the average person's reality. Their trade-off analysis is completely out of whack.

    The case of security is particularly pernicious, because complexity has an adverse impact on security; trying to improve security by adding yet more complexity is extremely unwise... Eventually the user loses access to the software altogether. E.g. they forget their password because they were forced to include some weird characters in it, or they download a fake password manager which turns out to be a virus, or they download a legitimate password manager like LastPass, which gets hacked because obviously it's a popular target for hackers... Even if everything goes perfectly and the user is so deeply conditioned that they don't mind using a password manager, their computer may crash one day and they may lose access to all their passwords... Or the company may require them to change their password after 6 months, the password manager misses the update and doesn't know the new password, and the user isn't 'approved' to use the 'forgot my password' feature... Or the user forgets their password manager's master password, and when they try to recover it via their email, they realize that the password for their email account is inside the password manager... It's INFURIATING!!!

    The root problem is that the average computer is full of vulnerabilities and cannot be trusted 100%, so you need a second device just in case the computer was hacked... But it's not particularly useful, because if someone infected your computer with a virus, they can likely also infect your phone the next time you plug it into your computer to charge it... It's not quite two-factor... So much hassle for so little security benefit... Especially for the average person who is not a Fortune 500 CEO. Company CEOs have a severely distorted view of how often the average person is targeted by scammers and hackers. The last time someone tried to scam me was 10 years ago... The pain of having to pull out my phone every single day, multiple times per day, to type in a code is NOT WORTH the tiny amount of security it adds in my case.
