Let's properly analyze an AI article for once

(nibblestew.blogspot.com)

103 points | by pabs3 11 hours ago

68 comments

  • vunderba 7 hours ago

    Spot-on critical analysis of the blog post "Developers reinvented" by GitHub CEO Thomas Dohmke, which includes such quotes as:

    > Many Computer Science (CS) programs still center around problems that AI can now solve competently.

    Yeah. No, they do not. Competent CS programs focus on fundamentals, not your ability to invert a binary tree on a whiteboard. [1]

    Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.

    [1] https://x.com/mxcl/status/608682016205344768

    • Gigachad 4 hours ago

      Schools still make you understand math manually even though calculators have been perfect for decades. Because it turns out having some magic machine spit out an answer you can’t understand isn’t good, and you’ll have no ability to tell when and why the answer is incorrect.

      • jacquesm 2 hours ago

        I think you should do without a calculator until the third year of high school. There really is no substitute for being able to do basic math in your head. It also helps you later on to spot order-of-magnitude errors and the like, and to make good first-order approximations.

        • dleeftink 38 minutes ago

          I'm a bit different, then: maths only started to make sense well after picking up a calculator. Wolfram notebooks, Excel, and SQL were much easier for me to grok than attempting the equivalent by heart/head/hand.

          Nowadays, math concepts or papers only make sense once I can properly implement them as a query; it's somehow a basic translation step I need.

    • thrown-0825 5 hours ago

      Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

      You can teach fundamentals all day long, but on their first day of work they are going to be asked to adhere to some internal corporate process so far removed from their academic experience that they will feel like they should have just taught themselves online.

      • bregma 2 hours ago

        Computer programming is to computer science as working a cash register is to economics.

        • Der_Einzige 31 minutes ago

          If this is true, then I hope AI kills both computer programming and computer science as fast as possible.

          It’s not. If you’re a computer scientist who’s not coding, you are a bad computer scientist. “Those who cannot do, teach”

      • meindnoch 5 hours ago

        >Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

        80% of software development boils down to (see the sketch after the list):

        1. Get JSON(s) from API(s)

        2. Read some fields from each JSON

        3. Create a new JSON

        4. Send it to other API(s)
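
        Something like this, as a minimal Python sketch (the endpoints and field names are made up):

          # Hypothetical endpoints and field names, purely illustrative.
          import requests

          def sync_orders():
              # 1. Get JSON(s) from API(s)
              orders = requests.get("https://api.example.com/orders").json()

              # 2. Read some fields from each JSON, 3. Create a new JSON
              payload = [
                  {"id": o["id"], "total": o["amount"], "currency": o["currency"]}
                  for o in orders
              ]

              # 4. Send it to other API(s)
              requests.post("https://billing.example.com/invoices", json=payload)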

        Eventually people stopped pretending that you need a CS degree for this, and it spawned the coding bootcamp phenomenon. Alas it was short-lived, because ZIRP was killed, and as of late, we realized we don't even need humans for this kind of work!

        • Esophagus4 2 hours ago

          I’ll challenge that assumption.

          All of my rock star engineers have CS degrees OR are absolute savants of computer science who taught themselves. They solve problems, not just write CRUD apps.

          I don’t want people who only have surface level knowledge of calling JSON APIs. They tend to be a serious drag on team productivity, and high performers don’t want to work with them.

        • thrown-0825 4 hours ago

          And they were right.

          We no longer hire junior engineers because it just wasn't worth the time to train them anymore.

      • saagarjha 4 hours ago

        I don't get this viewpoint. Yes, of course when you start at your job you will have to learn how JIRA works or how to write a design doc. Obviously nobody is going to teach you that in college. But nobody is going to teach you that online either!

        • thrown-0825 4 hours ago

          How about performing git bisect to identify when a regression was introduced, or debugging a docker container that is failing to start in your local environment, writing some unit tests for a CI suite, merging a pull request that has some conflicts, etc etc etc.

          These are just a couple of examples of day-one basics of the profession that I see juniors really struggle with, and that are consistently missed by interview processes focused on academic knowledge.

          People won't teach you how to solve these problems online, but you will learn how to solve them while teaching yourself.

          • lelanthran 3 hours ago

            > How about performing git bisect to identify when a regression was introduced, or debugging a docker container that is failing to start in your local environment, writing some unit tests for a CI suite, merging a pull request that has some conflicts, etc etc etc

            That's called vocational training and isn't usually taught as part of academic curricula.

            If you want non-academic graduates, you've got your pick.

            Maybe having a technical addendum to academic curricula that makes practical student work at the end of the studies a criterion for graduation might help. That's how it is done for doctors, lawyers, and accountants, after all. The difference is that they graduate but can't practice until they have completed that training.

          • saagarjha 4 hours ago

            Yes, and I did plenty of that during my university education. Except Docker because at the time I refused to use Docker.

            • thrown-0825 4 hours ago

              Great, and if you got a job with us I would have to explain how Docker works, because you refused to learn it for some reason.

              The point is that what is deemed important in academic circles is rarely important in practice, and when it is, I find it easier to explain a theory or algorithm than to teach a developer how to use an industry-standard tool set.

              We should be training devs like welders and plumbers instead of like mathematicians, because practically speaking the vast majority of them will never use that knowledge and will have to develop an entirely new skill set the day they graduate.

              • saagarjha 4 hours ago

                I use standard algorithms all the time. Sometimes I have to come up with new ones. And that's not just when I'm working in performance-sensitive roles.

                Also, btw, I did eventually learn how to use Docker. I actually vaguely knew how it worked for a while, but I didn't want a Linux VM anywhere near my computer; eventually I capitulated, provided I didn't have a Linux VM running all the time.

                • jrh3 39 minutes ago

                  Ditto. I avoided Docker as long as possible and look forward to the day it is replaced.

              • lelanthran 3 hours ago

                IME it is far far easier to teach a CS graduate how to use some software than to teach a user basic CS principles.

                Besides, at the rate of change we see in this industry, focusing on producing users instead of developers will make half the stuff outdated by the time the student graduates.

                I mean, okay, let's teach Jira. Then the industry switches to Azure DevOps.

                That's the general problem with vocational training: it ages much faster than academic stuff.

      • janalsncm 4 hours ago

        A weaker version of your argument that might be more popular here involves math requirements.

        I had to take calculus, and while I think it’s good at teaching problem solving, that’s probably the best thing I can say about it. Statistics, which was not required, would also check that box and is far more applicable on a regular basis.

        Yes calculus is involved in machine learning research that some PhDs will do, but heck, so is statistics.

        • RugnirViking an hour ago

          Calculus is used in basically everything hard or worthwhile. It's like the most useful part of maths to learn, along with linear algebra, for doing real-world stuff.

          I've personally used it in my career for machine learning, non-ML image processing, robot control, other kinds of control, animations, movement in games, statistics, physics, financial modelling, and more.

        • wizzwizz4 3 hours ago

          Except statistics requires calculus – otherwise it makes no sense, and you can't determine when the tools in the toolbox are applicable.

        • crinkly an hour ago

          Strangely, I see a lot of people quite regularly hitting things with the brute-force hammer rather than using calculus.

          Some of our financial modelling stuff was CPU-bound and took seconds because someone couldn’t be bothered, or didn’t know how, to work out an integral.
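
          A toy version of what I mean in Python (invented numbers: the present value of a constant cash flow under continuous discounting):

            import math

            c, r, T = 100.0, 0.05, 30.0  # annual cash flow, discount rate, years

            # Brute force: accumulate the discounted stream in tiny time steps.
            dt = 1e-5
            pv_brute = dt * sum(c * math.exp(-r * k * dt) for k in range(int(T / dt)))

            # Working out the integral instead:
            # PV = integral_0^T c*exp(-r*t) dt = c * (1 - exp(-r*T)) / r
            pv_exact = c * (1 - math.exp(-r * T)) / r

            print(pv_brute, pv_exact)  # both ~1553.74; one takes seconds, one is instant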

      • skywhopper 2 hours ago

        Computer science is not the same thing as software development.

      • crinkly 5 hours ago

        Depends what you do. On the sudden large accumulation of layers and corporate SaaS crap in the industry since about 2001, you're right. But for those of us a bit further down the stack, or from before that, it's still pretty useful.

        • thrown-0825 4 hours ago

          Absolutely, but we are a dying breed in the same way that nobody really knows how to build nuclear power plants anymore.

          Most CS grads will end up in a position that has more in common with being an electrician or plumber than an electrical engineer; the difference is that we can't really automate installing wires and pipes to the same degree we have automated service integration and making API calls.

          • crinkly 4 hours ago

            Not a dying breed. There is just a relatively static demand. Proportionally it looks worse because the rest of the industry has grown massively.

            Really the problem is there are too many CS grads. There should be a software engineering degree.

            • thrown-0825 4 hours ago

              Personally I am a fan of bootcamp grads for simple stuff like FE.

              They lack academic knowledge but understand the problem domain and tools, and are generally more teachable with lower salary expectations.

              I would like to see more "trade schools", and it's one of my pet peeves when devs call themselves engineers despite not being licensed or regulated in any meaningful way.

              • bsenftner 39 minutes ago

                Those bootcamps create assembly line capable people, and nothing else. If your work is so static and unchanging that you can use such people, great; but those people, doing that kind of limited-scope work, are being used and discarded with little to no economic ladder to better their situation. It's exploitation of others, which is commonplace today, but I'd try to do better. It's still a poor way to treat others.

                • thrown-0825 25 minutes ago

                  You just described people who go to bootcamps or trade schools as "assembly line capable, and nothing else".

                  By your definition, is running a welding company also exploitative?

    • charcircuit 6 hours ago

      >not your ability to invert a binary tree on a whiteboard.

      Knowing how to swap two variables and how to traverse data structures are fundamentals.

      • kubb 5 hours ago

        I’m surprised that the creator of Homebrew didn’t know how to do that.

        • josephg 4 hours ago

          Of course, lots of people are employed despite giant holes in their knowledge of CS fundamentals. There’s more to being an effective developer than having good fundamentals. A lot more.

          But there’s still a lot of very important concepts in CS that people should learn. Concepts like performance engineering, security analysis, reliability, data structures and algorithms. And enough knowledge of how the layers below your program work that you can understand how your program runs and write code which lives in harmony with the system.

          This knowledge is way more useful than a lot of people claim, especially in an era of ChatGPT.

          If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.

        • meindnoch 5 hours ago

          If you spend enough time with Homebrew, it's actually not that surprising.

  • rsynnott 4 hours ago

    > Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

    This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.

    • kibwen an hour ago

      > It's a truly baffling attitude.

      I wish, but no, it's not baffling. We live in a post-truth society, and this is the sort of fundamental nihilism that naturally results.

      • CoastalCoder 2 minutes ago

        I agree that it fits in with a certain trope. But do people really believe that?

        What I mean is:

        Some people recognize that there are circumstances where the social aspects of agreement seem to be the dominant concern, e.g. when the goal is to rally votes. The cynical view is that "good beliefs" in that scenario serve group cohesion rather than correspondence with objective reality.

        But most everyone would agree that there are situations where correspondence with objective reality is the main concern, e.g. when someone is designing and building the bridge they cross every day.

    • Gigachad 4 hours ago

      They always market the % of lines generated by AI. But if you are forced to use a tool that constantly inserts generations, that number is always going to be high even if the actual benefit is nil or negative.

      If the AI tool generates a 30-line function which doesn’t work, and you spend time testing and modifying the 3 lines of broken logic, then the vast majority of the code was AI-generated even though it didn’t save you any time.
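
      To make the arithmetic concrete (a trivial sketch with invented numbers):

        # 30 generated lines, of which you had to rewrite the 3 broken ones.
        generated, rewritten = 30, 3
        ai_share = (generated - rewritten) / generated
        print(f"{ai_share:.0%} 'AI-generated'")  # 90%, regardless of time saved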

      • diggan 4 hours ago

        > They always market the % of lines generated by AI

        That's crazy; it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).

    • heresie-dabord 20 minutes ago

      > the fundamental guiding ideology of LLM boosterism

      It's the same as the ideology of FOMO Capitalism:

      = The billionaire arseholes are saying it, it must be true

      = Stock valuations are in the trillions, there must be enormous value

      = The stock market is doing so well, your concerns about fundamental social-democratic principles are unpatriotic

      = You need to climb aboard or you will lose even worse than you lost in the crypto-currency hornswoggle

    • tempodox an hour ago

      And yet this fine example of a used car salesman is being rewarded by everyone and their dog hosting their stuff on GitHub, feeding the Copilot machinery with their work for free, so it can be sold back to them.

    • thrown-0825 4 hours ago

      All of the crypto grifters have shifted to AI.

      Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.

      • bsenftner 33 minutes ago

        They never mattered, at least as long as you've been alive. The "Soviet statistics" discussion at the start of the article was an amazing example of Western capitalist propaganda, because the same nonsense with statistics is also out of control in the West, just not so mind-numbingly obvious. The USA is the king of propaganda, far ahead of all rivals.

  • vemv 5 hours ago

    > Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

    Spot on. I think this every time I see AI art on my LinkedIn feed.

    • Gigachad 4 hours ago

      I’ve become super sensitive to spotting it now. When I see a restaurant using AI food pictures, I don’t want to eat there. Why would I want to do business with people who are dishonest enough to lie about the basics?

      • thrown-0825 4 hours ago

        Using food pics as an example is hilarious; food photos in advertising have been "faked" for about as long as photography has existed.

        • Gigachad 4 hours ago

          Sure, they went through a long process of dressing them up as nicely as possible, but those were still just ideal versions of the thing you actually receive, while the AI food looks pretty much nothing like what the restaurant is actually making.

          And outside of mega-chains like McDonald's, most restaurants use entirely real images.

          • thrown-0825 4 hours ago

            This is provably false.

            There is a large industry based around faking food; you can watch some pretty interesting videos on the process, and you will quickly find that they rarely use anything resembling the actual food you will be eating.

            Japan is an extreme example, but there they literally use wax models to advertise their food.

            • Gigachad 3 hours ago

              Those fake food models are still made to look just like the actual meal. I don't know if you've looked at any of these AI food pictures, but they look nothing like the end result. They also signal low effort and low initial investment, unlike commissioning custom models of ramen bowls.

    • pacifika 4 hours ago

      Isn’t this just the digital version of hand-waving and guesswork? Not checking assumptions has long been a leading cause of weak software delivery.

  • nhinck3 4 hours ago

    This isn't the first time GitHub (or its CEO) has produced a completely garbage article about the wonders of AI, and it won't be the last.

  • bsenftner an hour ago

    This analysis is not complete; it needs to continue with an analysis of how many people, and critically how many business owners, believe the lies and non-truths propagandized by that article and the entire marketing push around LLMs.

    That article is not for developers; it's for business owners, their management, and the investor class. If they believe it, they will try to enforce it.

    This is serious, destroy-our-industry idiot logic.

  • pcwelder 7 hours ago

    >I found, a required sample size for just one thousand people would be 278

    It's interesting to note that for a billion people this number changes to a whopping ... 385. Doesn't change much.

    I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), when estimating the proportion of people satisfying a criterion, the margin of error is 22%.

    While bad, if done properly, it may still be insightful.
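
    For anyone who wants to check the numbers: this is just Cochran's formula with a finite-population correction. A quick Python sketch, assuming 95% confidence and the worst-case proportion p = 0.5:

      import math

      def required_sample_size(population, moe=0.05, z=1.96, p=0.5):
          # Cochran's formula, then the finite-population correction.
          n0 = z**2 * p * (1 - p) / moe**2
          return math.ceil(n0 / (1 + (n0 - 1) / population))

      def margin_of_error(n, z=1.96, p=0.5):
          # Worst-case margin of error for a sample of size n.
          return z * math.sqrt(p * (1 - p) / n)

      print(required_sample_size(1_000))          # 278
      print(required_sample_size(1_000_000_000))  # 385
      print(f"{margin_of_error(22):.0%}")         # 21%, roughly the 22% above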

  • crinkly 4 hours ago

    Professional statistician here. Not that I get to do any of that these days, bar read Significance magazine and get angry occasionally.

    Looking at the original blog post, it's marketing copy, so there's no point in even reading it. The conclusion is in the headline, and the methodology is starting with what you want to say and working back to supporting information. If it were in a more academic setting, it would be the equivalent of doing a meta-analysis and p-hacking your way to the pre-defined conclusion you wanted.

    Applying any kind of rigour to it is pointless but thanks for the effort.

  • quantum_state an hour ago

    If history is any reference, vibe coding will turn out to be the most effective tool for producing technical debt…

    • tempodox an hour ago

      If history is any indication, too few will care to make a difference.

      • RugnirViking an hour ago

        A near-insurmountable amount of problems and technical debt is due to be created in problem teams. Sounds like a great time to get into consulting. It's like knowing ahead of time that Y2K is coming. Learn your debuggers and your slide decks, folks.

  • ma73me 7 hours ago

    I'll never judge an article by its HN header again

  • NoahZuniga 5 hours ago

    It's a bit annoying that this article on AI is itself hallucinating:

    > To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself.

    He never said this. It is just false, and it seems the author didn't even fact-check whether Hayao Miyazaki ever said it.

    • meindnoch 4 hours ago

      Context: https://youtu.be/ngZ0K3lWKRc

      Miyazaki is repulsed by an AI-trained zombie animation, which reminded him of a friend with disabilities. So the oft-quoted part is about that zombie animation.

      When the team tells him that they want to build a machine that can draw pictures like humans do, he doesn't say anything, just stares.

    • tough 5 hours ago

      Right, it was an out-of-context presentation by their own company of some very rough AI-generated content, years ago, before the existence of LLMs.

      But yeah, sensationalism and all, and people don't do research, so it sticks unless you remember it well.

      It also got lost in translation from Japanese to English: the work sampled by their engineers depicted some kind of zombie-like figures in a very rough form, thus the "insult to life", as in literally.

  • skywhopper 2 hours ago

    “It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: ‘Either Embrace AI or Get Out of This Career’”

    Is it clickbait if it’s literally quoting the author? I mean, yes, it was clickbait by Thomas Dohmke, but not by the source that used that headline.

  • bgwalter 5 hours ago

    "AI" could certainly replace Dohmke. It excels at writing such meaningless articles.

  • sixhobbits 7 hours ago

    > The sample size is 22. According to this sample size calculator I found, a required sample size for just one thousand people would be 278

    I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it seems a bit like the pot calling the kettle black. You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

    The bit about running the study multiple times also seems kinda random.

    I'm sure this study of 22 people has a lot of room for criticism but this criticism seems more ranty than 'proper analysis' to me.

    • foma-roje 6 hours ago

      > You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

      Certainly? Now, who is ranting?

      • brabel 5 hours ago

        It’s a basic property of statistics. You would need an extremely varied population to justify sampling 25% of it, and almost no human population is that varied in practice. Humans are actually very uniform.

    • astrobe_ 6 hours ago

      > The bit about running the study multiple times also seems kinda random.

      Reproducibility? But knowing it comes from the CEO of GitHub, who has a vested interest in the matter, because AI is one of the things that will allow GitHub to maintain its position in the market (or to increase revenue from its paid plans, once everyone is hooked on vibe coding, etc.), anyone would take it with a grain of salt anyway. It's like studies funded by big pharma.

  • righthand 8 hours ago

    This was excellent!

  • croes 7 hours ago

    The statistics part will also be relevant for the rest of Trump’s presidency