626 comments

  • asdfman123 9 hours ago

    I work for Google, and I just got done with my work day. I was just writing I guess what you'd call "AI generated code."

    But the code completion engine is basically just good at finishing the lines I'm writing. If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.

    So basically, it's a helpful productivity tool but it's not doing any engineering at all. It's probably about as good, maybe slightly worse, than Copilot. (I haven't used it recently though.)

    • NotAnOtter 31 minutes ago

      I also work at google (until last Friday). Agree with what you said. My thoughts are

      1. This quote is clearly meant to exaggerate reality, and they are likely including things like fully automated CLs/PRs, which have been around for a decade, as "AI generated".

      2. I stated before that if a team of 10 is equally as productive as a team of 8 utilizing things like Copilot, it's fair to say "AI replaced 2 engineers", in my opinion. More importantly, tech leaders would be making this claim if it were true. Copilot and its clones have been around long enough now for the evidence to be in, and no one is stating "we've replaced X% of our workforce with AI" - therefore my claim is (by 'denying the consequent') that using Copilot does not materially accelerate development.

      • onion2k a minute ago

        no one is stating "we've replaced X% of our workforce with AI"

        That's only worth doing if you're trying to cut costs though. If the company has unmet ambitions there's no reason to shrink the headcount from 10 to 8 and have the same amount of output when you can keep 10 people and have the output of 12 by leveraging AI.

    • nlehuen 16 minutes ago

      I also work at Google and I agree with the general sentiment that AI completion is not doing engineering per se, simply because writing code is just a small part of engineering.

      However in my experience the system is much more powerful than you described. Maybe this is because I'm mostly writing C++ for which there is a much bigger training corpus than JavaScript.

      One thing the system is already pretty good at is writing entire short functions from a comment. The trick is not to write:

        function getAc...
      
      But instead:

        // This function smargls the bleurgh
        // by flooming the trux.
        function getAc...
      
      This way the completion goes much farther and the quality improves a lot. Essentially, use comments as the prompt to generate large chunks of code, instead of giving minimum context to the system, which limits it to single line completion.
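
      To make this concrete, here's a made-up sketch of the same idea in Python rather than the JS above (the function, its arguments and its behaviour are all invented for illustration). The human writes the comment and the def line; the body is the kind of completion the model can then produce:

        # Return the handler registered for `action`, falling back to a no-op
        # handler and logging a warning when the action is unknown.
        def get_action_handler(action, handlers, logger):
            # Everything from here down is the sort of body the model fills in.
            handler = handlers.get(action)
            if handler is None:
                logger.warning("no handler registered for %r", action)
                return lambda *args, **kwargs: None
            return handler
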
    • davedx an hour ago

      I'm working on a CRM with a flexible data model, and ChatGPT has written most of the code. I don't use the IDE integrations because I find them too "low level" - I work with GPT more in a sort of "pair programming" session: I give it high level, focused tasks with bits of low level detail if necessary; I paste code back and forth; and I let it develop new features or do refactorings.

      This workflow is not perfect but I am definitely building out all the core features way faster than if I wrote the code myself, and the code is in quite a good state. Quite often I do some bits of cleanup, refactorings, making sure typings are complete myself, then update ChatGPT with what the code now looks like.

      I think what people miss is there are dozens of different ways to apply AI to your day-to-day as a software engineer. It also helps with thinking things through, architecture, describing best practices.

      • littlestymaar 34 minutes ago

        I share your sentiment. I've written three apps where I've used language models extensively (a different one for each: ChatGPT, Mixtral and Llama-70B), and while I agree that they were immensely helpful in terms of velocity, there are a bunch of caveats:

        - it only works well when you write code from scratch; context length is too short to be really helpful for working on an existing codebase.

        - the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively. If you trusted the output and had to debug it later, it would be a painfully slow process.

        Also, I didn't really notice a significant difference in code quality: even the best model (GPT-4) writes code that doesn't work, and I find it much more efficient to use open models on Groq due to the really fast inference. Watching ChatGPT slowly type is really annoying (I didn't test o1 and I have no interest in doing so because of its very low throughput).

        • davedx 10 minutes ago

          > context length is too short to be really helpful for working on an existing codebase.

          This is kind of true, my approach is I spend a fairly large amount of time copy-pasting code from relevant modules back and forth into ChatGPT so it has enough context to make the correct changes. Most changes I need to make don't need more than 2-3 modules though.

          > the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively.

          I think this really depends on what you're building. Making a CRM is a very well trodden path so I think that helps? But even when it came to asking ChatGPT to design and implement a flexible data model it did a very good job. Most of the code it's written has worked well. I'd say maybe 60-70% of the code it writes I don't have to touch at all.

          The slow typing is definitely a hindrance! Sometimes when it's a big change I lose focus and alt-tab away, like I used to do when building large C++ codebases or waiting for big test suites to run. So that aspect saps productivity. Conversely though I don't want to use a faster model that might give me inferior results.

    • atoav 4 hours ago

      So this is basically the google CEO saying "a quarter of our terminal inputs is written by a glorified tab completion"?

      • asdfman123 3 hours ago

        Yes. Most AI hype is this bad. They have to justify the valuations.

        • remus 2 hours ago

          "tab completion good enough to write 25% of code" feels like a pretty good hit rate to me! Especially when you consider that a good chunk of the other 75% is going to be the complex, detailed stuff where you probably want someone thinking about it fairly carefully.

          • rantallion an hour ago

            The problem being that the time spent fixing the bugs in that 25% outweighs the time saved. Now that tools like Copilot are being widely used, studies are showing that they do not in fact boost productivity. All claims to the contrary seem to be either anecdotal or marketing fluff.

            https://www.techspot.com/news/104945-ai-coding-assistants-do...

          • Maxion 2 hours ago

            For me it's really goddam satisfying having good autocomplete, especially when you are just writing boilerplate lines of code to get the code into a state where you actually get to work on the fun stuff (the harder problems).

            • amelius an hour ago

              Also if your code gets sent to someone else's cloud?

              • mewpmewp2 an hour ago

                Have you ever had your code repository hosted by Github, Bitbucket, Gitlab or similar?

                If so, all your code is sent to cloud.

      • unglaublich 2 hours ago

        Yes, isn't that the essential idea of industrialization and automation?

        • OtherShrezzing an hour ago

          I think the critique here is that the AI currently deployed at Google hasn't meaningfully automated this user's life, because most IDEs already solved "very good autocomplete" more than a decade ago.

      • mmmpetrichor 3 hours ago

        Yeah, but he wants people to hear "reduce headcount by 25% if you buy our shit!"

        • mewpmewp2 an hour ago

          How do you know that? You are creating this false sense of expectations and hype yourself.

          I am going to argue the contrary. If AI increases productivity 2x, it opens up as many new use cases that previously didn't seem worth doing for the cost. So overall there will just be more work.

    • OnionBlender 5 hours ago

      Do people find these AI auto complete things helpful? I was trying the XCode one and it kept suggesting API calls that don't exist. I spent more time fixing its errors than I would have spent typing the correct API call.

      • ncruces an hour ago

        I particularly like the part where it suggests changes to pasted code.

        When I copy and paste code, very often it needs some small changes (like changing all xs to ys and at the same time widths to heights).

        It's very good at this, and does the right thing the vast majority of the time.

        It's also good with test code. Test code is supposed to be explicit and not very abstracted (so someone only mildly familiar with a codebase who's looking at a failing test can at least figure out the cause). This means it's full of boilerplate, and a smart code generator can help fill that in.
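
        As a made-up pytest-style illustration (the tiny module and all names are invented, and inlined so the example stands alone): once the first case is written by hand, the rest is exactly the kind of repetitive boilerplate the completion fills in from the pattern.

          # Hypothetical module under test, inlined to keep the example self-contained.
          DEFAULT_WIDTH = 640

          def parse_config(raw):
              return {"width": int(raw.get("width", DEFAULT_WIDTH)),
                      "height": int(raw.get("height", 480))}

          # The first test is hand-written; the rest are near-copies that a
          # completion engine will typically offer from the established pattern.
          def test_parse_width():
              assert parse_config({"width": "10"})["width"] == 10

          def test_parse_height():
              assert parse_config({"height": "20"})["height"] == 20

          def test_parse_defaults():
              assert parse_config({})["width"] == DEFAULT_WIDTH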

      • _kidlike an hour ago

        I really, really dislike the ones that get in your way. Like, I start typing something and it injects random stuff (yes, in the auto-complete colors). It's a similar feeling to hearing your own voice played back on a phone: it completely derails your thought process.

        In IntelliJ, thankfully, you can disable that part of the AI and keep the part that you trigger when you want something from it.

      • mu53 5 hours ago

        I find the simpler engines work better.

        I want the end of the line completed with focus on context from the working code base, and I don't want an entire 5 line function completed with incomplete requirements.

        It is really impressive when it implements a 5 line function correctly, but it's like hitting the lottery.

      • mcintyre1994 2 hours ago

        I like Cursor, it seems very good at keeping its autocomplete within my code base. If I use its chat feature and ask it to generate new code that doesn’t work super well. But it’ll almost always autocomplete the right function name as I’m typing, and then infer the correct parameters to pass in if they’re variables and if the function is in my codebase rather than a library. It’s also unsurprisingly really good at pattern recognition, so if you’re adding to an enum or something it’ll autocomplete that sensibly too.

        I think it’d be more useful if it was clipboard aware though. Sometimes I’ll copy a type, then add a param of that type to a function, and it won’t have the clipboard context to suggest the param I’m trying to add.

        • qeternity an hour ago

          I really like Cursor but the more I use it the more frustrated I get when it ends up in a tight loop of wanting to do something that I do not want to do. There doesn’t seem to be a good way to say “do not do this thing or things like it for the next 5 minutes”.

      • myworkinisgood 5 minutes ago

        Copilot is very good.

      • M4v3R 3 hours ago

        It probably depends on the tool you use and on the programming language. I use Supermaven autocomplete when writing Typescript and it’s working great, it often feels like it’s reading my mind, suggesting what I would write next myself.

      • I_AM_A_SMURF 3 hours ago

        I use the one at G and it's definitely helpful. It's not revolutionary, but it makes writing code less of a headache when I kinda know what that method is called but not quite.

      • vbezhenar 2 hours ago

        I mostly use one-line completes and they are pretty good. Also I really like when Copilot generates boilerplate like

            if err != nil {
              return fmt.Errorf("Cannot open settings: %w", err)
            }
      • guappa 2 hours ago

        I suspect that if you work on trivial stuff that has been asked on Stack Overflow countless times, they work very nicely.

      • skybrian 4 hours ago

        I often delete large chunks of it unread if it doesn't do what I expected. It's much like copy and paste; deleting code doesn't take long.

        • card_zero 3 hours ago

          So your test is "seems to work"?

          • skybrian 3 hours ago

            No, what I meant is that, much like when copying code, I only keep the generated source code if it's written the way I would write it.

            (By "unread" I meant that I don't look very closely before deleting if it looks weird.)

            And then write tests. Or perhaps I wrote the test first.

            • card_zero 3 hours ago

              Oh, if the AI doesn't do what you expected, got it.

      • karmasimida an hour ago

        It is useful in our use case.

        Realtime tab completion is good at some really mundane things within the current file.

        You still need a chat model, like Claude 3.5 to do more explorational things.

      • 0points 2 hours ago

        No, not at all.

        "Classic" IntelliSense is reliable, so why introduce a source of randomness into the process?

      • mdavid626 2 hours ago

        No, not at all. It’s just the hype. It doesn’t replace engineering.

      • sharpy 4 hours ago

        Often, yes. There were times when writing unit tests was just me naming the test case, with 99% of the test code auto-generated from the existing code and the name.

      • saagarjha 2 hours ago

        The one Xcode has is particularly bad, unfortunately.

    • alxjrvs 5 hours ago

      In my day to day, this still remains the main way I interact with AI coding tools.

      I regularly describe it as "The best snippet tool I've ever used (because it plays horseshoes)".

      • tomcam 4 hours ago

        Horseshoes? As in “close enough”?

        • ttul 3 hours ago

          Or, as in, “Ouch, man! You hit my foot!”

          • goykasi 3 hours ago

            As long as hand grenades arent introduced, I could live with that.

            • DanHulton 3 hours ago

              Honestly, I don't think "close only counts in horseshoes, hand grenades, and production code" will ever catch on...

    • simplyluke 3 hours ago

      This is exactly how I’ve used copilot for over a year now. It’s really helpful! Especially with repetitive code. Certainly worth what my employer pays for it.

      The general public has a very different idea of that though and I frequently meet people very surprised the entire profession hasn’t been automated yet based on headlines like this.

      • arisAlexis 2 hours ago

        Because you are using it like that doesn't mean that it can't be used for the whole stack, on its own. The public, including laymen such as the Nvidia CEO and Sam, think that yes, we (I'm a dev) will be replaced. Plan accordingly, my friend.

        • robertlagrant 2 hours ago

          > Because you are using it like that doesn't mean that it can't be used for the whole stack

          Well no, but we have no evidence it can be used for the whole stack, whatever that means.

          • arisAlexis an hour ago

            Even last year's gpt4 could make a whole iphone app from scratch for someone that doesn't know how to code. You can find videos online. I think you are applying the ostrich method which is understandable. We need to adapt.

            • papichulo2023 37 minutes ago

              Complexity increases over time. I can create new features in minutes for my new self-hosted projects; equivalent work at my enterprise job takes days...

              • arisAlexis 10 minutes ago

                New Gemini has a context window of millions of tokens. Think big and project out 1-2 years.

    • hgomersall an hour ago

      Before I go and rip out and replace my development workflow, is it notably better than auto complete suggestions from CoC in neovim (with say, rust-analyzer)? I'm generally pretty impressed how quickly it gives me the right function call or whatever, or it's the one of the top few.

    • karmasimida an hour ago

      I can totally see it.

      It is actually a testament that part of Google's code is ... kinda formulaic to some degree. Prior to the LLM takeover, we already heard praise for how Google's code search works wonders in helping its engineers write code; LLMs just brought that experience to the next level.

    • ghostpepper 5 hours ago

      Does it make you 25% more productive?

      • vundercind 5 hours ago

        Between the fraction of my time I spend actually writing code, and how much of the typing time I’m using to think anyway, I dunno how much of an increase in my overall productivity could realistically be achieved by something that just helped me type the code in faster. Probably not 25% no matter how fast it made that part. 5% is maybe possible, for something that made that part like 2-3x faster, but much more than that and it’d run up against a wall and stop speeding things up.

        • imchillyb 5 hours ago

          I imagine that those who cherished the written word thought similar thoughts when the printing press was invented, when the typewriter was invented, and before excel took over bookkeeping.

          My productivity isn't so much enhanced. It's only 1%... 2%... 5%... globally, for each employee.

          Have you ever dabbled with, mucked around in, a command line? Autocomplete functions there save millions of man-hour-typing-units per year. Something to think about.

          A single employee, in a single task, for a single location may not equal much gained productivity, but companies now think on much larger scales than a single office location.

          • moron4hire 4 hours ago

            This is a fallacy because there is no way to add up 1% savings across 100 employees into an extra full time employee.

            Work gets scheduled on short time frames. 5% savings isn't enough to change the schedule for any one person. At most, it gives me time to grab an extra coffee. I can't string together "foregone extra coffees" into "more tasks/days in the schedule".

            • robertlagrant 2 hours ago

              This. I had the same conversation years ago with someone who said "imagine if Windows booted 30s faster, all the productivity gains across the world!" And I said the same thing you did: people turn their computer on and then make a cup of tea.

              Now making a kettle faster? That might actually be something.

      • rustcleaner 4 hours ago

        If 25% of code was AI-written, wouldn't it be a 33[.333...]% increase in productivity?

        • PeterStuer 40 minutes ago

          It is not a direct correlation. I might write 80% of the lines of code in a week, then spend the next 6 months on the remaining 20%. If the AI was mostly helpful in that first week, the overall productivity gain would be very low.

        • card_zero 3 hours ago

          Not if there was also an 8.333̅% increase in slacking off.

          Wait, no. That should be based on how much slacking off Google employees do ordinarily, an unknown quantity.

          • saagarjha 2 hours ago

            You can just check Memegen traffic to figure that one out.

    • afro88 5 hours ago

      So more or less on par with continue.dev using a local starcoder2:3b model

    • hackerknew 6 hours ago

      I wondered if this is the real context, i.e. they are just referring to code completion as AI-generated code. But the article seems like it is referring to more than that?

    • nycdatasci 5 hours ago

      This is a great anecdote. SOTA models will not provide “engineering” per se, but they will easily double productivity of a product manager that is exploring new product ideas or technologies. They are much more than intelligent auto-complete. I have done more with side projects in the last year than I did in the preceding decade.

      • llm_trw 5 hours ago

        One of my friends put it best: I just did a month's worth of experimentation in two hours.

        • Sateeshm 4 hours ago

          I find this hard to believe. Can someone give me an example of something that takes months that AI can correctly do in hours?

          • jvanveen 4 hours ago

            Not hours, but days instead of months: porting a roughly 30k-line legacy LiveScript project to TypeScript. Most of the work is in tweaking a prompt for Claude (using Aider) so the porting process is done correctly.

            • cdchn 4 hours ago

              Thankfully it seems like AI is best at automating the most tedious and arguably most useless endeavor in software engineering- rewriting perfectly good code in whatever the language du jour is.

              • disgruntledphd2 2 hours ago

                Again, what AI is good at shows the revealed preferences of the training data, so it does make sense that it would excel at pointless rewrites.

    • ImaCake 6 hours ago

      This autocomplete seems about on par with github copilot. Do you also get options for prompting it on specific chunks of code and performing specific actions such as writing docs or editing existing code? All things that come standard with gh copilot now.

    • cryptica 3 hours ago

      This is my experience as well. LLMs are great to boost productivity, especially in the hands of senior engineers who have a deep understanding of what they're doing because they know what questions to ask, they know when it's safe to use AI-generated code and they know what issues to look for.

      In the hands of a junior, AI can create a false sense of confidence and it acts as a technical debt and security flaw multiplier.

      We should bring back the title "Software engineer" instead of "Software developer." Many people from other engineering professions look down on software engineers as "Not real engineers" but that's because they have the same perspective on coding as typical management types have. They think all code is equal, it's unavoidable spaghetti. They think software design and architecture doesn't matter.

      The problems a software engineer faces when building a software system are the same kinds of problems that a mechanical or electrical engineer faces when building any engine or system. It's about weighing up trade-offs and making a large number of nuanced technical decisions to ultimately meet operational requirements in the most efficient, cost-effective way possible.

    • Galatians4_16 5 hours ago

      Kerry said hi

  • ntulpule a day ago

    Hi, I lead the teams responsible for our internal developer tools, including AI features. We work very closely with Google DeepMind to adapt Gemini models for Google-scale coding and other Software Engineering usecases. Google has a unique, massive monorepo which poses a lot of fun challenges when it comes to deploying AI capabilities at scale.

    1. We take a lot of care to make sure the AI recommendations are safe and have a high quality bar (regular monitoring, code provenance tracking, adversarial testing, and more).

    2. We also do regular A/B tests and randomized control trials to ensure these features are improving SWE productivity and throughput.

    3. We see similar efficiencies across all programming languages and frameworks used internally at Google, and engineers across all tenure and experience cohorts show similar gains in productivity.

    You can read more on our approach here:

    https://research.google/blog/ai-in-software-engineering-at-g...

    • hitradostava a day ago

      I'm continually surprised by the amount of negativity that accompanies these sort of statements. The direction of travel is very clear - LLM based systems will be writing more and more code at all companies.

      I don't think this is a bad thing - if this can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss and everyone has examples of LLMs producing buggy or ridiculous code. But once the tooling improves to:

      1. align produced code better to existing patterns and architecture

      2. fix the feedback loop - with TDD, other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc. (a rough sketch of such a loop is at the end of this comment)

      Then we will definitely start seeing more and more code produced by LLMs. Don't look at the state of the art now, look at the direction of travel.
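
      A rough sketch of what such a feedback loop could look like (the generate() callable here is a stand-in for whatever model API is used; nothing in this snippet is a specific product's interface):

        import subprocess

        def generate_with_feedback(generate, test_cmd, max_rounds=3):
            """generate: callable(prompt) -> source text; test_cmd: e.g. ["pytest", "-q"]."""
            prompt = "Write candidate.py as specified."
            for _ in range(max_rounds):
                source = generate(prompt)
                with open("candidate.py", "w") as f:
                    f.write(source)
                result = subprocess.run(test_cmd, capture_output=True, text=True)
                if result.returncode == 0:
                    return source  # it compiles and the tests pass
                # Feed the compile/test errors back in, as described above.
                prompt = "The previous attempt failed with:\n" + result.stdout + result.stderr + "\nPlease fix it."
            return None  # give up after max_rounds and hand back to a human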

      • latexr a day ago

        > if this can be accompanied by an increase in software quality

        That’s a huge “if”, and by your own admission not what’s happening now.

        > other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.

        What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

        > Then we will definitely start seeing more and more code produced by LLMs.

        We’re already there. And there’s a lot of bad code being pumped out. Which will in turn be fed back to the LLMs.

        > Don't look at the state of the art now, look at the direction of travel.

        That’s what leads to the eternal “in five years” which eventually sinks everyone’s trust.

        • danielmarkbruce 13 hours ago

          > What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

          Humans are machines which make errors. Somehow, we got to the moon. The suggestion that errors just mindlessly compound and that there is no way around it, is what's stupid.

          • latexr 11 hours ago

            > Humans are machines

            Even if we accept the premise (seeing humans as machines is literally dehumanising and a favourite argument of those who exploit them), not all machines are created equal. Would you use a bicycle to file your taxes?

            > Somehow, we got to the moon

            Quite hand wavey. We didn’t get to the Moon by reading a bunch of text from the era then probabilistically joining word fragments, passing that around the same funnel a bunch of times, then blindly doing what came out, that’s for sure.

            > The suggestion that errors just mindlessly compound and that there is no way around it

            Is one that you made up, as that was not my argument.

            • danielmarkbruce 8 hours ago

              LLMs are a lot better at a lot of things than a lot of humans.

              We got to the moon using a large number of systems to a) avoid errors where possible and b) build in redundancies. Even an LLM knows this and knew what the statement meant:

              https://chatgpt.com/share/6722e04f-0230-8002-8345-5d2eba2e7d...

              Putting "corrected" in quotes and saying "death spiral" implies error compounding.

              https://chatgpt.com/share/6722e19c-7f44-8002-8614-a560620b37...

              These LLMs seem so smart.

              • philipwhiuk 8 hours ago

                > LLMs are a lot better at a lot of things than a lot of humans.

                Sure, I'm a really poor painter, Midjourney is better than me. Are they better than a human trained for that task, on that task? That's the real question.

                And I reckon the answer is currently no.

                • danielmarkbruce 7 hours ago

                  The real question is can they do a good enough job quickly and cheaply to be valuable. ie, quick and cheap at some level of quality is often "better". Many people are using them in the real world because they can do in 1 minute what might take them hours. I personally save a couple hours a day using ChatGPT.

          • kelnos 10 hours ago

            The difference is that when we humans learn from our errors, we learn how to make them less often.

            LLMs get their errors fed back into them and become more confident that their wrong code is right.

            I'm not saying that's completely unsolvable, but that does seem to be how it works today.

            • danielmarkbruce 8 hours ago

              That isn't the way they work today. LLMs can easily find errors in outputs they themselves just produced.

              Start adding different prompts, different models and you get all kinds of ways to catch errors. Just like humans.

              • Lio an hour ago

                I don’t think LLMs can easily find errors in their output.

                There was a recent meme about asking LLMs to draw a wineglass full to the brim with wine.

                Most really struggle with that instruction. No matter how much you ask them to correct themselves they can’t.

                I’m sure they’ll get better with more input but what it reveals is that right now they definitely do not understand their own output.

                I’ve seen no evidence that they are better with code than they are with images.

                For instance, if the time to complete only scales with the length of the output and not the complexity of its contents, then it's probably safe to assume it's not being comprehended.

              • lomase 40 minutes ago

                You said humans are machines that make errors and that LLMs can easily find errors in output they themselves produce.

                Are you sure you wanted to say that? Or is the other way around?

              • 0points an hour ago

                > LLMs can easily find errors in outputs they themselves just produced.

                Really? That must be a very recent development, because so far this has been a reason for not using them at scale. And no one is.

                Do you have a source?

              • philipwhiuk 8 hours ago

                > LLMs can easily find errors in outputs they themselves just produced.

                No. LLMs can be told that there was an error and produce an alternative answer.

                In fact LLMs can be told there was an error when there wasn't one and produce an alternative answer.

          • reverius42 12 hours ago

            To err is human. To err at scale is AI.

            • cetu86 2 hours ago

              I fear that we'll see a lot of humans err at scale next Tuesday. Global warming is another example of human error at scale.

            • fuzztester 4 hours ago

              err, "hallucinate" is the euphemism you're looking for. ;)

              • arkh an hour ago

                I don't like the use of "hallucinate". It implies that LLMs have some kind of model of reality and sometimes get confused. They don't have any kind of model of anything; they cannot "hallucinate", they can only output wrong results.

            • danielmarkbruce 8 hours ago

              To err at scale isn't unique to AI. We don't say "no software, it can err at scale".

              • trod123 5 hours ago

                It is by will alone that I set my mind in motion.

                It is by the juice of Sapho that thoughts acquire speed, the lips become stained, the stains become a warning...

          • goatlover 2 hours ago

            Machines are intelligently designed for a purpose. Humans are born and grow up, have social lives, a moral status and are conscious, and are ultimately the product of a long line of mindless evolution that has no goals. Biology is not design. It's way messier.

          • nuancebydefault 12 hours ago

            Exactly my thought. Humans can correct humans. Machines can correct, or at least point to failures in the product of, machines.

      • spockz 4 hours ago

        My main gripe with this form of code generation is that is primarily used to generate “leaf” code. Code that will not be further adjusted or refactored into the right abstractions.

        It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.

        In the hands of a skilled engineer it is a good tool. But for the rest it mainly serves to output more garbage at a higher rate.
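
        For concreteness, a minimal sketch of the centralised alternative (a hypothetical shared validators module, instead of each controller growing its own ad-hoc regex):

          # validators.py - the one place the (deliberately simple) email rule lives.
          import re

          _EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

          def is_valid_email(value: str) -> bool:
              # Controllers call this instead of pasting their own regex,
              # so tightening the rule later is a one-place change.
              return bool(_EMAIL_RE.match(value))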

        • cdchn 4 hours ago

          > It is now very easy to sprinkle in regexes to validate user input, like email addresses, on every controller instead of using a central lib/utility for that.

          Some people are touting this as a major feature. "I don't have to pull in some dependency for a minor function - I can just have AI write that simple function for me." I, personally, don't see this as a net positive.

          • spockz 3 hours ago

            Yes, I have heard similar arguments before. It could be an argument for including the functionality in the standard lib for the language. There can be a long debate about dependencies, and then there is still the benefit of being able to vendor and prune them.

            The way it is now just leads to bloat and cruft.

      • paradox242 13 hours ago

        I don't see how this is sustainable. We have essentially eaten the seed corn. These current LLMs have been trained on an enormous corpus of mostly human-generated technical knowledge, from sources which we already know are being polluted by AI-generated slop. We also have preliminary research into how poorly these models do when trained on data generated by other LLMs. Sure, they can coast off that initial training set for maybe 5 or more years, but where will the next giant set of unpolluted training data come from? I just don't see it, unless we get something better than LLMs which is closer to AGI, or an entire industry is created to explicitly produce curated training data to be fed to future models.

        • _DeadFred_ 13 hours ago

          These tools also require the developer class that they are intended to replace to continue doing what they currently do (creating the knowledge source to train the AI on). It's not like the AIs are going to be creating the accessible knowledge bases to train AIs on, especially for new language extensions/libraries/etc. This is a one and f'd development. It will give a one-time gain, and then companies will be shocked when it falls apart and there are no developers trained up (because they all had to switch careers) to replace them. Unless Google's expectation is that all languages/development/libraries will just be static going forward.

          • layer8 10 hours ago

            One of my concerns is that AI may actually slow innovation in software development (tooling, languages, protocols, frameworks and libraries), because the opportunity cost of adopting them will increase, if AI remains unable to be taught new knowledge quickly.

            • mathw 29 minutes ago

              It also bugs me that these tools will reduce the incentive to write better frameworks and language features if all the horrible boilerplate is just written by an LLM for us rather than finding ways to design systems which don't need it.

              The idea that our current languages might be as far as we get is absolutely demoralising. I don't want a tool to help me write pointless boilerplate in a bad language, I want a better language.

            • batty_alex 7 hours ago

              This is my main concern. What's the point of other tools when none of the LLMs have been trained on them and you need to deliver yesterday?

              It's an insanely conservative tool

            • jamil7 3 hours ago

              You already see this if you use a language outside of Python, JS or SQL.

            • wahnfrieden 6 hours ago

              that is solved via larger contexts

              • layer8 4 hours ago

                It’s not, unless contexts get as large as comparable training materials. And you’d have to compile adequate materials. Clearly, just adding some documentation about $tool will not have the same effect as adding all the gigabytes of internet discussion and open source code regarding $tool that the model would otherwise have been trained on. This is similar to handing someone documentation and immediately asking questions about the tool, compared to asking someone who had years of experience with the tool.

                Lastly, it’s also a huge waste of energy to feed the same information over and over again for each query.

          • 0points an hour ago

            Yea, I'm thinking along the same lines.

            The companies valuing the expensive talent currently working at Google will be the winners.

            Google and others are betting big right now, but I feel the winners might be those who watch how it unfolds first.

        • brainwad 13 hours ago

          The LLM codegen at Google isn't unsupervised. It's integrated into the IDE as both autocomplete and prompt-based assistant, so you get a lot of feedback from a) what suggestions the human accepts and b) how they fix the suggestion when it's not perfect. So future iterations of the model won't be trained on LLM output, but on a mixture of human written code and human-corrected LLM output.

          As a dev, I like it. It speeds up writing easy but tedious code. It's just a bit smarter version of the refactoring tools already common in IDEs...

          • kelnos 10 hours ago

            What about (c) the human doesn't realize the LLM-generated code is flawed, and accepts it?

            • monocasa 5 hours ago

              I mean what happens when a human doesn't realize the human generated code is wrong and accepts the PR and it becomes part of the corpus of 'safe' code?

              • jaredsohn 5 hours ago

                Presumably someone will notice the bug in both of these scenarios at some point and it will no longer be treated as safe.

        • loki-ai 6 hours ago

          Maybe most of the code in the future will be very different from what we're used to. For instance, AI image processing/computer vision algorithms are being adopted very quickly given that the best ones are now mostly transformer networks.

      • philipwhiuk 8 hours ago

        > The direction of travel is very clear

        And if we get 9 women we can produce a baby in a single month.

        There's no guarantee such progression will continue. Indeed, there's much more evidence it is coming to a halt.

        • Towaway69 2 hours ago

          It might also be an example of 80/20 - we're just entering the 20% of features that take 80% of the time & effort.

          It might be possible, but will shareholders/investors foot the bill for the 80% of the effort that still has to be paid for?

        • farseer 3 hours ago

          It's not even been 2 years, and you think things are coming to a halt?

          • 0points an hour ago

            Yes. The models require training data, and they have already been fed the internet.

            More and more of the content generated since is LLM generated and useless as training data.

            The models get worse, not better by being fed their own output, and right now they are out of training data.

            This is why Reddit just went profitable: AI companies buy its text to train their models because it is at least somewhat human written.

            Of course, even reddit is crawling with LLM generated text, so yes. It is coming to a halt.

          • simianparrot 2 hours ago

            I know for a fact they are, because the rate _and_ quality of improvement are diminishing exponentially. I keep a close eye on this field as part of my job.

      • olalonde 2 hours ago

        > I'm continually surprised by the amount of negativity

        Maybe I'm just old, but to me, LLMs feel like magic. A decade ago, anyone predicting their future capabilities would have been laughed at.

        • guappa an hour ago

          Nah, you just were not up to speed with the current research. Which is completely normal. Now marketing departments are on the job.

          • davedx an hour ago

            Transformers were proposed in 2017. A decade ago none of this was predictable.

        • Towaway69 2 hours ago

          Magic Makes Money - the more magical something seems, the more people are willing to pay for that something.

          The discussion here seems to bear this out: the CEO claims AI is magical, while the truth turns out to be that it's just an auto-complete engine.

      • 0points 2 hours ago

        > LLM based systems will be writing more and more code at all companies.

        At Google, today, for sure.

        I do believe we still are not across the road on this one.

        > if this can be accompanied by an increase in software quality, which is possible. Right now it's very hit and miss

        So, is it really a smart move for Google to enforce this today, before quality has increased? Or has this set them on a path to losing market share because their software quality will deteriorate further over the next couple of years?

        From the outside it just seems Google and others have no choice, they must walk this path or lose market valuation.

      • mmmpetrichor 4 hours ago

        That's the hype isn't it. The direction of travel hasn't been proven to be more than a surface level yet.

      • randomNumber7 12 hours ago

        Because there seems to be a fundamental misunderstanding producing a lot of nonsense.

        Of course LLMs are a fantastic tool to improve productivity, but current LLM's cannot produce anything novel. They can only reproduce what they have seen.

        • visarga 4 hours ago

          But they assist developers and collect novel coding experience from their projects all the time. Each application of an LLM creates feedback on the AI code - the human might leave it as is, slightly change it, or refuse it.

      • baxtr 6 hours ago

        I think that at least partially the negativity is due to the tech bros hyping AI just like they hyped crypto.

      • fallingknife 6 hours ago

        I'm not really seeing this direction of travel. I hear a lot of claims, but they are always 3rd person. I don't know or work with any engineers who rely heavily on these tools for productivity. I don't even see any convincing videos on YouTube. Just show me one engineer sitting down with these tools for a couple of hours and writing a feature that would normally take a couple of days. I'll believe it when I see it.

        • Roark66 2 hours ago

          Well, I rely on it a lot, but not in the IDE; I copy/paste my code and prompts between the IDE and the LLM. By now I have a library of prompts in each project that I can tweak and reuse. It makes me 25% up to 50% faster. Does this mean every project is done in 50/75% of the time? No, the actual completion time is maybe 10% faster, but I do get a lot more time to spend on thinking about the overall design instead of writing boilerplate and reading reference documents.

          Why no YouTube videos though? Well, most dev YouTubers are actual devs who cultivate an image of "I'm faster than the LLM, I never re-read library references, I memorise them on first read" and so on. If they then showed you a video of how they forgot the syntax for this or that Maven plugin config, and how the LLM filled it in in 10s instead of a 5-minute Google search, that would make them look less capable on their own. Why would they do that?

        • fuzztester 4 hours ago

          you said it, bro.

    • reverius42 a day ago

      To me the most interesting part of this is the claim that you can accurately and meaningfully measure software engineering productivity.

      • ozim a day ago

        You can - but not on the level of a single developer and you cannot use those measures to manage productivity of a specific dev.

        For teams you can measure meaningful outcomes and improve team metrics.

        You shouldn’t really compare teams but it also is possible if you know what teams are doing.

        If you are some disconnected manager that thinks he can make decisions or improvements reducing things to single numbers - yeah that’s not possible.

        • deely3 21 hours ago

          > For teams you can measure meaningful outcomes and improve team metrics.

          How? Which metrics?

          • neaanopri 6 hours ago

            There's only one metric that matters at the end of the day, and that's $. Revenue.

            Unfortunately there's a lot of lag

            • ImaCake 6 hours ago

              > Unfortunately there's a lot of lag

              A great generalisation and understatement! Often looking like you are becoming more efficient is more important than actually being more efficient, e.g you need to impress investors. So you cut back on maintenance and other cost centres and the new management can blame you in 6 years time for it when you are far enough away from it to not hurt you.

            • fuzztester 4 hours ago

              s/Revenue/profit/g

          • anthonyskipper 13 hours ago

            My company uses the Dora metrics to measure the productivity of teams and those metrics are incredibly good.

          • ozim 20 hours ago

            That is what we pay managers to figure out. They should find out which and how by knowing the team, being familiar with the domain, understanding company dynamics, understanding the customer, and understanding market dynamics.

            • seanmcdirmid 13 hours ago

              That's basically a non-answer. Measuring "productivity" is a well known hard problem, and managers haven't really figured it out...

              • mdorazio 12 hours ago

                It's not a non-answer. Good managers need to figure out what metrics make sense for the team they are managing, and that will change depending on the company and team. It might be new features, bug fixes, new product launch milestones, customer satisfaction, ad revenue, or any of a hundred other things.

                • seanmcdirmid 12 hours ago

                  I would want a specific example in that case rather than "the good managers figure it out" because in my experience, the bad managers pretend to figure it out while the good managers admit that they can't figure it out. Worse still, if you tell your reports what those metrics are, they will optimize them to death, potentially tanking the product (I can increase my bug fix count if there are more bugs to fix...).

                  • ozim 11 hours ago

                    So for a specific example I would have to outline 1-2 years of history of a team and product as a starter.

                    Then I would have to go on outlining 6-12 months of trying stuff out.

                    Because if I just give "an example" I will get dozens of "smart ass" replies how this specific one did not work for them and I am stupid. Thanks but don't have time for that or for writing an essay that no one will read anyway and call me stupid or demand even more explanation. :)

                    • seanmcdirmid 8 hours ago

                      I get it, you are a true believer. I just disagree with your belief, and the fact that you can't bring credible examples to the table just reinforces that disagreement in my mind.

                • randomNumber7 12 hours ago

                  I heard lines of code is a hot one.

                • hshshshshsh 10 hours ago

                  So basically you have nothing useful to say?

                  • ozim 26 minutes ago

                    I have to say that there is no solution that will work for "every team on every product".

                    That seems useful to understand and internalize: there are no simple answers like "use story points!".

                    There are also loads of people who don't understand that, so I stand by it being useful and important to repeat on every possible occasion.

              • yorwba 12 hours ago

                Economists are generally fine with defining productivity as the ratio of aggregate outputs to aggregate inputs.

                Measuring it is not the hard part.

                The hard part is doing anything about it. If you can't attribute specific outputs to specific inputs, you don't know how to change inputs to maximize outputs. That's what managers need to do, but of course they're often just guessing.

                • seanmcdirmid 12 hours ago

                  Measuring human productivity is hard since we can't quantify output beyond silly metrics like lines of code written or amount of time speaking during meetings. Maybe if we were hunter/gatherers we could measure it by amount of animals killed.

                  • js8 2 hours ago

                    > Maybe if we were hunter/gatherers we could measure it by amount of animals killed.

                    So that's how animal husbandry came about!

                  • ozim 11 hours ago

                    Well I pretty much see which team members are slacking and which are working hard.

                    But I do code myself, I write requirements so I do know which ones are trivial and which ones are not. I also see when there are complex migrations.

                    If you work in a group of people you will also get feedback - it doesn't have to be snitching, but you still get a feel for who the slacker in the group is.

                    It is hard to quantify the output if you want to be a removed-from-the-group "give me a number" manager. If you actually do the work of a manager, you get a feel for the group: you see who is the "Hermione Granger" nagging that others are slacking (and can disregard their opinion), who is the "silent doer", and who is the "we should do it properly" bullshitter. Then you can make a lot of meaningful adjustments.

                  • yorwba 12 hours ago

                    That's why upthread we have https://news.ycombinator.com/item?id=41992562

                    "You can [accurately and meaningfully measure software engineering productivity] - but not on the level of a single developer and you cannot use those measures to manage productivity of a specific dev."

                    At the level of a company like Google, it's easy: both inputs and outputs are measured in terms of money.

                    • ozim 11 hours ago

                      As you point back to my comment.

                      I am not an Amazon person - but from my experience, 2-pizza teams were what worked; I never implemented it myself, it's just what I observed in the wild.

                      Measuring Google in terms of money is also flawed; there is loads of BS hidden there and lots of people paying big companies more just because they are big companies.

            • beefnugs 7 hours ago

              haha that is not what managers do. Managers follow their KPIs exactly. If their KPIs say they get paid a bonus if profit goes up, then the manager does smart number stuff and sees "if we fire 15% of employees this year, my pay goes up 63%", and then that happens

            • hshshshshsh 10 hours ago

              That sounds like a micro manager. I would imagine good engineers can figure out something for themselves.

      • UncleMeat a day ago

        At scale you can do this in a bunch of interesting ways. For example, you could measure "amount of time between opening a crash log and writing the first character of a new change" across 10,000s of engineers. Yes, each individual data point is highly messy. Alice might start coding as a means of investigation. Bob might like to think about the crash over dinner. Carol might get a really hard bug while David gets a really easy one. But at scale you can see how changes in the tools change this metric.

        None of this works to evaluate individuals or even teams. But it can be effective at evaluating tools.
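
        As a toy sketch of that kind of aggregate comparison (the durations below are invented; only the shape of the analysis matters):

          import statistics

          def summarize(minutes):
              # Individual durations are noisy; medians and high percentiles
              # across many thousands of events are what you actually compare.
              return {"median": statistics.median(minutes),
                      "p90": statistics.quantiles(minutes, n=10)[-1],
                      "n": len(minutes)}

          # Hypothetical "crash log opened -> first keystroke" durations, in minutes.
          control = [12, 45, 7, 30, 22, 90, 15]
          with_new_tool = [10, 38, 6, 25, 20, 70, 12]
          print(summarize(control), summarize(with_new_tool))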

        • fwip 12 hours ago

          There's lots of stuff you can measure. It's not clear whether any of it is correlated with productivity.

          To use your example, a user with an LLM might say "LLM please fix this" as a first line of action, drastically improving this metric, even if it ruins your overall productivity.

      • valval a day ago

        You can come up with measures for it and then watch them, that’s for sure.

        • lr1970 20 hours ago

          When a metric becomes the target, it ceases to be a good metric. Once they discover how it works, developers will type the first character immediately after opening the log.

          edit: typo

          • joshuamorton 12 hours ago

            Only if the developer is being judged on the thing. If the tool is being judged on the thing, it's much less relevant.

            That is, I, personally, am not measured on how much AI generated code I create, and while the number is non-zero, I can't tell you what it is because I don't care and don't have any incentive to care. And I'm someone who is personally fairly bearish on the value of LLM-based codegen/autocomplete.

    • nycdatasci 5 hours ago

      You mention safety as #1, but my impression is that Google has taken a uniquely primitive approach to safety with many of their models. Instead of influencing the weights of the core model, they check core model outputs with a tiny and much less competent "safety model". This approach leads to things like a text-to-image model that refuses to output images when a user asks to generate "a picture of a child playing hopscotch in front of their school, shot with a Sony A1 at 200 mm, f2.8". Gemini has a similar issue: it will stop mid-sentence, erase its entire response and then claim that something is likely offensive and it can't continue.

      The whole paradigm should change. If you are indeed responsible for developer tools, I would hope that you're actively leveraging Claude 3.5 Sonnet and o1-preview.

    • LinuxBender 21 hours ago

      Is AI ready to crawl through all open source and find / fix all the potential security bugs or all bugs for that matter? If so will that become a commercial service or a free service?

      Will AI be able to detect bugs and back doors that require multiple pieces of code working together rather than being in a single piece of code? Humans have a hard time with this.

      - Hypothetical Example: Authentication bugs in sshd that requires a flaw in systemd which then requires a flaw in udev or nss or PAM or some underlying library ... but looking at each individual library or daemon there are no bugs that a professional penetration testing organization such as the NCC group or Google's Project Zero would find. In other words, will AI soon be able to find more complex bugs in a year than Tavis has found in his career and will they start to compete with one another and start finding all the state sponsored complex bugs and then ultimately be able to create a map that suggests a common set of developers that may need to be notified? Will there be a table that logs where AI found things that professional human penetration testers could not?

      • 0points an hour ago

        No, that would require AGI. Actual reasoning.

        Adversaries are already detecting issues tho, using proven means such as code review and fuzzing.

        Google project zero consists of a team of rock star hackers. I don't see LLM even replacing junior devs right now.

      • paradox242 13 hours ago

        Seems like there is more gain on the adversary side of this equation. Think nation-states like North Korea or China, and commercial entities like Pegasus Group.

        • saagarjha an hour ago

          FWIW: NSO is the group, Pegasus is their product

        • AnimalMuppet 12 hours ago

          Google's AI would have the advantage of the source code. The adversaries would not. (At least, not without hacking Google's code repository, which isn't impossible...)

    • bcherny 7 hours ago

      How are you measuring productivity? And is the effect you see in A/B tests statistically significant? Both of these were challenging to do at Meta, even with many thousands of engineers -- curious what worked for you.

    • assanineass 10 hours ago

      Was this comment cleared by comms

    • bogwog 12 hours ago

      Is any of the AI generated code being committed to Google's open source repos, or is it only being used for private/internal stuff?

    • hshshshshsh 13 hours ago

      Seems like everything is working out without any issues. Shouldn't you be a bit suspicious?

    • wslh 12 hours ago

      As someone working in cybersecurity and actively researching vulnerability scanning in codebases (including with LLMs), I’m struggling to understand what you mean by “safe.” If you’re referring to detecting security vulnerabilities, then you’re either working on a confidential project with unpublished methods, or your approach is likely on par with the current state of the art, which primarily addresses basic vulnerabilities.

    • mysterydip 20 hours ago

      I assume the amount of monitoring effort is less than the amount of effort that would be required to replicate the AI generated code by humans, but do you have numbers on what that ROI looks like? Is it more like 10% or 200%?

    • fhdsgbbcaA a day ago

      I’ve been thinking a lot lately about how an LLM trained on really high quality code would perform.

      I’m far from impressed with the output of GPT/Claude; all they’ve done is weight the model toward Stack Overflow - which is still low quality code relative to Google’s.

      What is the probability Google makes this a real product, or is it too likely to autocomplete trade secrets?

    • Twirrim 13 hours ago

      > We work very closely with Google DeepMind to adapt Gemini models for Google-scale coding and other Software Engineering usecases.

      Considering how terrible and frequently broken the code that the public-facing Gemini produces is, I'll have to be honest: that kind of scares me.

      Gemini frequently fails at some fairly basic stuff, even in popular languages where it would have had a lot of source material to work from; where other public models (even free ones) sail through.

      To give a fun, fairly recent example, here's a prime factorisation algorithm it produced for Python:

        # Find the prime factorization of n
        prime_factors = []
        while n > 1:
          p = 2
          while n % p == 0:
            prime_factors.append(p)
            n //= p
          p += 1
        prime_factors.append(n)
      
      Can you spot all the problems?
      • kgeist 12 hours ago

        They probably use AI for writing tests, small internal tools/scripts, building generic frontends and quick prototypes/demos/proofs of concept. That could easily be that 25% of the code. And modern LLMs are pretty okayish with that.

      • gerash 12 hours ago

        I believe most people use AI to help them quickly figure out how to use a library or an API without having to read all their (often outdated) documentation, rather than to help them solve some mathematical challenge.

        • taeric 12 hours ago

          If the documentation is out of date, such that it doesn't help, this doesn't bode well for the training data of the AI helping it get it right, either?

          • macintux 12 hours ago

            AI can presumably integrate all of the forum discussions talking about how people really use the code.

            Assuming discussions don't happen in Slack, or Discord, or...

            • woodson 11 hours ago

              Unfortunately, it often hallucinates wrong parameters (or gets their order wrong) if there are multiple different APIs for similar packages. For example, there are plenty of ML model inference packages, and the code suggestions for NVIDIA Triton Inference Server Python code are pretty much always wrong, as it generates code that’s probably correct for other Python ML inference packages with slightly different APIs.

            • jon_richards 5 hours ago

              I often find the opposite. Documentation can be up to date, but AI suggests deprecated or removed functions because there’s more old code than new code. Pgx v5 is a particularly consistent example.

            • randomNumber7 12 hours ago

              And all the code on which it was trained...

        • delfinom 8 hours ago

          I've never had an AI not just make up an API when it didn't exist, instead of saying "it doesn't exist". Lol

        • randomNumber7 12 hours ago

          I think that too but google claims something else.

      • calf 12 hours ago

        We are sorely lacking a "Make Computer Science a Science" movement, the tech lead's blurb is par for the course, talking about "SWE productivity" with no reference to scientific inquiry and a foundational understanding of safety, correctness, verification, validation of these new LLM technologies.

        • almostgotcaught 10 hours ago

          Did you know that Google is a for-profit business and not a university? Did you know that most places where people work on software are the same?

          • zifpanachr23 11 minutes ago

            So are most medical facilities. Somehow, the vibes are massively different.

          • calf 13 minutes ago

            Did you know that Software Engineering is a university level degree? That it is a field of scientific study, with professors who dedicate their lives to it? What happens when companies ignore science and worse yet cause harm like pollution or medical malpractice, or in this case, spread Silicon Valley lies and bullshit???

            Did you know? How WEIRD.

            How about you not harass other commenters with such arrogantly ignorant sarcastic questions?? Or is that part of corporate "for-profit" culture too????

      • justinpombrio 11 hours ago

        > Can you spot all the problems?

        You were probably being rhetorical, but there are two problems:

        - `p = 2` should be outside the loop

        - `prime_factors.append(n)` appends `1` onto the end of the list for no reason

        With those two changes I'm pretty sure it's correct.
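
        For reference, here's a sketch with just those two fixes applied, wrapped in a `factorize(n)` function so it's self-contained (the original snippet assumed `n` was defined elsewhere):

            def factorize(n):
                # Trial division; p starts at 2 and only ever increases.
                prime_factors = []
                p = 2                      # fix 1: initialize p outside the outer loop
                while n > 1:
                    while n % p == 0:
                        prime_factors.append(p)
                        n //= p
                    p += 1
                return prime_factors       # fix 2: don't append the leftover 1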

        • kenjackson 7 hours ago

          You don't need to append 'p' in the inner while loop more than once. Maybe instead of an array for keeping the list of prime factors, use a set.

          • zeroonetwothree 5 hours ago

            It’s valid to return the multiplicity of each prime, depending on the goal of this.

        • rmbyrro 10 hours ago

          `n` isn't defined

          • justinpombrio 8 hours ago

            The implicit context that the poster removed (as you can tell from the indentation) was a function definition:

                def factorize(n):
                  ...
                  return prime_factors
      • ijidak 9 hours ago

        I'm the first to say that AI will not replace human coders.

        But I don't understand this attempt to tell companies/persons that are successfully using AI that no they really aren't.

        In my opinion, if they feel they're using AI successfully, the goal should be to learn from that.

        I don't understand this need to tell individuals who say they are successfully using AI that, "no you aren't."

        It feels like a form of denial.

        Like someone saying, "I refuse to accept that this could work for you, no matter what you say."

      • senko 12 hours ago

        We collectively deride leetcoding interviews yet ask AI to flawlessly solve leetcode questions.

        I bet I'd make more errors on my first try at it.

        • AnimalMuppet 12 hours ago

          Writing a prime-number factorization function is hardly "leetcode".

          • senko 11 hours ago

            I didn't say it's hard, but it's most definitely leetcode, as in "pointless algorithmic exercise that will only show you if the candidate recently worked on a similar question".

            If that doesn't satisfy, here's a similar one at leetcode.com: https://leetcode.com/problems/distinct-prime-factors-of-prod...

            I would not expect a programmer of any seniority to churn out stuff like that and have it working without testing.

            • AnimalMuppet 11 hours ago

              > "pointless algorithmic exercise that will only show you if the candidate recently worked on a similar question".

              I've been able to write one, not from memory but from first principles, any time in the last 40 years.

              • senko 10 hours ago

                Curious, I would expect a programmer of your age to remember Knuth's "Beware of bugs in the above code; I have only proved it correct, not tried it".

                I'm happy you know math, but my point before this thread got derailed was that we're holding (coding) AI to a higher standard than actual humans, namely expecting it to write bug-free code.

                • 0points 38 minutes ago

                  > my point before this thread got derailed was that we're holding (coding) AI to a higher standard than actual humans, namely to expect to write bug-free code

                  This seems like a very layman attitude and I would be surprised to find many devs adhering to this idea. Comments in this thread alone suggest that many devs on HN do not agree.

                • smrq 5 hours ago

                  I hold myself to a higher standard than AI tools are capable of, from my experience. (Maybe some people don't, and that's where the disconnect is between the apologists and the naysayers?)

                • Jensson 6 hours ago

                  Humans can actually run the code and know what it should output. The LLM can't, and putting it in a loop against the code output doesn't work well either, since the LLM can't navigate that well.

          • atomic128 12 hours ago

            Empirical testing (for example: https://news.ycombinator.com/item?id=33293522) has established that the people on Hacker News tend to be junior in their skills. Understanding this fact can help you understand why certain opinions and reactions are more likely here. Surprisingly, the more skilled individuals tend to be found on Reddit (same testing performed there).

            • louthy 12 hours ago

              I’m not sure that’s evidence; I looked at that and saw it was written in Go and just didn’t bother. As someone with 40 years of coding experience and a fundamental dislike of Go, I didn’t feel the need to even try. So the numbers can easily be skewed, surely.

              • atomic128 11 hours ago

                Only individuals who submitted multiple bad solutions before giving up were counted as failing. If you look but don't bother, or submit a single bad solution, you aren't counted. Thousands of individuals were tested on Hacker News and Reddit, and surprisingly, it's not even close: Reddit is where the hackers are. I mean, at the time of the testing, years ago.

                • louthy 11 hours ago

                  That doesn’t change my point. It didn’t test every dev on all platforms, it tested a subset. That subset may well have different attributes to the ones that didn’t engage. So, it says nothing about the audience for the forums as a whole, just the few thousand that engaged.

                  Perhaps even, there could be fewer Go programmers here and some just took a stab at it even though they don’t know the language. So it could just select for which forum has the most Go programmers. Hardly rigorous.

                  So I’d take that with a pinch of salt personally

                  • atomic128 11 hours ago

                    Agreed. But remember, this isn't the only time the population has been tested. This is just the test (from two years ago, in 2022) that I happen to have a link to.

                    • louthy 11 hours ago

                      The population hasn’t been tested. A subset has.

                      • 59nadir 2 hours ago

                        It's also fine to be an outlier. I've been programming for 24 years and have been hanging out on HackerNews on and off for 11. HN was way more relevant to me 11 years ago than it is now, and I don't think that's necessarily only because the subject matter changed, but probably also because I have.

            • freilanzer 16 minutes ago

              Yeah, this is useless.

            • 0xDEAFBEAD 11 hours ago

              Where is the data?

            • Izikiel43 11 hours ago

              How is that thing testing? Is it expecting a specific solution or actually running the code? I tried some solutions and it complained anyways

              • atomic128 11 hours ago

                The way the site works is explained in the first puzzle, "Hack This Site". TLDR, it builds and runs your code against a test suite. If your solutions weren't accepted, it's because they're wrong.

  • yangcheng 4 hours ago

    Having worked at both FAANG companies and startups, I can offer a perspective on AI's coding impact in different environments. At startups, engineers work with new tech stacks, start projects from scratch, and need to ship something quickly. LLMs can write way more code. I've seen ML engineers build React frontends without any previous frontend experience, and Flutter developers write 100-line SQL queries for data analysis, with LLMs giving a 10x productivity boost for this type of work. At FAANG companies, codebases contain years of business logic, edge cases, and 'not-bugs-but-features.' Engineers know their tech stacks well, and legacy constraints make LLMs less effective; they can generate wrong code that then needs to be fixed.

  • ttul 3 hours ago

    I wanted a new feature in our customer support console and the dev lead suggested I write a JIRA. I’m the CEO, so this is not my usual thing (and probably should not be). I told Claude what I wanted and pasted in a .js file from the existing project so that it would get a sense of the context. It cranked out a fully functional React component that actually looks quite nice too. Two new API calls were needed, but Claude helpfully told me that. So I pasted the code sample and a screenshot of the HTML output into the JIRA and then got Claude to write me the rest of the JIRA as well.

    Everyone knows this was “made by AI” because there’s no way in hell I would ever have the time. These models might not be able to sit there and build an entire project from scratch yet, but if what you need is some help adding the next control panel page, Claude’s got your back on that.

    • simianparrot 2 hours ago

      You’re also the CEO so chances are the people looking at that ticket aren’t going to tell you the absolute mess the AI snippet actually is and how pointless it was to include it instead of a simple succinct sentence explaining the requirements.

      If you’re not a developer chances are very high the code it produces will look passable but is actually worthless — or worse, it’s misleading and now a dev has to spend more time deciphering the task.

    • JonChesterfield 23 minutes ago

      > Everyone knows this was “made by AI” because there’s no way in hell I would ever have the time.

      Doubtful. A decent fraction of the people reading it will guess that you've wasted your time writing incoherent nonsense in the jira. Engineers don't usually have much insight into what the C suite are doing. It would be a prudent move to spend the couple of seconds to write "something like this AI sketch:" before the copy&paste.

    • gloflo 2 hours ago

      > Everyone knows this was “made by AI” because ...

      They should know because you told them so.

      Having to decipher weird code only to discover it was not written by a human is not nice.

  • devonbleak 12 hours ago

    It's Go. 25% of the code is just basic error checking and returning nil.

    • QuercusMax 12 hours ago

      In Java, 25% of the code is import statements and curly braces

      • layer8 10 hours ago

        You generally don’t write those by hand though.

        I’m pretty sure around 50% of the code I write is already auto-complete, without any AI.

        • amomchilov 10 hours ago

          Exactly, you write them with AI

          • throwaway106382 9 hours ago

            IDEs have been auto completing braces, inserting imports and generating framework boilerplate for decades.

            We don’t need AI for this and it’s 10x the compute to do it slower with AI.

            LLMs are useful but they aren’t a silver bullet. We don’t need to replace everything with it just because.

            • philipwhiuk 8 hours ago

              Yeah, but the management achievement is to call 'autocomplete' AI.

              AI doesn't mean LLM after all. AI means 'a computer thing'.

              • throwaway106382 7 hours ago

                I’ve been calling if-statements AI since before I graduated college

        • jansan 3 hours ago

          Simply stretch your definition of AI and voilà, you are writing it with AI.

      • contravariant 12 hours ago

        In lisp about 50% of the code is just closing parentheses.

        • harry8 11 hours ago

          Heh, but it can't be that, no reason to think llms can count brackets needing a close any more than they can count words.

          • int_19h 9 hours ago

            LLMs can count words (and letters) just fine if you train them to do so.

            Consider the fact that GPT-4 can generate valid XML (meaning balanced tags, quotes etc) in base64-encoded form. Without CoT, just direct output.

          • overhead4075 10 hours ago

            Logically, it couldn't be 50% since that would imply that the other 50% would be open brackets and that would leave 0% room for macros.

      • xxs 3 hours ago

        Over 3 imports from the same package - use an asterisk.

      • NeoTar 12 hours ago

        Does auto-code generation count as AI?

    • remram 8 hours ago

      Another 60% is auto-generated protobuf/grpc code. Maybe protoc counts as "AI".

      • GeneralMayhem 6 hours ago

        Google does not check in protoc-generated code. It's all generated on demand by Blaze/Bazel.

    • hiddencost 4 hours ago

      Go is a very small fraction of the code at Google.

  • fzysingularity 13 hours ago

    While I get the MBA-speak of counting the lines of code that AI is now able to produce, it does make me think about their highly-curated internal codebase, which makes them well placed to potentially get to 50% AI-generated code.

    One common misconception is that all LLMs are the same. The models are trained the same, but trained on wildly different datasets. Google, and more specifically the Google codebase is arguably one of the most curated, and iterated on datasets in existence. This is a massive lever for Google to train their internal code-gen models, that realistically could easily replace any entry-level or junior developer.

    - Code review is another dimension of the process of maintaining a codebase that we can expect huge improvements with LLMs. The highly-curated commentary on existing code / flawed diff / corrected diff that Google possesses give them an opportunity to build a whole set of new internal tools / infra that's extremely tailored to their own coding standard / culture.

    • bqmjjx0kac 11 hours ago

      > that realistically could easily replace any entry-level or junior developer.

      This is a massive, unsubstantiated leap.

      • throwaway106382 10 hours ago

        I’d take pair programming with a junior over a GPT bot any day.

        • neaanopri 6 hours ago

          I'd take coding by own damn self over either a junior or a gpt bot

      • risyachka 10 hours ago

        The issue is it doesn't really replace junior dev. You become one - as you have to babysit it all the time, check every line of code, and beg it to make it work.

        In many cases it is counterproductive

    • unit149 4 hours ago

      Philosophically, these models are akin to scholars prior to differentiation during their course of study. Throttling data, depending on one's course of study, and this shifting of the period in history step-by-step. Either it's a tit-for-tat manner of exchange that the junior developer is engaged in, when overseeing every edit that an LLM has modified, or I'd assume that there are in-built methods of garbage collection, that another LLM evaluating a hash function partly identifying a block of tokenized internal code would be responsible for.

    • morkalork 11 hours ago

      Is the public gemini code gen LLM trained on their internal repo? I wonder if one could get it to cough up proprietary code with the right prompt.

      • p1esk 10 hours ago

        I’m curious if Microsoft lets OpenAI train on GH private repos.

      • happyopossum 4 hours ago

        > Is the public gemini code gen LLM trained on their internal repo?

        Nope

  • makerofthings an hour ago

    I keep trying to use these things but I always end up back in vim (in which I don't have any AI autocomplete set up).

    The AI is fine, but every time it makes a little mistake that I have to correct it really breaks my flow. I might type a lot more boilerplate without it but I get better flow, and overall that saves me time with fewer mistakes.

  • Taylor_OD 13 hours ago

    If we are talking about the boilerplate code and autofill syntax code that copilot or any other "AI" will offer me when I start typing... Then sure. Sounds about right.

    The other 75% is the stuff you actually have to think about.

    This feels like saying linters impact x0% of code. This just feels like an extension of that.

    • creativenolo 10 hours ago

      It probably does. But an amazing number of commenters seem to think it's all prompting, then copy & pasting, and hoping for the best.

      • Kalabasa 7 hours ago

        Yep, a lot of headline readers here.

        It's just a very advanced autocomplete, completely integrated into the internal codebase and IDE. You can read this on the research blog (maybe if everyone just read the blog).

        e.g.

        I start typing `var notificationManager`

        It would suggest `= (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);`

        If you've done Android then you know how much boilerplate there is to suggest.

        I press Ctrl+Enter or something to accept the suggestion.

        Voila, more than 50% of that code was written by AI.

        > blindly committing AI code

        Even before AI, no one blindly accepts autocomplete.

        A lot of headline-readers seem to imagine some sort of semi-autonomous or prompt based code generation that writes whole blocks of code to then be blindly accepted by engineers.

    • esjeon 6 hours ago

      > The other 75% is the stuff you actually have to think about.

      I’m pretty sure the actual ratio is much lower than that. In other words, LLMs are currently not good enough to remove the majority of chores, even with the state of the art model trained on highly curated dataset.

  • arethuza 31 minutes ago

    I'm waiting for some Google developer to say "More than a quarter of the CEO's statements are now created by AI"... ;-)

  • drunken_thor 8 hours ago

    A company that used to be the pinnacle of software development is now just generating code in order to sell their big data models. Horrifying. Devastating.

  • pfannkuchen 13 hours ago

    It’s replaced the 25% previously copy pasted from stack overflow.

    • rkagerer 12 hours ago

      This may have been intended as a joke, but it's the only explanation that reconciles the quote for me.

    • brainwad 12 hours ago

      The split is roughly 25% AI, 25% typed, 50% pasted.

  • agilob 19 minutes ago

    So we're using LoC as a metric now?

  • alienchow an hour ago

    When setting up unit tests traditionally takes more time and LOC than the logic itself, LLMs are particularly useful.

    1. Paste in my actual code.

    2. Prompt: Write unit tests, test tables. Include scenarios: A, B, C, D, E. Include all other scenarios I left out, isolate suggestions for review.

    I used to spend the majority of the coding time writing unit tests and mocking test data, now it's more like 10%.
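
    For illustration, the kind of table-driven test that prompt tends to come back with (a rough sketch; `normalize_email` is just a hypothetical function under test):

        import pytest

        def normalize_email(raw: str) -> str:
            # Hypothetical function under test.
            return raw.strip().lower()

        @pytest.mark.parametrize(
            "raw, expected",
            [
                ("  Alice@Example.COM ", "alice@example.com"),  # scenario A: whitespace + case
                ("bob@example.com", "bob@example.com"),         # scenario B: already normalized
                ("", ""),                                       # scenario C: empty input
            ],
        )
        def test_normalize_email(raw, expected):
            assert normalize_email(raw) == expected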

    • arkh an hour ago

      > Paste in my actual code.

      > Prompt: Write unit tests

      TDD in shambles. What you'd like is:

      > Give your specs to some AI

      > Get a test suite generated with all edge cases accounted for

      > Code

      • alienchow 41 minutes ago

        Matter of preference. I've found TDD to be inflexible for my working style. But your suggestion would indeed work for a staunch TDD practitioner.

  • ryoshu 13 hours ago

    Spoken like an MBA who counts lines of code.

  • summerlight 9 hours ago

    In Google, there is a process called "Large Scale Change" which is primarily meant for trivial/safe but extremely tedious code changes that potentially span the entire monorepo: foundational API changes, trivial optimizations, code style, etc. This is perfectly suitable for LLM-driven code changes (in fact I'm seeing more and more LLM-generated LSCs), and I guess a large fraction of the mentioned "AI generated code" can actually be attributed to this.

    • bubaumba 5 hours ago

      Yeah, but the main problem is the quality. With an algorithm, a bug can be fixed; with an LLM it's more complicated. In practice they make some mistakes consistently, and in some cases cannot recover even with assistance. (Don't take me wrong, I'm very happy with the results most of the time.)

      • afro88 4 hours ago

        You just fix the mistakes and keep moving. It's like autocomplete where you still need to fill in the blanks or select a different completion.

        • saagarjha an hour ago

          Spotting and fixing mistakes in a LSC is no small feat.

  • imaginebit a day ago

    I think he's trying to promote AI; it somehow raises questions about their code quality among some.

    • dietr1ch a day ago

      I think it just shows how much noise there is in coding. Code gets reviewed anyways (although review quality was going down rapidly the more PMs were added to the team)

      Most of the code must be what could be snippets (opening files and handling errors with absl::, and moving data from proto to proto). One thing that doesn't help here is that when writing for many engineers on different teams to read, spelling out simple code instead of depending on too many abstractions seems to be preferred by most teams.

      I guess that LLMs do provide smarter snippets that I don't need to fill out in detail, and when it understands types and whether things compile it gets quite good and "smart" when it comes to writing down boilerplate.

  • mgaunard 34 minutes ago

    AI is pretty good at helping you manage a messy large codebase and making it even more messy and verbose.

    Is that a good thing though? We should work on making code small and easy to manage without AI tools.

  • 0xCAP 13 hours ago

    People overestimate faang. There are many talents working there, sure, but a lot of garbage gets pumped into their codebases as well.

  • meindnoch 29 minutes ago

    I saw code on master which was parsing HTML with regex. The author was proud that this code was mostly generated by AI.

    :)

  • motoxpro 10 hours ago

    People talk about how AI is bad at generating non-trivial code, but why are people using it to generate non-trivial code?

    25% of coding is just the most basic boilerplate. I think of AI not as a thinking machine but as a 1000 WPM boilerplate typer.

    If it is hallucinating, you're trying to make it do stuff that is too complex.

    • ghosty141 10 hours ago

      But for this boilerplate, creating a few snippets in your code generally works better. Especially if things change, you don't have to retrain your model.

      That's my main problem: for trivial things it works but isn't much better than conventional tools; for hard things it just produces incorrect code, such that writing it from scratch barely makes a difference.

      • motoxpro 9 hours ago

        I think that's a great analogy.

        What would it look like if I could have 300-500 snippets instead of 30? Those 300 are things that I do all over my codebase, e.g. the same basic where-query but in the context of whatever function I am in, a click handler with the correct types for that purpose, etc.

        There is no way I can have enough hotkeys or memorize that much, and I truly can't type faster than I can hit tab.

        I don't need it to think for me. Most coding (front-end/back-end web) involves typing super basic stuff, not writing complex algorithms.

        This is where the 10-20% speed-up comes in. On average I am just typing 20% faster by hitting tab.

    • globular-toast an hour ago

      Were people seriously writing this boilerplate by hand up until this point? I started using snippets and stuff more than 15 years ago!

  • nosbo a day ago

    I don't write code as I'm a sysadmin. Mostly just scripts. But is this like saying intellisense writes 25% of my code? Because I use autocomplete to shortcut stuff or to create a for loop to fill with things I want to do.

    • n_ary a day ago

      You just made it less attractive to the target corps who are supposed to buy this product from Google. Saying "intellisense" means corps already have licenses for various of these, and some are even mostly free. Saying "AI generates 25% of our code" sounds more attractive to corps, because it feels like something new and novel, and you can imagine laying off 25% of the personnel to justify buying this product from Google.

      When someone who uses a product says it, there is a 50% chance of it being true, but when someone far away from the user says it, it is 100% promotion of the product and a setup for trust-building for a future sale.

    • coldpie 13 hours ago

      Looks like it's an impressive autocomplete feature, yeah. Check out the video about halfway down here: https://research.google/blog/ai-in-software-engineering-at-g... (linked from other comment https://news.ycombinator.com/item?id=41992028 )

      Not what I thought when I heard "AI coding", but seems pretty neat.

    • stephenr 5 hours ago

      > I don't write code as I'm a sysadmin. Mostly just scripts.

      .... so what do you put in your scripts if not code?

  • 1GZ0 34 minutes ago

    I wonder how much of that code is boilerplate vs. actual functionality.

  • randomNumber7 12 hours ago

    I cannot imagine this to be true, cause imo current LLMs' coding abilities are very limited. It definitely makes me more productive to use it as a tool, but I use it mainly for boilerplate and short examples (where I had to read some library documentation before).

    Whenever the problem requires thinking, it horribly fails because it cannot reason (yet). So unless this is also true for google devs, I cannot see that 25% number.

    • Wheatman 2 hours ago

      My guess is that they counted each line of code made by an engineer using AI coding tools.

      Besides, even google employees write a lot of boilerplate, especially android IIRC, not to mention simple but essential code, so AI can prevent carpal tunnel for the junior devs working on that.

      • zifpanachr23 9 minutes ago

        Roughly a quarter of engineers actually using AI regularly for coding (assuming they output similar amounts of code as engineers not using AI) is a statistic that's actually believable to me based on my own experience. A lot of small teams have their "AI guy" who has drunk the kool-aid, but it's not as widespread as HackerNews would make you think.

  • avsteele 7 hours ago

    Everyone here is arguing about the average AI code quality and I'm here just not believing the claim.

    Is Google out there monitoring the IDE activity of every engineer, logging the amount of code created, by what, lines, characters, and how it was generated? Dubious.

    • kunley 20 minutes ago

      Very good point. How was the 25% measured?

  • ausbah a day ago

    I would be way more impressed if LLMs could do code compression. More code == more things that can break, and when LLMs can generate boatloads of it with a click, you can imagine what might happen.

    • Scene_Cast2 a day ago

      This actually sparked an idea for me. Could code complexity be measured as cumulative entropy as measured by running LLM token predictions on a codebase? Notably, verbose boilerplate would be pretty low entropy, and straightforward code should be decently low as well.

      • jeffparsons a day ago

        Not quite, I think. Some kinds of redundancy are good, and some are bad. Good redundancy tends to reduce mistakes rather than introduce them. E.g. there's lots of redundancy in natural languages, and it helps resolve ambiguity and fill in blanks or corruption if you didn't hear something properly. Similarly, a lot of "entropy" in code could be reduced by shortening names, deleting types, etc., but all those things were helping to clarify intent to other humans, thereby reducing mistakes. But some is copy+paste of rules that should be enforced in one place. Teaching a computer to understand the difference is... hard.

        Although, if we were to ignore all this for a second, you could also make similar estimates with, e.g., gzip: the higher the compression ratio attained, the more "verbose"/"fluffy" the code is.
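
        A quick sketch of that gzip idea (a crude proxy only; the names here are made up for illustration):

            import gzip

            def verbosity_ratio(source: str) -> float:
                # Higher ratio = more repetitive / "fluffy" text.
                raw = source.encode("utf-8")
                return len(raw) / len(gzip.compress(raw))

            boilerplate = "if err != nil {\n    return err\n}\n" * 50
            print(verbosity_ratio(boilerplate))       # repeats compress well -> high ratio
            print(verbosity_ratio("x = 1\ny = 2\n"))  # tiny input -> ratio near (or below) 1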

        Fun tangent: there are a lot of researchers who believe that compression and intelligence are equivalent or at least very tightly linked.

        • 8note a day ago

          Interpreting this comment, it would predict low complexity for code copied unnecessarily.

          I'm not sure though. If it's copied a bunch of times, and it actually doesn't matter because each usecase of the copying is linearly independent, does it matter that it was copied?

          Over time, you'd still see copies being changed by themselves show up as increased entropy

      • david-gpu 13 hours ago

        > Could code complexity be measured as cumulative entropy as measured by running LLM token predictions on a codebase? Notably, verbose boilerplate would be pretty low entropy, and straightforward code should be decently low as well.

        WinRAR can do that for you quite effectively.

      • malfist 17 hours ago

        Code complexity can already be measured deterministically with cyclomatic complexity. No need to use AI fuzzy logic for this. Especially when they're bad at math.
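
        For example, a rough approximation of McCabe-style cyclomatic complexity using nothing but the standard library (a sketch, not a replacement for a real tool like radon or lizard):

            import ast

            # Node types that add a decision point (BoolOp is an approximation:
            # each `and`/`or` chain is counted once).
            BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

            def cyclomatic_complexity(source: str) -> int:
                tree = ast.parse(source)
                # 1 for the straight-line path, +1 per branch point.
                return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

            print(cyclomatic_complexity("def f(n):\n    return 1 if n > 0 else -1\n"))  # 2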

        • contravariant 12 hours ago

          There's nothing fuzzy about letting an LLM determine the probability of a particular piece of text.

          In fact it's the one thing they are explicitly designed to do, the rest is more or less a side-effect.

    • ks2048 a day ago

      I agree. It seems like counting lines of generated code is like counting bytes/instructions of compiled code - who cares? If “code” becomes prompts, then AI should lead to much smaller code than before.

      I’m aware that the difference is that AI-generated code can be read and modified by humans. But then quantity is bad, because humans have to understand the code to read or modify it.

      • TZubiri a day ago

        What's that line about accounting for lines of code on the wrong side of the balance sheet?

      • latexr a day ago

        > If “code” becomes prompts, then AI should lead to much smaller code than before.

        What’s the point of shorter code if you can’t trust it to do what it’s supposed to?

        I’ll take 20 lines of code that do what they should consistently over 1 line that may or may not do the task depending on the direction of the wind.

    • AlexandrB a day ago

      Exactly this. Code is a liability, if you can do the same thing with less code you're often better off.

      • EasyMark a day ago

        Not if it’s already stable and has been running for years. Legacy doesn’t necessarily mean “need replacement because of technical debt”. I’ve seen lots of people want to replace code that has been running basically bug free for years because “there are better coding styles and practices now”

    • 8note a day ago

      How would it know which edge cases are being useful and which ones aren't?

      I understand more code as being more edge cases

      • wvenable 12 hours ago

        More code could just be useless code that no longer serves any purpose but still looks reasonable to the naked eye. An LLM can certainly figure out and suggest maybe some conditional is impossible given the rest of the code.

        It can also suggest alternatives, like using existing library functions for things that might have been coded manually.

        • ekwav an hour ago

          Or just refactor to use early returns

    • asah a day ago

      meh - the LLM code I'm seeing isn't particularly more verbose. And as others have said, if you want tighter code, just add that to the prompt.

      fun story: today I had an LLM write me a non-trivial perl one-liner. It tried to be verbose but I insisted and it gave me one tight line.

  • Kiro an hour ago

    I find it interesting that the people who dismiss the utility of AI are being so aggressive, sarcastic and hateful about it. Why all the anger? Where's the curiosity?

  • sbochins 13 hours ago

    It’s probably code that was previously machine generated that they’re now calling “AI Generated”.

    • frank_nitti 13 hours ago

      That would make sense and be a good use case, essentially doing what OpenAPI generators do (or Yeoman generators of yore), but less deterministic I’d imagine. So optimistically I would guess it covers ground that isn’t already solved by mainstream tools.

      For the example of generating an http app scaffolding from an openapi spec, it would probably account for at least 25% of the text in the generated source code. But I imagine this report would conveniently exclude the creation of the original source yaml driving the generator — I can’t imagine you’d save much typing (or mental overhead) trying to prompt a chatbot to design your api spec correctly before the codegen

  • haccount an hour ago

    No wonder Gemini is a garbage fire if they had ChatGPT write the code for it.

  • mirkodrummer 10 hours ago

    Sometimes I wonder why we would want LLMs to spit out human-readable code. Wouldn’t it be a better future where LLMs generate highly efficient machine code and eventually we read the “source map” for debugging? Wasn’t source code just for humans?

    • palata 10 hours ago

      Because you can't trust what the LLM generates, so you have to read it. Of course the question then is whether you can trust your developer or not.

      • mirkodrummer 10 hours ago

        I’d rather reply that LLMs just aren’t capable of that. They’re okay with Python and JS simply because there’s a lot of training data out in the open. My point was that it seems like we’re delegating the future to tools that could generate critical code using languages originally thought to be easy to learn... it doesn’t make sense.

    • sparcpile 8 hours ago

      You just reinvented the compiler.

    • mattxxx 8 hours ago

      I think they spit out human-readable code because they've been trained on human authors.

      But you make an interesting point: eventually AI will be writing code for other AIs + machines, and human verification can be an afterthought.

  • prmoustache 11 hours ago

    Aren't we just talking about auto completion?

    In that case those 25% are probably the very same 25% that were automatically generated by LSP-based auto-completion.

  • lysace 13 hours ago

    Github Copilot had an outage for me this morning. It was kind of shocking. I now believe this metric. :-)

    I'll be looking into ways of running a local LLM for this purpose (code assistance in VS Code). I'm already really impressed with various quite large models running on my 32 GB Mac Studio M2 Max via Ollama. It feels like having a locally running chatgpt.
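
    For anyone curious, a minimal sketch of hitting a local Ollama server from a script (assumes Ollama is running on its default port 11434 and a code model such as "codellama" has already been pulled):

        import json
        import urllib.request

        def complete(prompt: str, model: str = "codellama") -> str:
            # POST to Ollama's /api/generate endpoint and return the full response text.
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        print(complete("Write a Python function that reverses a string."))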

    • evoke4908 12 hours ago

      Ollama, docker and "open webui".

      It immediately works out of the box and that's it. I've been using local LLMs on my laptop for a while, it's pretty nice.

      The only thing you really need to worry about is VRAM. Make sure your GPU has enough memory to run your model and that's pretty much it.

      Also "open webui" is the worst project name I've ever seen.

    • kulahan 13 hours ago

      I'm very happy to hear this; maybe it's finally time to buy a ton of ram for my PC! A local, private LLM would be great. I'd try talking to it about stuff I don't feel comfortable being on OpenAI's servers.

      • lysace 13 hours ago

        Getting lots of ram will let you run large models on the CPU, but it will be so slow.

          The Apple Silicon Macs have this shared memory between CPU and GPU that lets the (relatively underpowered GPU, compared to a decent Nvidia GPU) run these models at decent speeds, compared with a CPU, when using llama.cpp.

        This should all get dramatically better/faster/cheaper within a few years, I suspect. Capitalism will figure this one out.

        • kulahan 13 hours ago

          Interesting, so this is a Mac-specific solution? That's pretty cool.

          I assume, then, that the primary goal would be to drop in the beefiest GPU possible when on windows/linux?

          • evilduck 6 hours ago

            There's nothing Mac specific about running LLMs locally, they just happen to be a convenient way to get a ton of VRAM in a single small power efficient package.

            In Windows and Linux, yes you'll want at least 12GB of VRAM to have much of any utility but the beefiest consumer GPUs are still topping out at 24GB which is still pretty limiting.

          • lysace 12 hours ago

            With Windows/Linux I think the issue is that NVidia is artificially limiting the amount of onboard RAM (they want to sell those devices for 10x more to openai, etc) and that AMD for whatever reason can't get their shit together.

            I'm sure that there are other much more knowledgeable people here though, on this topic.

  • standardUser 10 hours ago

    I use it all the time for work. Not much for actual code that goes into production, but a lot for "tell me what this does" or "given x, how do I do y". It speeds me up a ton. I'll also have it do code review when I'm uncertain about something, asking if there are any bugs or inefficiencies in a given chunk of code. I've actually found it to be more reliable about code than more general topics. Though I'm using it in a fairly specific way with code, versus asking for deep information about history for example, where it frequently gets facts very wrong.

  • zxilly an hour ago

    As a Go developer, Copilot writes 100% of the `if err != nil` for me.

  • pixelat3d a day ago

    Sooo... is this why Google sucks now?

  • tgtweak 6 hours ago

    I feel like, given my experience lately with all the API models currently available, this is only a fact if the models Google is using internally are SIGNIFICANTLY better than what is available publicly, even on closed models.

    Claude 3.5-sonnet (latest) is barely able to stay coherent on 500 LOC files, and easily gets tripped up when there are several files in the same directory.

    I have tried similarly with o1-preview and 4o, and gemini pro...

    If google is using a 5M token context window LLM with 100k+ token-output trained on all the code that is not public... then I can believe this claim.

    This just goes to show how critical of an issue this is that these models are behind closed doors.

    • nomel 6 hours ago

      > This just goes to show how critical of an issue this is that these models are behind closed doors.

      How is competitive advantage, using in-house developed/funded tools, a critical issue? Every company has tools that only they have, that they pay significantly for to develop, and use extensively. It's can often be the primary thing that really differentiates companies who are all doing similar things.

  • holtkam2 10 hours ago

    Can we also see the stats for how much code used to come from StackOverflow? Probably 25%

  • LudwigNagasena 12 hours ago

    How much of that generated code is `if err != nil { return err }`?

  • SavageBeast 12 hours ago

    Google needs to bolster their AI story and this is good click bait. I'm not buying it personally.

  • agomez314 9 hours ago

    I thought great engineers reduce the amount of new code in a codebase?

  • rcarmo a day ago

    There is a running gag among my friends using Google Chat (or whatever their corporate IM tool is now called) that this explains a lot of what they’re experiencing while using it…

    • tdeck 13 hours ago

      I didn't know anyone outside Google actually used that...

  • hsuduebc2 7 hours ago

    I believe it is absolutely suitable for generating controllers in Java Spring, or connecting to a database and making a simple query, which from my experience as an ordinary enterprise developer in fintech is most of the job. Making these huge applications is a lot of repetitive work and integrations. Not work that usually requires advanced logic.

  • echoangle 3 hours ago

    Does protobuf count as AI now?

  • blibble 11 hours ago

    this is the 2024 version of "25% of our code is now produced by outsourced resources"

  • skatanski 11 hours ago

    I think at this moment, this sounds more like "a quarter of the company's new code is created using Stack Overflow and other forums". Many many people use all these tools to find information, as they did using Stack Overflow a month ago, but now suddenly we can call it "created by AI". It'd be nice to have a distinction. I'm saying this while being very excited about using LLMs as a developer.

  • jmartin2683 10 hours ago

    I’m gonna bet this is a lie.

    • freedomben 10 hours ago

      I don't think it's a lie, but I do think it's very misleading. With common languages probably 25% of code can be generated by an AI, but IME it's mostly just boilerplate or some pattern that largely just saves typing time, not programming/thinking time. In other words it's the 25% lowest hanging fruit, so thinking like "1/4 of programming is now done by AI" is misleading. It's probably more like 5 to 10 percent.

  • hiptobecubic 4 hours ago

    I've had mixed results writing "normal" business logic in c++, but i gotta say, for SQL it's pretty incredible. Granted SQL has a lot of boilerplate and predictable structure, but it saves a ton of time honestly.

  • mjhay 12 hours ago

    100% of Sundar Pichai could be replaced by an AI.

  • bryanrasmussen 9 hours ago

    Public says more than a quarter of Google's search results are absolute crap.

  • mastazi 10 hours ago

    The auto-linter in my editor probably generates a similar percentage of the characters I commit.

  • tabbott 11 hours ago

    Without a clear explanation of methodology, this is meaningless. My guess is this statistic is generated using misleading techniques like classifying "code changes generated by existing bulk/automated refactoring tools" as "AI generated".

  • davidclark 11 hours ago

    If I tab complete my function and variable symbols, does my lsp write 80%+ of my lines of code?

  • ThinkBeat 12 hours ago

    This is quite interesting to know.

    I will be curious to see if it has any impact positive or negative over a couple of years.

    Will the code be more secure since the AI does not make the mistakes humans do?

    Or will the code, not well enough understood by the employees, expose exploits that would not otherwise be there?

    Will it change average uptime?

    • kunley 14 minutes ago

      What makes you think that the current direction of AI development would lead to making fewer mistakes than humans do, as opposed to repeating the same mistakes plus hallucinating more?

  • Terr_ 11 hours ago

    My concern is that "frequently needed and immediately useful results" is strongly correlated to "this code should already be abstracted away into a library by now."

    Search Copy-Paste as a Service is hiding a deeper issue.

  • _spduchamp 12 hours ago

    I can ask AI to generate the same code multiple times, and get new variations on programming style each time, and get the occasional solution that is just not quite right but sort of works. Sounds like a recipe for a gloppy mushy mess of style salad.

  • Starlevel004 13 hours ago

    No wonder search barely works anymore

  • shane_kerns 7 hours ago

    It's no wonder that their search absolutely sucks now. DuckDuckGo is so much better in comparison.

  • elzbardico 12 hours ago

    Well. When I developed in Java, I think that Eclipse did similar figures circa 2005.

  • nektro 9 hours ago

    Google used to be respected, a place so highly sought after that engineers who worked there were revered like wizards. oh how they've fallen :(

  • twis 12 hours ago

    How much code was "written by" autocomplete before LLMs came along? From my experience, LLM integration is advanced autocomplete. 25% is believable, but misleading.

    • scottyah 12 hours ago

      My linux terminal tab-complete has written 50% of my code

  • hi_hi 11 hours ago

    > More than a quarter of new code created at Google is generated by AI, said CEO Sundar Pichai...

    How do they know this? At face value, it sounds like a lot, but it only says "new code generated". Nothing about code making it into source control or production, or even which parts of Google's vast business units.

    For all we know, this could be the result of some internal poll "Tell us if you've been using Goose recently" or some marketing analytics on the Goose "Generate" button.

    It's a puff piece to put Google back in the limelight, and everyone is lapping it up.

  • cebert 12 hours ago

    Did AI have to go thru several rounds of Leetcode interviews?

  • defactor 7 hours ago

    Try any AI tool to write basic Factor code. It hallucinates most of the time.

  • foobarian 9 hours ago

    The real question is, what fraction of the company’s code is deleted by AI :-)

  • mjbale116 a day ago

    If you manage to convince software engineers that you are doing them a favour by employing them, then they will approach any workplace negotiations with a specific mindset which will make them grab the first number that gets thrown at them.

    These statements are brilliant.

    • akira2501 13 hours ago

      These statements rely on an unchallenged monopoly position. This is not sustainable. These statements will hasten the collapse.

  • Timber-6539 3 hours ago

    All this talk means nothing until Google gives AI permissions to push to prod.

  • skywhopper 11 hours ago

    All this means is that 25% of code at Google is trivial boilerplate that would be better factored out of their process than handed to inefficient LLM tools. The more they are willing to leave the “grunt work” to an LLM, the less likely they are to ever eliminate it from the process.

  • chabes 13 hours ago

    When Google announced their big layoffs, I noted the timing in relation to some big AI announcements. People here told me I was crazy for suggesting that corporations could replace employees with AI this early. Now the CEO is confirming that more than a quarter of new code is created by AI. Can’t really deny that reality anymore folks.

    • hbn 13 hours ago

      I'd suggest the bigger factor in those layoffs is that in the earlier covid years money was flowing and everyone was overhiring to show off record growth; then none of those employees had any justification for being kept around and were just a money sink, so they fired them all.

      Not to mention Elon publicly demonstrated losing 80% of staff when he took over Twitter and - you can complain about his management all you want - as someone who's been using it the whole way through, from a technical POV their downtime and software quality have not been any worse and they're shipping features faster. A lot of software companies are overstaffed, especially Google, which has spent years paying people to make projects just to get a PO promoted, then letting the projects rot and die to be replaced by something else. That's a lot of useless work being done.

    • akira2501 13 hours ago

      > Can’t really deny that reality anymore folks.

      You have to establish that the CEO is actually aware of the reality and is interested in accurately conveying that to you. As far as I can tell there is absolutely no reason to believe any part of this.

    • paradox242 13 hours ago

      When leaders without the requisite technical knowledge are making decisions then the question of whether AI is capable of replacing human workers is orthogonal to the question of whether human workers will be replaced by AI.

    • robohoe 12 hours ago

      Who claims that he is speaking the truth and not some marketing jargon?

      • randomNumber7 11 hours ago

        People who have replaced 25% of their brain with ai.

  • DidYaWipe 3 hours ago

    No wonder it sucks. Google's vaunted engineering has always been suspect, but their douchebaggery has been an accepted fact (even by them).

  • niobe 10 hours ago

    This explains a LOT about Google's quality decline.

  • soperj 12 hours ago

    The real question is how many lines of code was it responsible for removing.

  • ChrisArchitect a day ago

    Related:

    Alphabet ($GOOG) 2024 Q3 earnings release

    https://news.ycombinator.com/item?id=41988811

  • marstall 13 hours ago

    This maps with recent headlines about AI improving programmer productivity by 20-30%.

    which puts it in line with previous code-generation technologies i would imagine. I wonder which of these increased productivity the most?

    - Assembly Language

    - early Compilers

    - databases

    - graphics frameworks

    - ui frameworks (windows)

    - web apps

    - code generators (rails scaffolding)

    - genAI

    • akira2501 13 hours ago

      Early Compilers. By a wide margin. They are the enabling factor for everything that comes below it. It's what allows you to share library interfaces and actually use them in a consistent manner and across multiple architectures. It entirely changed the shape of software development.

      The gap between "high level assembly" and "compiled language" is about as large as it gets.

  • zxvkhkxvdvbdxz 11 hours ago

    I feel this made me lose the respect I still had for Google.

  • rockskon 9 hours ago

    No shit a quarter of Google's new code is created by AI. How else do you explain why Google search has been so aggressively awful for the past 5~ years?

    Seriously. The penchant for outright ignoring user search terms, relentlessly forcing irrelevant or just plain wrong information on users, and the obnoxious UI changes on YouTube! If I'm watching a video on full screen I have explicitly made it clear that I want YouTube to only show me video! STOP BRINGING UP THE FUCKING VIDEO DESCRIPTION TO TAKE UP HALF THE SCREEN IF I TRY TO BRIEFLY SWIPE TO VIEW THE TIME OR READ A MESSAGE.

    I have such deep-seated contempt for AI and its products for just how much worse it makes people's lives.

    • remram 8 hours ago

      Yeah that might explain some of the loss of quality. Google apps and sites used to be solid, now they are full of not-breaking-but-annoying bugs like race conditions (don't press buttons too fast), display glitches, awful recommendations, and other usability problems.

      Then again, their devices are also coming out with known fatal design flaws, like not being able to make phone calls, or the screen going black permanently.

  • otabdeveloper4 13 hours ago

    That explains a lot about Google's so-called "quality".

  • wokkaflokka 12 hours ago

    No wonder their products are getting worse and worse...

  • sigmonsays 7 hours ago

    imho code that is written by AI is code that is not worth having.

  • ThinkBeat 12 hours ago

    So um. With this public statement, can we expect that 25% of "the bottom" coders at Google will soon be granted a lot more time to spend with their loved ones?

  • martin82 3 hours ago

    I guess that must be the reason for the shocking enshittification of Google.

  • deterministic 12 hours ago

    Not impressed. I currently auto generate 90% or more of the code I need to implement business solutions. With no AI involved. Just high level declarations of intent auto translated to C++/Typescript/…

  • hggigg 13 hours ago

    I reckon he’s talking bollocks. Same as IBM was when it was about to disguise layoffs as AI uplift and actually just shovelled the existing workload on to other people.

  • marstall 13 hours ago

    first thought is that much of that 25% is test code for non-ai-gen code...

  • oglop 17 hours ago

    No surprise. I give my career about 2 years before I’m useless.

    • k4rli 12 hours ago

      Seems just overhyped tech to push up stock prices. It was already claimed 2 years ago that half of the jobs would be taken by "AI" but barely any have and AI has barely improved since GPT3.5. Latest Anthropic is only slightly helpful for software development, mostly for unusual bug investigations and logs analysis, at least in my experience.

    • phi-go 15 hours ago

      They still need someone to write 75% of the code.

  • marviel 11 hours ago

    > 80% at Reasonote

  • xyst 10 hours ago

    I remember Google used to market "lines of code" for their products. Chrome at one point had 6.7 million LoC. Now the new marketing term is: "product was made with 1M lines of AI generated code (slop)!11!". Or "Chrome refactored with 10% AI" or some bs.

  • horns4lyfe 9 hours ago

    I’d bet at least a quarter of their code is class definitions, constructors, and all the other minutiae files required for modern software, so that makes sense. But people weren’t writing most of that before either; we’ve had autocomplete and code gen for a long time.

  • Hamuko 11 hours ago

    How do Google's IP lawyers feel about a quarter of the company's code not being copyrightable?

  • jeffbee 11 hours ago

    It's quite amusing to me because I am old enough to remember when Copilot emerged: the prevailing thought on HN was that it was a death sentence for big corps, and the scrappy independent hacker was going to run circles around them. But here we see the predictable reality: an organization that is already in an elite league in terms of developer velocity gets more benefit from LLM code assistants than Joe Hacker. These technologies serve to entrench and empower those who are already enormously powerful.

  • est 8 hours ago

    Now maintain a quarter of your old code base with AI, and don't shut down services randomly.

  • arminiusreturns 12 hours ago

    I was a luddite about the generative LLMs at first, as a crusty sysadmin type. I came around and started experimenting. It's been a boon for me.

    My conclusion is that we are at the first wave of a split between those who use LLMs to augment their abilities and knowledge, and those who delay. In cyberpunk terminology, it's aug-tech, not real AGI. (And the weaker one's coding abilities and the simpler the task, the greater the benefit; it's an accelerator.)

  • nephy 9 hours ago

    Can we move on to the next grift yet?

  • yapyap 8 hours ago

    yikes

  • skrebbel 13 hours ago

    In my experience, AIs can generate perfectly good code for relatively easy things, the kind you might as well copy&paste from stackoverflow, and they'll very confidently generate subtly wrong code for anything that's non-trivial for an experienced programmer to write. How do people deal with this? I simply don't understand the value proposition. Does Google now have 25% subtly wrong code? Or do they have 25% trivial code? Or do all their engineers babysit the AI and bugfix the subtly wrong code? Or are all their engineers so junior that an AI is such a substantial help?

    Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?

    • toasteros 13 hours ago

      > the kind you might as well copy&paste from stackoverflow

      This bothers me. I completely understand the conversational aspect - "what approach might work for this?", "how could we reduce the crud in this function?" - it worked a lot for me last year when I tried learning C.

      But the vast majority of AI use that I see is...not that. It's just glorified, very expensive search. We are willing to burn far, far more fuel than necessary because we've decided we can't be bothered with traditional search.

      A lot of enterprise software is poorly cobbled together using stackoverflow gathered code as it is. It's part of the reason why MS Teams makes your laptop run so hot. We've decided that power-inefficient software is the best approach. Now we want to amplify that effect by burning more fuel to get the same answers, but from an LLM.

      It's frustrating. It should be snowing where I am now, but it's not. Because we want to frivolously chase false convenience and burn gallons and gallons of fuel to do it. LLM usage is a part of that.

      • jcgrillo 10 hours ago

        What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti. The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it. There's this bizarre misconception popular among bigtech managers that there's some tunable tradeoff between quality and development speed. But it doesn't actually work that way at all. I can't even count anymore how many times I've had to explain how taking this or that locally optimal shortcut will make it take longer overall to complete the project.

        In other words, it's a skill issue. LLMs can only make this worse. Hiring unskilled programmers and giving them a machine for generating garbage isn't the way. Instead, train them, and reject low quality work.

        • aleph_minus_one 8 hours ago

          > What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti. The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it.

          I don't think finding such programmers is really difficult. What is difficult is finding such people if you expect them to be docile to incompetent managers and other incompetent people involved in the project who, for example, got their position not by merit and competence, but by playing political games.

        • giantg2 9 hours ago

          "What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti."

          In my opinion the reason we get enterprise spaghetti is largely due to requirement issues and scope creep. It's nearly impossible to create a streamlined system without knowing what it should look like. And once the system gets to a certain size, it's impossible to get business buy-in to rearchitect or refactor to the degree that is necessary. Plus the full requirements are usually poorly documented and long forgotten by that time.

          • jcgrillo 8 hours ago

            When scopes creep and requirements change, simply refactor. Where is it written in The Law that you have to accrue technical debt? EDIT: I'm gonna double down on this one. The fact that your organization thinks they can demand of you that you can magically weathervane your codebase to their changeable whims is evidence that you have failed to realistically communicate to them what is actually possible to do well. The fact that they think it's a move you can make to creep the scope, or change the requirements, is the problem. Every time that happens it should be studied within the organization as a major, costly failure--like an outage or similar.

            > it's impossible to get business buy-in to rearchitect or refactor to the degree that is necessary

            That's a choice. There are some other options:

            - Simply don't get business buy-in. Do without. Form a terrorist cell within your organization. You'll likely outpace them. Or you'll get fired, which means you'll get severance, unemployment, a vacation, and the opportunity to apply to a job at a better company.

            - Fight viciously for engineering independence. You business people can do the businessing, but us engineers are going to do the engineering. We'll tell you how we'll do it, not the other way.

            - Build companies around a culture of doing good, consistent work instead of taking expedient shortcuts. They're rare, but they exist!

            • aleph_minus_one 8 hours ago

              > Fight viciously for engineering independence.

              Or simply find a position in an industry or department where you commonly have more independence. In my opinion this fight is not worth it; looking for another position instead is typically easier.

            • llm_trw 8 hours ago

              >When scopes creep and requirements change, simply refactor.

              Congratulations, you just refactored out a use case which was documented in a knowledge base that has since been replaced by 3 newer ones, a use case that happens once every 18 months and makes the company go bankrupt if it isn't carried out promptly.

              The type of junior devs who think that making code tidy is fixing the application are the type of dev who you don't let near the heart of the code base, and incidentally the type who are best replaced with code gen AI.

              • wpietri 7 hours ago

                Refactoring is improving the design of existing code. It shouldn't change behavior.

                And regardless, the way you prevent loss of important functionality isn't by hoping people read docs that no longer exist. It's by writing coarse-grained tests that makes sure the software does the important things. If a programmer wants to change something that breaks a test like that, they go ask a product manager (or whatever you call yours) if that feature still matters.

                And if nobody can say whether a feature still matters, the organization doesn't have a software problem, it has a serious management problem. Not all the coding techniques in the world can fix that.

              • jcgrillo 7 hours ago

                If you don't understand your systems well enough to comfortably refactor them, you're losing the war. I probably should have put "simply" in scare quotes, it isn't simple--and that's the point. Responding to unreasonable demands, like completely changing course at the 11th hour, shouldn't come at a low price.

        • galdosdi 6 hours ago

          It's a market for lemons.

          Without redoing their work or finding a way to have deep trust (which is possible, but uncommon at a bigcorp) it's hard enough to tell who is earnest and who is faking it (or buying their own baloney) when it comes to propositions like "investing in this piece of tech debt will pay off big time"

          As a result, if managers tend to believe such plans, bad ideas drive out good and you end up investing in a tech debt proposal that just wastes time. Burned managers therefore cope by undervaluing any such proposals and preferring the crappy car that at least you know is crappy over the car that allegedly has a brand new 0 mile motor on it but you have no way of distinguishing from a car with a rolled back odometer. They take the locally optimal path because it's the best they can do.

          It's taken me 15 years of working in the field and thinking about this to figure it out.

          The only way out is an organization where everyone is trusted and competent and is worthy of trust, which again, hard to do at most random bigcorps.

          This is my current theory anyway. It's sad, but I think it kind of makes sense.

        • c0balt 8 hours ago

          It's relatively easy to find a programmer(s) who can realize enterprise project X, it's hard to find a programmer(s) who cares about X. Throwing an increased requirement like speed at it makes this worse because it usually ends up burning out both ends of the equation.

        • noisy_boy 4 hours ago

          > The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it.

          There is no incentive to do it. I worked that way, focused on quality and testing and none of my changes blew up in production. My manager opined that this approach is too slow and that it was ok to have minor breakages as long as they are fixed soon. When things break though, it's blame game all around. Loads of hypocrisy.

        • wpietri 7 hours ago

          Agreed.

          The way I explain this to managers is that software development is unlike most work. If I'm making widgets and I fuck up, that widget goes out the door never to be seen again. But in software, today's outputs are tomorrow's raw materials. You can trade quality for speed in the very short term at the cost of future productivity, so you're really trading speed for speed.

          I should add, though, that one can do the rigorous thinking before or after the doing, and ideally one should do both. That was the key insight behind Martin Fowler's "Refactoring: Improving the Design of Existing Code". Think up front if you can, but the best designs are based on the most information, and there's a lot of information that is not available until later in a project. So you'll want to think as information comes in and adjust designs as you go.

          That's something an LLM absolutely can't do, because it doesn't have access to that flow of information and it can't think about where the system should be going.

        • jihadjihad 9 hours ago

          > The problem is finding people to do it with who care enough to think rigorously

          > ...

          > train them, and reject low quality work.

          I agree very strongly with both of these points.

          But I've observed a truth about each of them over the last decade-plus of building software.

          1) very few people approach the field of software engineering with anything remotely resembling rigor, and

          2) there is often little incentive to train juniors and reject subpar output (move fast and break things, etc.)

          I don't know where this takes us as an industry? But I feel your comment on a deep level.

          • jcgrillo 8 hours ago

            > 1) very few people approach the field of software engineering with anything remotely resembling rigor

            This is a huge problem. I don't know where it comes from, I think maybe sort of learned helplessness? Like, if systems are so complex that you don't believe a single person can understand it then why bother trying anyway? I think it's possible to inspire people to not accept not understanding. That motivation to figure out what's actually happening and how things actually work is the carrot. The stick is thorough, critical (but kind and fair) code--and, crucially, design--review, and demanding things be re-done when they're not up to par. I've been extremely lucky in my career to have had senior engineers apply both of these tools excellently in my general direction.

            > 2) there is often little incentive to train juniors and reject subpar output (move fast and break things, etc.)

            One problem is our current (well, for years now) corporate culture is this kind of gig-adjacent-economy where you're only expected to stick around for a few years at most and therefore in order to be worth your comp package you need to be productive on your first day. Companies even advertise this as a good thing "you'll push code to prod on your first day!" It reminds me of those scammy books from when I was a kid in the late 90s "Learn C In 10 Days!".

            • wpietri 7 hours ago

              > This is a huge problem. I don't know where it comes from

              I think it's a bunch of things, but one legitimate issue is that software is stupidly complex these days. I had the advantage of starting when computers were pretty simple and have had a chance to grow along with it. (And my dad started when you could still lift up the hood and look at each bit. [1])

              When I'm working with junior engineers I have a hard time even summing up how many layers lie beneath what they're working on. And so much of what they have to know is historically contingent. Just the other day I had to explain what LF and CR mean and how it relates to physical machinery that they probably won't see outside of a museum: https://sfba.social/@williampietri/113387049693365012

              So I get how junior engineers struggle to develop a belief that the can sort it all out. Especially when so many people end up working on garbage code, where little sense is to be had. It's no wonder so many turn to cargo culting and other superstitious rituals.

              [1] https://en.wikipedia.org/wiki/Magnetic-core_memory

          • steve_adams_86 8 hours ago

            I agree as well. These are actually things that bother me a lot about the industry. I’d love to write software that should run problem-free in 2035, but the reality is almost no one cares.

            I’ve had the good fortune of getting to write some firmware that will likely work well for a long time to come, but I find most things being written on computers are written with (or very close to) the minimum care possible in order to get the product out. Clean up is intended but rarely occurs.

            I think we’d see real benefits from doing a better job, but like many things, we fail to invest early and crave immediate gratification.

          • karolinepauls 8 hours ago

            > very few people approach the field of software engineering with anything remotely resembling rigor, and

            I have this one opinion which I would not say at work:

            In software development it's easy to feel smart because what you made "works" and you can show "effects".

            - Does it wrap every failable condition in `except Exception`? Uhh, but look, it works.

            - Does it define a class hierarchy for what should be a dictionary lookup? It works great tho!

            - Does it create a cyclic graph of objects calling each other's methods to create more objects holding references to the objects that created them? And for what, to produce a flat dictionary of data at end of the day? But see, it works.

            this is getting boring, maybe just skip past the list

            - Does it stuff what should be local variables and parameters in self, creating a big stateful blob of an object where every attribute is optional and methods need to be called in the right order, otherwise you get an exception? Yes, but it works.

            - Does it embed a browser engine? But it works!

            The programmer, positively affirmed, continues spewing out crap, while the senior keep fighting fires to keep things running, while insulating the programmer from the taste of their own medicine.

            But more generally, it's hard to expect people to learn how to solve problems simply if they're given gigantic OO languages with all the features and no apparent cost to any of them. People learn how to write classes and then never get good at writing code with a clear data flow.

            Even very bright people can fall for this trap because engineering isn't just about being smart but about using intelligence and experience to solve a problem while minmaxing correctly chosen properties. Those properties should generally be: dev time, complexity (state/flow), correctness, test coverage, ease of change, performance (anything else?). Anyway, "affirming one's opinions about how things should be done" isn't one of them.

            • mos_basik 7 hours ago

              The whole one about the stateful blob of an object with all optional attributes got me real good. Been fighting that for years. But the dev that writes this produces code faster than me and understands parts of the system no one else does and doesn't speak great English, so it continues. And the company is still afloat. So who's right in the end? And does it matter?

        • sethammons 9 hours ago

          "Slow is smooth and smooth is fast"

          • jcgrillo 8 hours ago

            It's true every single time.

        • A4ET8a8uTh0 9 hours ago

          << Instead, train them, and reject low quality work.

          Ahh, well, in order to save money, training is done via an online class with multiple choice questions, or, if your company is like mine and really committed to making sure you know they take your training seriously, they put portions of a generic book on 'tech Z' in a PDF spread over DRM-ridden web pages.

          As for code, that is reviewed, commented on, and rejected by LLMs as well. It used to be turtles. Now it truly is LLMs all the way down.

          That said, in a sane world, this is what should be happening for a company that actually wants to get good results over time.

      • chongli 12 hours ago

        > we've decided we can't be bothered with traditional search

        Traditional search (at least on the web) is dying. The entire edifice is drowning under a rapidly rising tide of spam and scam sites. No one, including Google, knows what to do about it so we're punting on the whole project and hoping AI will swoop in like deus ex machina and save the day.

        • photonthug 12 hours ago

          Maybe it is naive but I think search would probably work again if they could roll back code to 10 or 15 years ago and just make search engines look for text in webpages.

          Google wasn’t crushed by spam, they decided to stop doing text search and build search bubbles that are user specific, location-specific, decided to surface pages that mention search terms in metadata instead of in text users might read, etc. Oh yeah, and about a decade before LLMs were actually usable, they started to sabotage simple substring searches and kind of force this more conversational interface. That’s when simple search terms stopped working very well, and you had to instead ask yourself “hmm how would a very old person or a small child phrase this question for a magic oracle”

          This is how we get stuff like: Did you mean “when did Shakespeare die near my location”? If anyone at google cared more about quality than printing money, that thirsty gambit would at least be at the bottom of the page instead of the top.

          • hughesjj 10 hours ago

            I remember in like 5th grade rural PA schools learning about Boolean operators in search engines and falling in love with them. For context, they were presenting alta vista and yahoo kids search as the most popular with Google being a "simple but effective new search platform" we might want to check out.

            By the time I graduated highschool you already couldn't trust that Boolean operators would be treated literally. By the time I graduated college, they basically didn't seem to do anything, at best a weak suggestion.

            Nowadays quotes don't even seem to be consistently honored.

            • II2II 9 hours ago

              Even though I miss using boolean operators in search, I doubt that it was ever sustainable outside of specialized search engines. Very few people seem to think in those terms. Many of those who do would still have difficulty forming complex queries.

              I suspect the real problem is that search engines ceased being search engines when they stopped taking things literally and started trying to interpret what people mean. Then they became some sort of poor man's AI. Now that we have LLMs, of course it is going to replace the poor excuse for search engines that exist today. We were heading down that road already, and it actually summarizes what is out there.

              • jordanb 6 hours ago

                People were learning. Just like with mice and menus, people are capable of learning new skills and querying search engines was one. I remember when it was considered a really "n00b" thing to type a full question into a search engine.

                Then Google decided to start enforcing that, because they had this idea that they would be able to divine your "intent" from a "natural question" rather than just matching documents including your search terms.

          • layer8 11 hours ago

            > just make search engines look for text in webpages.

            Google’s verbatim search option roughly does that for me (plus an ad blocker that removes ads from the results page). I have it activated by default as a search shortcut.

            (To activate it, one can add “tbs=li:1” as a query parameter to the Google search URL.)
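
            For example (assuming your browser substitutes %s for the query in a custom search shortcut), the whole shortcut can be a single URL:

              https://www.google.com/search?tbs=li:1&q=%s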

            • alex1138 10 hours ago

              To me the stupidest thing was the removal of things like + and -. You can say it's because of Google+ but annoyingly duckduckgo also doesn't seem to honor it. Kagi seems to and I hope they don't follow the others down the road of stupid

            • jcgrillo 8 hours ago

              > ?tbs=li:1

              Thank you, this is almost life-alteringly good to know.

              • photonthug 5 hours ago

                Funny, I can’t even test this because I’d need to know another neat trick to get my browser to let me actually edit the URL.

                Seems that Firefox on mobile allows editing the url for most pages, but on google search results pages, the url bar magically turns into a did-you-mean alternate search selector where I cannot see nor edit a url. Surprised but not surprised.

                Sure, there’s a work around for this too, somehow. But I don’t want to spend my life collecting and constantly updating a huge list of temporary hacks to fix things that others have intentionally broken.

                • layer8 3 hours ago

                  You can select verbatim search manually on the Google results page under Search tools > All results > Verbatim. You can also have a bookmark with a dummy search activating it, so you can then type your search terms into the Google search field instead of into the address bar.

                  Yes, it’s annoying that you can’t set it as the default on Google search itself.

            • tru3_power 8 hours ago

              Wow what? Thanks!

          • CapeTheory 9 hours ago

            > Maybe it is naive but I think search would probably work again if they could roll back code to 10 or 15 years ago and just make search engines look for text in webpages.

            Even more naive, but my personal preference: just ban all advertising. The fact that people will pay for ChatGPT implies people will also pay for good search if the free alternative goes away.

            • Atreiden 8 hours ago

              It's working for Kagi

        • masfuerte 12 hours ago

          Google results are not polluted with spam because Google doesn't know how to deal with it.

          Google results are polluted with spam because it is more profitable for Google. This is a conscious decision they made five years ago.

          • redwall_hp 8 hours ago

            If you own the largest ad network that spam sites use and own the traffic firehose, pointing the hose at the spam sites and ensuring people spend more time clicking multiple results that point to ad-filled sites will make you more money.

            Google not only has multiple monopolies, but a cut and dry perverse incentive to produce lower quality results to make the whole session longer instead of short and effective.

          • chongli 11 hours ago

            > because it is more profitable for Google

            Then why are DuckDuckGo results also (arguably even more so) polluted with spam/scam sites? I doubt DDG is making any profit from those sites since Google essentially owns the display ad business.

        • skissane 12 hours ago

          I personally think a big problem with search is major search engines try to be all things to all people and hence suffer as a result.

          For example: a beginner developer is possibly better served by some SEO-heavy tutorial blog post; an experienced developer would prefer results weighted towards the official docs, the project’s bug tracker and mailing list, etc. But since less technical and non-technical people vastly outnumber highly technical people, Google and Bing end up focusing on the needs of the former, at the cost of making search worse for the latter.

          One positive about AI: if an AI is doing the search, it likely wants the more advanced material not the more beginner-focused one. It can take more advanced material and simplify it for the benefit of less experienced users. It is (I suspect) less likely to make mistakes if you ask it to simplify the more advanced material than if you just gave it more beginner-oriented material instead. So if AI starts to replace humans as the main clients of search, that may reverse some of the pressure to “dumb it down”.

          • photonthug 11 hours ago

            > But since less technical and non-technical people vastly outnumber highly technical people, Google and Bing end up focusing on the needs of the former, at the cost of making search worse for the latter.

            I mostly agree with your interesting comment, and I think your analysis basically jives with my sibling comment.

            But one thing I take issue with is the idea that this type of thing is a good faith effort, because it’s more like a convenient excuse. Explaining substring search or even include/exclude ops to children and grandparents is actually easy. Setting preferences for tutorials vs API docs would also be easy. But companies don’t really want user-directed behavior as much as they want to herd users to preferred content with algorithms, then convince the user it was their idea or at least the result of relatively static ranking processes.

            The push towards more fuzzy semantic search and “related content” everywhere is not to cater to novice users but to blur the line between paid advertisement and organic user-directed discovery.

            No need to give megacorp the benefit of the doubt on stuff like this, or make the underlying problems seem harder than they are. All platforms land in this place by convergent evolution wherein the driving forces are money and influence, not insurmountable technical difficulties or good intentions for usability.

          • consp 10 hours ago

            > For example: a beginner developer is possibly better served by some SEO-heavy tutorial blog post

            Good luck finding those; you end up with SEO spam and clone-page spam. These days you have to look for unobvious hidden meanings which only relate to your exact problem to find what you are looking for.

            I have the strong feeling search these days is back to the Altavista era. You'd have to use trickery to find what you were looking for back then as well. Too bad + no longer works in Google due to their stupid naming of a dead product (no, literal matching is not the same, and there's no replacement).

            • tru3_power 8 hours ago

              Yeah but this is just the name of the game. How can you even stop SEO-style gamification at this point? I’m sure even LLMs are vulnerable/have been trained on SEO bs. At the end of the day it takes an informed user. Remember back in the day? Don’t trust the internet? I think that mindset will become the main school of thought once again. Which, tbh, I think may be a good thing.

        • skydhash 11 hours ago

          > Traditional search (at least on the web) is dying.

          That's not my experience at all. While there are scammy sites, using search engines as an index instead of an oracle still yields useful results. It only requires learning the keywords, which you can do by reading the relevant materials.

        • rubyfan 10 hours ago

          AI will make the problem of low quality, fake, fraudulent and arbitrage content way worse. I highly doubt it will improve searching for quality content at all.

        • layer8 10 hours ago

          Without a usable web search index, AI will be in trouble eventually as well. There is no substitute for it.

        • ponector 11 hours ago

          >> No one, including Google, knows what to do about it

          I'm sure they can. But they have no incentive. Try to Google an item, and it will show you a perfect match of sponsored ads and some other not-so-relevant non-sponsored results

        • fmos 4 hours ago

          Kagi has fixed traditional search for me.

        • lokar 12 hours ago

          It took the scam/spam sites a few years to catch up to Google search. Just wait a bit, equilibrium will return.

        • AtlasBarfed 11 hours ago

          There's no way the search AI will beat out the spamgen AI.

          Tailoring/retraining the main search AI will be so much more expensive than retraining the special-purpose spam AIs.

        • AnimalMuppet 12 hours ago

          But it can't save the day.

          The problem with Google search is that it indexes all the web, and there's (as you say) a rising tide of scam and spam sites.

          The problem with AI is that it scoops up all the web as training data, and there's a rising tide of scam and spam sites.

        • petre 12 hours ago

          AI will generate even more spam and scam sites more trivially.

          • ses1984 7 hours ago

            What do you mean “will”, we are a few years past that point.

        • quickthrowman 11 hours ago

          Google could fix the problem if they wanted to, but it’s not in their interests to fix it since the spam sites generally buy ads from Google and/or display Google ads on their spam websites. Google wants to maximize their income, so..

        • cyanydeez 12 hours ago

          If only Google were trying to solve search rather than shareholder value.

        • akoboldfrying 12 hours ago

          >The entire edifice is drowning under a rapidly rising tide of spam and scam sites.

          You make this claim with such confidence, but what is it based on?

          There have always been hordes of spam and scam websites. Can you point to anything that actually indicates that the ratio is now getting worse?

          • chongli 11 hours ago

            > There have always been hordes of spam and scam websites. Can you point to anything that actually indicates that the ratio is now getting worse?

            No, there haven't always been hordes of spam and scam websites. I remember the web of the 90s. When Google first arrived on the scene every site on the results page was a real site, not a spam/scam site.

            • ShroudedNight 8 hours ago

              That was PageRank flexing its capability. There were lots of sites with reams of honeypot text that caught the other search engines.

        • romwell 12 hours ago

          Narrator: it did not, in fact, save the day.

      • jihadjihad 9 hours ago

        Another frustration I have with these models is that it is yet another crutch and excuse for turning off your brain. I was tagged on a PR a couple days ago where a coworker had added a GIN index to a column in Postgres, courtesy of GPT-4o, of course.

        He couldn't pronounce the name of the extension, apparently not noticing that trgm == trigram, or what that might even be. Copying the output from the LLM and pasting it into a PR didn't result in anything other than him checking off a box, moving a ticket in Jira, and then onto the next thing--not even a pretense of being curious about what any of it all meant. But look at those query times now!

        It's been possible for a while to shut off your brain as a programmer and blindly copy-paste from StackOverflow etc., but the level of enablement that LLMs afford is staggering.

        • tru3_power 8 hours ago

          Out of curiosity- did it work though?

      • gonzobonzo 7 hours ago

        Doesn't this get to one of the fundamental issues though, that many of these frameworks and languages are poorly constructed in the first place? A lot of the times people turn to web searches, Stack Overflow, or AI is because they want to do X, and there's no quick, clear, and intuitive way to do X. I write cheat sheets for opaque parts of various frameworks myself. A lot of them aren't fundamentally difficult once you understand them, but they're constructed in an extremely convoluted way, and there's usually extremely poor documentation explaining how to actually use them.

        In fact, I'd say I use AI more for documentation than I do for code itself, because AI generated documentation is often superior to official documentation.

        In the end, these things shouldn't be necessary (or barely necessary) if we had well constructed languages, frameworks, libraries and documentation, but it appears like it's easier to build AI than to make things non-convoluted in the first place.

      • braiamp 8 hours ago

        > because we've decided we can't be bothered with traditional search

        Traditional search was effectively only Google, and Google figured out that they don't need to improve their tools to make it better, because everyone will continue to use it out of habit (google is a verb!). Traditional search is being abandoned because it isn't good enough for the kinds of searches we need (also, while Google may claim their search is very useful, people rarely search for stuff nowadays, preferring instead to be passively fed content via recommendation algorithms, which also use AI!).

        • dleeftink 7 hours ago

          Algolia, Marginalia, Kagi, Scopus, ConnectedPapers, Lense[0] all stick to more or less traditional search and yield consistently high-quality results. It shouldn't be one or the other, and I think the first one to combine both paradigms in a seamless fashion would be quite successful (it has been tried, I know, but it's still a niche in many cases).

          [0]: https://www.lens.org/lens/search/

      • hawski 10 hours ago

        A human can't be trusted to not make memory safety bugs. At the same time we can trust AI with logic bugs.

        • kelnos 10 hours ago

          Since LLMs are just based on human output, we should trust LLMs (at best) as much as we trust the average human coder. And in reality we should probably trust them less.

      • GaggiX 8 hours ago

        These models are simply much more powerful than a traditional search engine plus Stack Overflow, so many people use them for a reason. A friend of mine who had never tried ChatGPT until very recently managed to solve a problem with GPT-4o that he couldn't find a solution to on Stack Overflow; next time he's probably going to ask the model directly.

      • worik 10 hours ago

        > But the vast majority of AI use that I see is...not that. It's just glorified, very expensive search.

        Since the collapse of Internet search (rose-tinted hindsight - was it ever any good?) I have been using an LLM as my syntax advisor. I pay for my own tokens, and I can say it is astonishingly cheap.

        It is also very good.

      • Dalewyn 9 hours ago

        >We are willing to burn far, far more fuel than necessary because we've decided we can't be bothered with traditional search.

        That's because traditional search fucking sucks balls.

    • rpcope1 12 hours ago

      I don't get it either. People will say all sorts of strange stuff about how it writes the code for them or whatever, but even using the new Claude 3.5 Sonnet or whatever variant of GPT4, the moment I ask it anything that isn't the most basic done-to-death boilerplate, it generates stuff that's wrong, and often subtly wrong. If you're not at least pretty knowledgeable about exactly what it's generating, you'll be stuck trying to troubleshoot bad code, and if you are it's often about as quick to just write it yourself. It's especially bad if you get away from Python, and try to make it do anything else. SQL especially, for whatever reason, I've seen all of the major players generate either stuff that's just junk or will cause problems (things that your run of the mill DBA will catch).

      Honestly, I think it will become a better Intellisense but not much more. I'm a little excited because there's going to be so many people buying into this, generating so much bad code/bad architecture/etc. that will inevitably need someone to fix after the hype dies down and the rug is pulled, that I think there will continue to be employment opportunities.

      • solumunus 12 hours ago

        Supermaven is an incredible intellisense. Most code IS trivial and I barely write trivial code anymore. My imports appear instantly, with high accuracy. I have lots of embedded SQL queries and it’s able to guess the structure of my database very accurately. As I’m writing a query the suggested joins are accurate probably 80% of the time. I’m significantly more productive and having to type much less. If this is as good as it ever gets I’m quite happy. I rarely use AI for non trivial code, but non trivial code is what I want to work on…

        • ta_1138 12 hours ago

          This is all about the tooling most companies choose when building software: things with so much boilerplate that most code is trivial. We can build tools that have far less triviality and more density, where the distance between the code we write and the business logic is very narrow... but then every line of code we write is hard, because it's meaningful, and that feels bad enough to many developers, so we end up with tools where we might not be more productive, but we might feel productive, even though most of that apparent productivity is trivially generated.

          We also have the ceremonial layers of certain forms of corporate architecture, where nothing actually happens, but the steps must exist to match the holy box, box cylinder architecture. Ceremonial input massaging here, ceremonial data transformation over there, duplicated error checking... if it's easy for the LLM to do, maybe we shouldn't be doing it everywhere in the first place.

          • thfuran 11 hours ago

            >but then every line of code we write is hard, because it's meaningful, and that feels bad enough to many developers,

            I don't know that I've ever even met a developer who wants to be writing endless pools of trivial boilerplate instead of meaningful code. Even the people at work who are willing to say they don't want to deal with the ambiguity and high level design stuff and just want to be told what to do pretty clearly don't want endless drudgery.

            • Aeolun 11 hours ago

              That, but boilerplate stuff is also incredibly easy to understand. As compared to high density, high meaning code anyway. I prefer more low density low meaning code as it makes it much easier to reason about any part of the system.

              • wruza 6 hours ago

                So basically it’s a presentation problem.

                We want to control code at the call site, boilerplate helps with that by being locally modifiable.

                We also want to systematize chunks of code so that they don’t flicker around and mess with a reader.

                We've wanted this forever and no one does anything, because anything above simple text completion is traditionally seen as overkill, not the true way, not Unix, etc. All sorts of stubborn arguments.

                This can be solved by simply allowing code trees instead of lines of code (tree vs table). You drop a boilerplate into code marked as “boilerplate ‘foo’ {…}” and edit it as you see fit, which creates a boilerplate-local patch. Then you can instantly see diffs, find, update boilerplates, convert them to and from regular functions, merge best practices from boilerplate libraries, etc. Problem solved.

                It feels like the development itself got collectively stuck in some stupid principles that no one dares to question. Everything that we invent stumbles upon the simple fact that we don’t have any sensible devtime structure, apart from this “file” and “import file” dullness.

          • codr7 6 hours ago

            I think you just nailed the paradox of Go's popularity among developers; why managers like it is obvious.

        • monksy 11 hours ago

          I don't think that's the signal most people are hoping for here.

          When I hear that most code is trivial, I think of this as a language design or a framework related issue making things harder than they should be.

          Throwing AI or code generators at the problem just to claim that they fixed it is just frustrating.

          • throw234234234 10 hours ago

            > When I hear that most code is trivial, I think of this as a language design or a framework related issue making things harder than they should be.

            This was one of my thoughts too. If the pain of using bad frameworks and clunky languages can be mitigated by AI, it seems like the popular but ugly/verbose languages will win out since there's almost no point to better designed languages/framework. I would rather a good language/framework/etc where it is just as easy to just write the code directly. Similar time in implementation to a LLM prompt, but more deterministic.

            If people don't feel the pain of AI slop why move to greener pastures? It almost encourages things to not improve at the code level.

          • int_19h 10 hours ago

            Well, Google did design Go...

      • Kiro 11 hours ago

        Interesting that you believe your subjective experience outweighs the claims of all others who report successfully using LLMs for coding. Wouldn't a more charitable interpretation be that it doesn't fit the stuff you're doing?

        • kelnos 10 hours ago

          Why wouldn't someone's subjective experience outweigh someone else's subjective experience?

          Regardless, I do wonder how accurate those successful reports are. Do people take LLM output, use it verbatim, not notice subtle bugs, and report that as success?

          • Kiro 3 hours ago

            There's a big difference between "I've seen X" and "I've not seen X". The latter does not invalidate the former, unless you believe the person is lying or being delusional.

    • tobyjsullivan 12 hours ago

      I'm not a Google employee but I've heard enough stories to know that a surprising amount of code changes at google are basically updating API interfaces.

      The way google works, the person changing an interface is responsible for updating all dependent code. They create PRs which are then sent to code owners for approval. For lower-level dependencies, this can involve creating thousands of PRs across hundreds of projects.

      Google has had tooling to help with these large-scale refactors for decades, generally taking the form of static analysis tools. However, these would be inherently limited in their capability. Manual PR authoring would still be required in many cases.

      With this background, LLM code gen seems like a natural tool to augment Google's existing process.

      I expect Google is currently executing a wave of newly-unblocked refactoring projects.

      If anyone works/worked at google, feel free to correct me on this.

      • cj 9 hours ago

        Do they have tooling for generating scaffolding for various things (like unit/integration tests)?

        If we’re guessing what code is easiest and largest proportion of codebase to write, my first guess would be test suites. Lots of lines of repetitive code patterns that repeat and AI is decent at dealing with

    • slibhb 12 hours ago

      Most programming is trivial. Lots of non-trivial programming tasks can be broken down into pure, trivial sections. Then, the non-trivial part becomes knowing how the entire system fits together.

      I've been using LLMs for about a month now. It's a nice productivity gain. You do have to read generated code and understand it. Another useful strategy is pasting a buggy function and ask for revisions.

      I think most programmers who claim that LLMs aren't useful are reacting emotionally. They don't want LLMs to be useful because, in their eyes, that would lower the status of programming. This is a silly insecurity: ultimately programmers are useful because they can think formally better than most people. For the forseeable future, there's going to be massive demand for that, and people who can do it will be high status.

      • tonyedgecombe 11 hours ago

        >I think most programmers who claim that LLMs aren't useful are reacting emotionally.

        I don't think that's true. Most programmers I speak to have been keen to try it out and reap some benefits.

        The almost universal experience has been that it works for trivial problems, starts injecting mistakes for harder problems and goes completely off the rails for anything really difficult.

        • theshackleford 8 hours ago

          > I don't think that's true. Most programmers I speak to have been keen to try it out and reap some benefits.

          I’ve been seeing the complete opposite. So it’s out there.

      • gorjusborg 12 hours ago

        > Most programming is trivial

        That's a bold statement, and incorrect, in my opinion.

        At a junior level software development can be about churning out trivial code in a previously defined box. I don't think its fair to call that 'most programming'.

        • BobbyJo 11 hours ago

          Probably overloading of the term "programming" is the issue here. Most "software engineering" is non-programming work. Most programming is not actually typing code.

          Most of the time, when I am typing code, the code I am producing is trivial, however.

        • uh_uh 7 hours ago

          Think of all the menial stuff you must perform regardless of experience level. E.g. you change the return type of a function and now you have to unpack the results slightly differently. Traditional automated tools fail at this. But if you show some examples to Cursor, it quickly catches on to the pattern and starts autocompleting semi-automatically (semi because you still have to put the cursor in the right place, but then you can tab, tab, tab…).
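
          A minimal sketch of the kind of mechanical edit meant here (parsePort is a made-up example, not from any real codebase):

            // Before, parsePort returned a plain number:
            //   const port = parsePort(raw);
            // After the signature change it returns an object with an error flag:
            function parsePort(raw) {
              const n = Number(raw);
              return { value: n, ok: Number.isInteger(n) && n > 0 && n < 65536 };
            }

            // ...so every call site needs the same small unpacking edit:
            const { value: port, ok } = parsePort("8080");
            if (!ok) throw new Error("bad port");
            console.log(port);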

      • r14c 11 hours ago

        From my perspective, writing out the requirements for an AI to produce the code I want is just as easy as writing it myself. There are some types of boilerplate code that I can see being useful to produce with an LLM, but I don't write them often enough to warrant actually setting up the workflow.

        Even with the debugging example, if I just read what I wrote I'll find the bug because I understand the language. For more complex bugs, I'd have to feed the LLM a large fraction of my codebase and at that point we're exceeding the level of understanding these things can have.

        I would be pretty happy to see an AI that can do effective code reviews, but until that point I probably won't bother.

      • johnnyanmac 7 hours ago

        > I think most programmers who claim that LLMs aren't useful are reacting emotionally. They don't want LLMs to be useful because, in their eyes, that would lower the status of programming.

        I think revealing the domain each programmer works in and asking within those domains would reveal obvious trends. I imagine if you work in web development you'll get workable enough AI-generated code, but something like high-performance computing would get slop worse than copying and pasting the first result on Stack Overflow.

        A model is only as good as its training set, and not all types of code are readily indexable.

      • er4hn 11 hours ago

        It's reasonable to say that LLMs are not completely useless. There is also a very valid case to make that LLMs are not good at generating production ready code. I have found asking LLMs to make me Nix flakes to be a very nice way to make use of Nix without learning the Nix language.

        As an example of not being production ready: I recently tried to use ChatGPT-4 to provide me with a script to manage my gmail labels. The APIs for these are all online, I didn't want to read them. ChatGPT-4 gave me a workable PoC that was extremely slow because it was using inefficient APIs. It then lied to me about better APIs existing and I realized that when reading the docs. The "vibes" outcome of this is that it can produce working slop code. For the curious I discuss this in more specific detail at: https://er4hn.info/blog/2024.10.26-gmail-labels/#using-ai-to...

        • Aeolun 11 hours ago

          I find a recurring theme in these kind of comments where people seem to blame their laziness on the tool. The problem is not that the tools are imperfect, it’s that you apparently use them in situations where you expect perfection.

          Does a carpenter blame their hammer when it fails to drive in a screw?

          • er4hn 10 hours ago

            I'd argue that a closer analogy is I bought a laser based measuring device. I point it a distant point and it tells me the distance from the tip of the device to that point. Many people are excited that this tool will replace rulers and measuring tapes because of the ease of use.

            However this laser measuring tool is accurate only within a range. There are a lot of factors that affect its accuracy, like time of day, how you hold it, the material you point it at, etc. Sometimes these accuracy errors are minimal, sometimes they are pretty big. You end up getting a lot of measurements that seem "close enough", but you still need to ask if each one is correct. "Measure twice, cut once" begins to require one measurement with the laser tool and one with the conventional tool when accuracy matters.

            One could have a convoluted analogy where the carpenter has an electric hammer that for some reason has a rounded head that does cause some number of nails to not go in cleanly, but I like my analogy better :)

          • johnnyanmac 7 hours ago

            >Does a carpenter blame their hammer when it fails to drive in a screw?

            That's the exact problem. I have plenty of screwdrivers, but there's so much pressure from people not in carpentry telling me to use this shiny new Swiss army knife contraption. Will it work? Probably, if I'm just screwing in a few screws. Would I readily abandon my set of precision-built, magnetic-tip, etc. screwdrivers for it? Definitely not.

            I'm sure it's great for non-carpenters to have so many tools in so small a space. But I developed skills and tools already. My job isn't just to screw in a few screws a day and call it quits. People wanting to replace me for a quarter the cost for this Swiss army carpenter will quickly see a quality difference and realize why it's not a solution to everything.

            Or in the software sense, maybe they are fine with unlevel shelves and hanging nails in carpet. It's certainly not work I'd find acceptable.

      • adriand 11 hours ago

        > Lots of non-trivial programming tasks can be broken down into pure, trivial sections. Then, the non-trivial part becomes knowing how the entire system fits together.

        I think that’s exactly right. I used to have to create the puzzle pieces and then fit them together. Now, a lot of the time something else makes the piece and I’m just doing the fitting together part. Whether there will come a day when we just need to describe the completed puzzle remains to be seen.

      • boringg 8 hours ago

        Trivial is fine, but as you compound all the triviality the system starts to have a difficult time putting it together. I don't expect it to nail it, but then you have to unwind everything and figure out the issues, so it isn't all gravy - a fair bit of debugging.

      • shinycode 10 hours ago

        It’s always harder to build a mental model of code written by someone else. No matter what, if you trust an LLM on small things, in the long run you’ll trust it for bigger things. And the more code the LLM writes, the harder it is to build this mental construct. In the end it’ll be « it worked on 90% of cases so we trust it ». And who will debug 300 million lines of code written by a machine that no one has read, based on trust?

      • jerb 9 hours ago

        Yes. Productivity tools make programmer time more valuable, not less. This is basic economics. You’re now able to generate more value per hour than before.

        (Or if you’re being paid to waste time, maybe consider coding in assembly?)

        So don’t be afraid. Learn to use the tools. They’re not magic, so stop expecting that. It’s like anything else, good at some things and not others.

      • jolt42 10 hours ago

        They are useful, but so far, I haven't seen LLMs being obviously more useful than stackoverflow. It might generate code closer to what I need than what I find already coded, but it also produces buggier code. Sometimes it will show me a function I wasn't aware of or approach I wouldn't have considered, but I have to balance that with all the other attempts that didn't produce something useful.

      • Reason077 12 hours ago

        A good farmer isn’t likely to complain about getting a new tractor. But it might put a few horses out of work.

      • derefr 11 hours ago

        I would add that a lot of the time when I'm programming, I'm an expert on the problem domain but not the solution domain — that is, I know exactly what the pseudocode to solve my problem should look like; but I'm not necessarily fluent in the particular language and libraries/APIs I happen to have to use, in the particular codebase I'm working on, to operationalize that pseudocode.

        LLMs are great at translating already-rigorously-thought-out pseudocode requirements, into a specific (non-esoteric) programming language, with calls to (popular) libraries/APIs of that language. They might make little mistakes — but so can human developers. If you're good at catching little mistakes, then this can still be faster!

        For a concrete example of what I mean:

        I hardly ever code in JavaScript; I'm mostly a backend developer. But sometimes I want to quickly fix a problem with our frontend that's preventing end-to-end testing; or I want to add a proof-of-concept frontend half to a new backend feature, to demonstrate to the frontend devs by example the way the frontend should be using the new API endpoint.

        Now, I can sit down with a JS syntax + browser-DOM API cheat-sheet, and probably, eventually write correct code that doesn't accidentally e.g. incorrectly reject zero or empty strings because they're "false-y", or incorrectly interpolate the literal string "null" into a template string, or incorrectly try to call Element.setAttribute with a boolean true instead of an empty string (or any of JS's other thousand warts). And I can do that because I have written some JS, and have been bitten by those things, just enough times now to recognize those JS code smells when I see them when reviewing code.

        But just because I can recognize bad JS code, doesn't mean that I can instantly conjure to mind whole blocks of JS code that do everything right and avoid all those pitfalls. I know "the right way" exists, and I've probably even used it before, and I would know it if I saw it... but it's not "on the tip of my tongue" like it would be for languages I'm more familiar with. I'd probably need to look it up, or check-and-test in a REPL, or look at some other code in the codebase to verify how it's done.

        With an LLM, though, I can just tell it the pseudocode (or equivalent code in a language I know better), get an initial attempt at the JS version of it out, immediately see whether it passes the "sniff test"; and if it doesn't, iterate just by pointing out my concerns in plain English — which will either result in code updated to solve the problem, or an explanation of why my concern isn't relevant. (Which, in the latter case, is a learning opportunity — but one to follow up in non-LLM sources.)

        The product of this iteration process is basically the same JS code I would have written myself — the same code I wanted to write myself, but didn't remember exactly "how it went." But I didn't have to spend any time dredging my memory for "how it went." The LLM handled that part.

        I would liken this to the difference between asking someone who knows anatomy but only ever does sculpture, to draw (rather than sculpt) someone's face; vs sitting the sculptor in front of a professional illustrator (who also knows anatomy), and having the sculptor describe the person's face to the illustrator in anatomical terms, with the sketch being iteratively improved through conversation and observation. The illustrator won't perfectly understand the requirements of the sculptor immediately — but the illustrator is still a lot more fluent in the medium than the sculptor is; and both parties have all the required knowledge of the domain (anatomy) to communicate efficiently about the sculptor's vision. So it still goes faster!

      • coliveira 8 hours ago

        > people who can do it will be high status

        They don't have high status even today, imagine in a world where they will be seen as just reviewers for AI code...

        • uh_uh 7 hours ago

          > They don't have high status even today

          Try putting on a dating website that you work at Google vs. that you work in agriculture, and tell us which yields more dates.

          • johnnyanmac 7 hours ago

            Does it matter? I imagine the tanned, shirtless farmer would get more hits than the pasty million-dollar-salary Googler anyway. (No offense to Googlers.)

            With so many hits, it's about hitting all the checkmarks instead of minmaxing on one check.

            • uh_uh 7 hours ago

              You can't just arbitrarily change (confounding) variables like that for a proper experiment. All other factors (including physique) must remain the same while you change one thing only: occupation.

              • johnnyanmac 6 hours ago

                "confounding" implies occupancy doesn't influce other factors of your life. I'm sure everyone wants the supermodel millionaire genius who's perfectly in touch with the feelings of their parter. If that was the norm then sure, farmers would be in trouble.

                My comment was more a critique on online dating culture and the values it weighs compared to in person meetups.

                • uh_uh 5 hours ago

                  I think it’s possible to create 2 dating profiles with the same pictures and change occupation only. It doesn’t have to be real to measure the impact of occupation.

    • wvenable 12 hours ago

      > Or do they have 25% trivial code?

      We all have probably 25% or more trivial code. AI is great for that. I have X (table structure, model, data, etc) and I want to make Y with it. A lot of code is pretty much mindless shuffling data around.

      The other thing it's good for is anything pretty standard. If I'm using a new technology and I just want to get started with whatever the best practice is, it's going to do that.

      If I ever have to do PowerShell (I hate PowerShell), I can get AI to generate pretty much whatever I want and then I'm smart enough to fix any issues. But I really don't like starting from nothing in a tech I hate.

      • lambdasquirrel 12 hours ago

        I’ve already had one job interview where the applicant seemed broadly knowledgeable about everything we asked them during lead-in questions before actual debugging. Then when they had to actually dig deeper or demonstrate understanding while solving some problem, they fell short.

        I’m pretty sure they weren’t the first and there’ve been others we didn’t know about. So now I don’t ask lead-in questions anymore. Surprisingly, it doesn’t seem to make much of a difference and I don’t need to get burned again.

      • randomNumber7 12 hours ago

        Yes but then it would be more logical to say "AI makes our devs 25% more efficient". This is not what he said, but imo you are obviously right.

        • wvenable 12 hours ago

          Not necessarily. If 25% of the code is written by AI but that code isn't very interesting or difficult, it might not be making the devs 25% more efficient. It could even possibly be more but, either way, these are different metrics.

        • johannes1234321 12 hours ago

          The benefit doesn't translate 1:1. The generated code has to be read and verified and might require small adaptations. (Partially that can be done by AI as well.)

          But for me it massively improved all the boilerplate generic work. A lot of those things which are just annoying work, but not interesting.

          Then I can focus on the bigger things, on the important parts.

    • groestl 12 hours ago

      > do they have 25% trivial code?

      From what I've seen on Google Cloud, both as a user and from leaked source code, 25% of their code is probably just packing and unpacking of protobufs.
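
      For anyone who hasn't lived it, "packing and unpacking" mostly means field-by-field translation between a wire message and an internal type. A rough, hypothetical sketch of the shape of that work (invented type and field names, not real Google code), which is exactly the kind of thing a completion model is happy to finish:

        // Hypothetical example of proto-style "unpack/pack" boilerplate.
        // UserProto stands in for a generated wire message; User is the
        // internal type a service actually works with.
        package main

        import "fmt"

        type UserProto struct {
            Id          int64
            DisplayName string
            EmailAddr   string
        }

        type User struct {
            ID    int64
            Name  string
            Email string
        }

        // "Unpacking": wire message -> internal type.
        func userFromProto(p *UserProto) User {
            return User{ID: p.Id, Name: p.DisplayName, Email: p.EmailAddr}
        }

        // "Packing": internal type -> wire message.
        func userToProto(u User) *UserProto {
            return &UserProto{Id: u.ID, DisplayName: u.Name, EmailAddr: u.Email}
        }

        func main() {
            u := userFromProto(&UserProto{Id: 1, DisplayName: "Ada", EmailAddr: "ada@example.com"})
            fmt.Println(userToProto(u).DisplayName) // prints "Ada"
        }

      Multiply that by every RPC boundary and nested message, and 25% by volume stops sounding surprising.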

      • hughesjj 10 hours ago

        I'd bet at least 25% of the code attributed to me in gitfarm at Amazon was generated by octane and/or bones.

        God I miss that. Thanks to the other person on HN who introduced me to projen - Yeoman wasn't cutting it.

        These days I write a surprising amount of shell script and awk with LLMs. I review and adapt it, of course, but for short snippets of low context scripting it's been a huge time saver. I'm talking like 3-4, up to 20 lines of POSIX shell.

        Idk. Some day I'll actually learn AWK, and while I've gotten decent with POSIX shell (and bash), it's definitely been more monkey see monkey do than me going over all the libraries and reference docs like I did for python and the cpp FAQ.

    • akira2501 13 hours ago

      > isn't this announcement a terrible indictment

      Of obviously flawed corporate structures. This CEO has no particular programming expertise and most of his company's profits do not seem to flow from this activity. I strongly doubt he has a grip on the actual facts here and suspect he is uncritically repeating what was told to him in a meeting.

      He should, given his position, have been the very _first_ person to ask the questions you've posed here.

    • nimithryn 8 hours ago

      An example:

      I'm looking for a new job, so I've been grinding leetcode (oof). I'm an experienced engineer and have worked at multiple FAANGs, so I'm pretty good at leetcode.

      Today I solved a leetcode problem 95% of the way to completion, but there was a subtle bug (maybe 10% of the test cases failing). I decided to see if Claude could help debug the code.

      I put the problem and the code into Claude and asked it to debug. Over the course of the conversation, Claude managed to provide 5 or 6 totally plausible but also completely wrong "fixes". Luckily, I am experienced enough at leetcode, and leetcode problems are simple enough, that I could easily tell that Claude was mistaken. Note that I am also very experienced with prompt engineering, as I ran a startup that used prompt engineering very heavily. Maybe it's a skill issue (my company did fail, hence why I need a job), but somehow I doubt it.

      Eventually, I found the bug on my own, without Claude's help. But leetcode problems are super simple, with known answers, and probably mostly in the training set! I can't imagine writing a big system and using an LLM heavily.

      Similarly, the other day I was trying to learn about e-graphs (the data structure). I went to Claude for help. I noticed that the more I used Claude, the more confused I became. I found other sources, and as it turns out, Claude was subtly wrong about e-graphs, an uncommon but reasonably well-researched data structure! Once again, it's lucky I was able to recognize that something was up. If the problem wasn't limited in scope, I'd have been totally lost!

      I use LLMs to help me code. I'm pro new technology. But when I see people bragging on Twitter about their fully automated coding solutions, or coding complex systems, or using LLMs for medical records or law or military or other highly critical domains, I seriously question their wisdom and experience.

    • bluerooibos 11 hours ago

      At what point are people going to stop shitting on the code that Copilot or other LLM tools generate?

      > how trivial the problems they solve are

      A single line of code IS trivial. Simple code is good code. If I write the first 3 lines of a complex method and I let Copilot complete the 4th, that's 25% of my code written by an LLM.

      These tools have exploded in popularity for good reason. If they were no good, people wouldn't be using them.

      I can only assume people making such comments don't actually code and use these tools on a daily basis. Either that, or you haven't figured out the knack of how to make them work properly for you.

      • thegrim33 9 hours ago

        > These tools have exploded in popularity for good reason. If they were no good, people wouldn't be using them.

        You're saying anything that's ever been popular is popular for a good reason? You can't think of counter examples that disprove this?

        You're saying anything that people decide to do is good, or else people wouldn't do it? People never act irrationally? People never blindly act on trends? People never sacrifice long-term results for short-term gain? You can't come up with any counter examples?

        • nijave 7 hours ago

          remembers Bitcoin et al

      • ghosty141 10 hours ago

        I haven't seen anybody use them and be more productive.

        With C++ my experience is that the results are completely worthless. It saves you from writing a few keywords, but nothing that really helps in a big way.

        Yes, Copilot CAN work, for example when writing some JS or filter functions, but in my job these trivial snippets are rather uncommon.

        I'd genuinely love to see some resources that show its usefulness that aren't just PR bs.

    • fuzzy2 10 hours ago

      I'll just answer here, but this isn't about this post in particular. It's about all of them. I've been struggling with a team of junior devs for the past months. How would I describe the experience? It's easy: just take any of these posts, replace "AI" with "junior dev", done.

      Except of course AI at least can do spelling. (Or at least I haven't encountered a problem in that regard.)

      I'm highly skeptical regarding LLM-assisted development. But I must admit: it works. If paired with an experienced senior developer. IMHO it must not be used otherwise.

      • palata 10 hours ago

        Isn't the whole point of hiring a junior dev that they will learn and become senior devs eventually?

        • johnnyanmac 7 hours ago

          Your mindset is sadly a decade out of touch. Companies long since shifted to a churn mentality. They not only slashed retention perks, they actively expect people to move around every few years. So they don't bother stopping them or counter-offering unless they are a truly exceptional person.

      • alfiedotwtf 10 hours ago

        > replace "AI" with "junior dev", done.

        Damn, that’s a good way of putting it. But I’ll go one further:

        replace "AI" with "junior dev who doesn’t like reading documentation or googling how things work so instead confidently types away while guessing the syntax and API so it kind of looks right”

        • hughesjj 10 hours ago

          I've been saying it's like an intern who has an incredible breadth of knowledge but very little depth, is excessively over confident in their own abilities given the error rates they commit, and is anxious to the point they'll straight up lie to you rather than admit a mistake.

          Currently, they don't learn skills as fast as a motivated intern. A stellar intern can go from no idea to "makes relevant contributions to our product with significant independence and low error rate" (hi Matt if you ever see this) in 3 months. LLMs, to my understanding, take significantly more attention from super smart people working long hours and an army of mechanical Turks, but won't be able to independently implement a feature and will still have a higher error rate in the same 3 months.

          It's still super impressive what LLMs can do, but that same intern is going to keep growing at that faster rate in skills and competency as they go from jr->mid->sr. Sure the intern won't have as large of a competency pool, and takes longer to respond to any given question, but the scope of what they can implement is so much greater.

    • skissane 12 hours ago

      > To my experience, AIs can generate perfectly good code relatively easy things, the kind you might as well copy&paste from stackoverflow, and they'll very confidently generate subtly wrong code for anything that's non-trivial for an experienced programmer to write. How do people deal with this?

      Well, just in the last 24 hours, ChatGPT gave me solutions to some relatively complex problems that turned out to be significantly wrong.

      Did that mean it was a complete waste of my time? I’m not sure. Its broken code gave me a starting point for tinkering and exploring and trying to understand why it wasn’t working (even if superficially it looked like it should). I’m not convinced I lost anything by trying its suggestions. And I learned some things in the process (e.g. asyncio doesn’t play well together with Flask-Sock)

    • aiforecastthway 10 hours ago

      I decided to go into programming instead of becoming an Engineer because most Engineering jobs seemed systematic and boring. (Software Engineers weren't really a thing at the time.)

      For most of my career, Software Engineering was a misnomer. The field was too young, and the tools used changed too quickly, for an appreciable amount of the work to be systematic and boring enough to consider it an Engineering discipline.

      I think we're now at the point where Software Engineering is... actually Engineering. Particularly in the case of large established companies that take software seriously, like Google (as opposed to e.g. a bank).

      Call it "trivial" and "boring" all you want, but at some point a road is just a road, and a train track is just a train track, and if it's not "trivial and boring" then you've probably fucked up pretty badly.

      • javaunsafe2019 10 hours ago

        Since when is engineering boring? Strange ideas and claims you're making.

        I’m an engineer who has been writing code for 20 years and it's far from trivial. Maybe web dev for a simple webshop is. Elsewhere, software often has special requirements; be they technical or domain-specific, both make the process complex and not simple IMHO.

        • aiforecastthway 10 hours ago

          Boring is the opposite of exciting/dynamic.

          Not all engineering is boring. Also, boring is not bad.

          A lot of my career has been spent working to make software boring. To the extent that I've helped contribute to the status quo, where we can build certain types of software in a relatively secure fashion and on relatively predictable timelines, I am proud to have made the world more boring!

          (Also, complexity can be extraordinarily boring. Some of the most complex things are also the most boring. Nothing more boring than a set of business rules that has an irreducible complexity coming in at 5,211 lines of if-else blocks wrapped in two while loops! Give me a simple set of partial differential equations any day -- much more exciting to work with those! If you're the type of person who enjoys reading tax code, then we just have different definitions of boring; and if you're the type of person who doesn't think tax code is complex, then I'm just a dummy compared to you :))

          But e.g. in the early aughts, doing structural engineering work for residential new-build projects was certainly less engaging and exciting work than building websites.

          Most engineering works aims for repeatable and predictable outcomes. That's a good thing, and it's not easy to achieve! But if Software has reached the point where the process of building certain types of software is "repeatable and predictable", and if Google needs a lot of that type of software, then if the main criticism of AI code assistants is "it's only good for repeatable and predictable", well, then the criticism isn't exactly the indictment that skeptics think it is.

          There is nothing wrong with boring in the sense I'm using it. Boring can be tremendously intellectually demanding. Also, predictable and repeatable processes are incredibly important if you want quality work at scale. Engineering is a good thing. Maturing as a field is a good thing.

          But if we're maturing from "wild west everything is a greenfield project" to "70% of things are pretty systematic and repeatable" then that says something about the criticism of AI coding assistants as being only good for the systematic and repeatable stuff, right?

          Also: the AI coding assistant paradigm is coming for structural/mechanical/civil engineering next, and in a big way!

          • sally_glance 9 hours ago

            I was totally with you until "70% of things are pretty systematic and repeatable". This has not been my experience, and I think you acknowledged it yourself when you said "Google (as opposed to e.g. a bank)" - there are many more banks in the world than Googles. The main challenge will be transitioning all those "banks" into "Googles" and further still. They have 10y+ codebases written in 5 months by a single genius engineer (who later found his luck elsewhere), then hammered by multiple years of changing maintainers. That's the real "70% of things" :D

            • aiforecastthway 7 hours ago

              No, I think we agree! Google SWE roles will be automated faster than SWE roles in the financial sector :)

    • JohnMakin 12 hours ago

      > To my experience, AIs can generate perfectly good code relatively easy things, the kind you might as well copy&paste from stackoverflow,

      This, imho, is what is happening. In the olden days, when StackOverflow + Google used to magically find the exact problem from the exact domain you needed every time - even then you'd often need to sift through the answers (top voted one was increasingly not what you needed) to find what you needed, then modify it further to precisely fit whatever you were doing. This worked fine for me for a long time until search rendered itself worthless and the overall answer quality of StackOverflow has gone down (imo). So, we are here, essentially doing the exact same thing in a much more expensive way, as you said.

      Regarding future employment opportunities - this rot is already happening and hires are coming from it, at least from what I'm seeing in my own domain.

    • eco 12 hours ago

      I'd be terribly scared to use it in a language that isn't statically typed with many, many compile time error checks.

      Unless you're the type of programmer that is writing sabots all day (connecting round pegs into square holes between two data sources) you've got to be very critical of what these things are spitting out.

      • int_19h 9 hours ago

        I can't help but think that Go might be one of the better languages for AI to target - statically typed, verbose with a lot of repeated scaffolding, yet generally not that easy to shoot yourself in the foot. Which might explain why this is a thing at Google specifically.
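
        To make "repeated scaffolding" concrete, here's a minimal made-up sketch (invented config struct and file name): nearly every call is followed by the same three-line error check, which is exactly the kind of pattern a completion model can fill in reliably.

          // Minimal sketch of everyday Go: the error-handling scaffolding
          // repeats after almost every call that can fail.
          package main

          import (
              "encoding/json"
              "fmt"
              "os"
          )

          type Config struct {
              Addr string `json:"addr"`
          }

          func loadConfig(path string) (*Config, error) {
              data, err := os.ReadFile(path)
              if err != nil {
                  return nil, fmt.Errorf("reading %s: %w", path, err)
              }
              var cfg Config
              if err := json.Unmarshal(data, &cfg); err != nil {
                  return nil, fmt.Errorf("parsing %s: %w", path, err)
              }
              return &cfg, nil
          }

          func main() {
              cfg, err := loadConfig("config.json")
              if err != nil {
                  fmt.Fprintln(os.Stderr, err)
                  os.Exit(1)
              }
              fmt.Println("listening on", cfg.Addr)
          }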

      • randomNumber7 12 hours ago

        It is way more scary to use it for C or C++ than Python imo.

      • cybrox 12 hours ago

        If you use it as advanced IntelliSense/auto-complete, it's not any worse than with typed languages.

        If you just let it generate and run the code... yeah, probably, since you won't catch the issues at compile time.

    • grepLeigh 11 hours ago

      I have a whole "chop wood, carry water" speech born from leading corporate software teams. A lot of work at a company of sufficient size boils down to keeping up with software entropy while also chipping away at some initiative that rolls up to an OKR. It can be such a demotivating experience for the type of smart, passionate people that FANNGs like to hire.

      There's even a buzzword for it: KTLO (keep the lights on). You don't want to be spending 100% of your time on KTLO work, but it's unrealistic to expect to do done of it. Most software engineers would gladly outsource this type of scutwork.

      • girvo 6 hours ago

        > KTLO (keep the lights on)

        Some places also call this "RTB" for "run the business" type work. Nothing but respect for the engineers who enjoy that kind of approach, I work with several!

    • geodel 6 hours ago

      > Like, isn't this announcement a terrible indictment of how inexperienced their engineers are..

      Well, Rob Pike said the same thing about experience and that seemed to piss a lot of people off endlessly.

      However, I don't see it as an indictment. It just seems very reasonable to me. In fact, 25% seems to be on the lower end. Amazon seems to have thousands of software engineers who are doing API-calling-API-calling-API kind of crap. Now their annual income might be more than my lifetime earnings. But the idea that all these highly paid engineers are doing highly complex work that needs high skill seems to be a myth that is useful for boosting the egos of engineers and their employers alike.

    • dmurray 10 hours ago

      > Or do they have 25% trivial code?

      Surely yes.

      I (not at Google) rarely use the LLM for anything more than two lines at a time, but it writes/autocompletes 25% of my code no problem.

      I believe Google have character-level telemetry for measuring things like this, so they can easily count it in a way that can be called "writing 25% of the code".

      Having plenty of "trivial code" isn't an indictment of the organisation. Every codebase has parts that are straightforward.

    • pjmorris 8 hours ago

      > Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?

      Or maybe there's a KPI around lines of code or commits.

    • hifromwork 13 hours ago

      25% trivial code sounds like a reasonable guess.

      • fzysingularity 13 hours ago

        This seems reasonable - but I'm interpreting it as meaning that most junior-level coding work will end and be replaced by AI.

        • mrguyorama 12 hours ago

          And the non-junior developers will then just magically appear from the aether! With 10 years of experience in a four-year-old stack.

    • asdfman123 9 hours ago

      No, AI is generating a quarter of all characters. It's an autocomplete engine. You press tab, it finishes the line. Doesn't do any heavy lifting at all.

      Source: I work there, see my previous comment.

    • ants_everywhere 11 hours ago

      Google's internal codebase is nicer and more structured than the average open source code base.

      Their internal AI tools are presumably trained on their code, and it wouldn't surprise me if the AI is capable of much more internally than public coding AIs are.

    • djvuvtgcuehb 10 hours ago

      A better analogy is a self driving car where you need to keep your hands on the wheel in case something goes wrong.

      For the most part, it drives itself.

      Yes, the majority of my code is trivial. But I've also had ai iterate on some very non trivial work including writing the test suite.

      It's basically autocomplete on steroids that predicts your next change in the file, not just the next change on the line.

      The copy paste from stack overflow trope is a bit weird, I haven't done that in ten years and I don't think the code it produces is that low quality either. Copy paste from an open source repo on GitHub maybe?

    • sangnoir 12 hours ago

      > Does Google now have 25% subtly wrong code?

      How do you quantify "new code" - is it by lines of code or number of PRs/changesets generated? I can easily see it being the latter - if an AI workflow suggests 1 naming-change/cleanup commit to your PR made of 3 other human-authored commits, has it authored 25% of code? Arguably, yes - but it's trivial code that ought to be reviewed by humans. Dependabot is responsible for a good chunk of PRs already.

      Having a monorepo brings plenty of opportunities for automation when refactoring - whether its AI, AST manipulation or even good old grep. The trick is not to merge the code directly, but have humans in the loop to approve, or take-over and correct the code first.

    • afavour 11 hours ago

      > Or do they have 25% trivial code?

      If anything that's probably an underestimate. Not to downplay the complexity in much of what Google does but I'm sure they also do an absolute ton of tedious, boring CRUD operations that an AI could write.

    • manquer 8 hours ago

      If their sales and stock price depend on saying that the shiny new thing is changing the world, then they have to say so, and say how it is changing their own world.

      It is not Netflix or Airbnb or Stripe etc. making this claim; Google managers have a vested interest in it.

      If this metric were meaningful, one of two things should have happened: Google should have fired 25% of its developers, or built 25% more product.

      Either of those would be visible in their financial reporting, and neither has happened.

      A metric like this depends on how you count, which is easily gamed and can be made to show any percentage between 0 and 99 you want. Off the top of my head:

      - I could count all AI-generated code used for training as new code

      - consider compiler output to assembly as AI code by adding some meaningless AI step to it

      - count code generated from boilerplate, with the boilerplate perhaps itself now generated by an LLM

      - mix autocomplete with LLM prompts, and so on

      The number only needs to be believable. 25 is believable now - it is not true, but you would believe it. Over 50 has psychological significance and makes for bad PR about machines replacing human jobs; less than 10 is bad for AI sales. 25 works - all the commenters in this thread are testament to that.

    • fsckboy 11 hours ago

      > Does Google now have 25% subtly wrong code?

      maybe the ai generates 100% of the company's new code, and then by the time the programmers have fixed it, only 25% is left of the AI's ship of Theseus

    • signa11 10 hours ago

      > Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?

      There is a 3rd possibility as well: having spent a huge chunk of change on these techniques, why not overhype it (without outright lying about it) and hope to somewhat recoup the cost from the unsuspecting masses?

    • sally_glance 9 hours ago

      I guess the obvious response would be - yes, they have _at least_ 25% trivial code (as any other enterprise), and yes, they should have lots of engineers 'babysitting' (aka generating training data). So in another year or two there will be no manpower at all needed for the trivial tasks.

    • herval 8 hours ago

      In my experience, that was always the case with gpt3.5, most times the case with gpt4, some times the case with the latest sonnet. It’s getting better FAST, and the kind of code they can handle is increasing fast too

    • airstrike 13 hours ago

      By definition, "trivial" code should make up a significant portion of any code base, so perhaps the 25% is precisely the bit that is trivial and easily automated.

      • Smaug123 12 hours ago

        I don't think the word "definition" means what you think it means!

    • Cthulhu_ 11 hours ago

      You're quick to jump to the assertion that AI only generates SO-style utility code to do X, but it can also be used to generate boring mapping code (e.g. to/from SQL datasets). I heard one ex-Google dev say that most of his job was fiddling with Protobuf definitions and payloads.

    • cybrox 12 hours ago

      Depends if they include test code in this metric. I have found AI most valuable in generating test code. I usually want to keep tests as simple as possible, so I prefer some repetition over abstraction to make sure there are no issues with the test logic itself. AI makes this somewhat verbose process very easy and efficient.
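
      As a hypothetical example of the style I mean (invented function, made-up numbers): two flat, repetitive tests instead of one clever table, so each case reads on its own.

        // price.go would hold the function under test; the tests would sit
        // in price_test.go. Everything here is invented for illustration.
        package price

        import "testing"

        // Discount applies a percentage discount to a price in cents,
        // flooring the result at zero.
        func Discount(cents, percent int) int {
            out := cents - cents*percent/100
            if out < 0 {
                return 0
            }
            return out
        }

        func TestDiscountTenPercent(t *testing.T) {
            got := Discount(1000, 10)
            if got != 900 {
                t.Errorf("Discount(1000, 10) = %d, want 900", got)
            }
        }

        func TestDiscountOverHundredPercentFloorsAtZero(t *testing.T) {
            got := Discount(1000, 150)
            if got != 0 {
                t.Errorf("Discount(1000, 150) = %d, want 0", got)
            }
        }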

    • skeeter2020 12 hours ago

      trivial code could very easily include the vast majority of most apps we're building these days. Most of it's just glue, and AI can probably stitch together a bunch of API calls and some UI as well as a human. It could also be a lot of non-product code, tooling, one-time things, etc.

    • aorloff 12 hours ago

      It's been a while since I was really fully in the trenches, but not that long.

      How people deal with this is they start by writing the test case.

      Once they have that, debugging that 25% comes relatively easily, and after that it's basically packaging up the PR.

    • andyjohnson0 13 hours ago

      I suspect that a lot of the hard, google-scale stuff has already been done and packaged as an internal service or library - and just gets re-used. So the AIs are probably churning out new settings dialogs and the like.

    • nwellinghoff 13 hours ago

      They probably have AI that scans existing human-written code and auto-generates patches and fixes to improve performance or security. The 25% is just a top-level stat with no real meaning without context.

    • jjtheblunt 13 hours ago

      Maybe the trick is to hide vetted correct code, of whatever origin, behind function calls for documented functions, thereby iteratively simplifying the work a later-trained LLM would need to do?

    • rh2323o4jl234 6 hours ago

      > Does Google now have 25% subtly wrong code?

      I think you underestimate the amount of boiler-plate code that a typical job at Google requires. I found it soul-crushingly boring (though their pay is insane).

    • notyourwork 12 hours ago

      To your point, I don't buy the truth of the statement. I work in big tech and am convinced that 25% of the code being written is not coming from AI.

    • ZiiS 11 hours ago

      Yes 25% of code is trivial; certainly for companies like Google that have always been a bit NIH.

    • tmoravec 13 hours ago

      Does the figure include unit tests?

    • ithkuil 12 hours ago

      Or perhaps it means that, even for excellent engineers and complicated problems, a quarter of the code one writes is stupid, almost copy-pasteable boilerplate, which is now an excellent target for the magic lArge text Interpolator.

    • Kiro 11 hours ago

      You're doing circular reasoning based on your initial concern actually being a problem in practice. In my experience it's not, which makes all your other speculations inherently incorrect.

    • TacticalCoder 9 hours ago

      > and they'll very confidently generate subtly wrong code for anything that's non-trivial for an experienced programmer to write

      Thankfully I don't find it subtle but plain wrong for anything but trivial stuff. I use it (and pay for an AI subscription) for things where a false positive won't ruin the day, like parameter validation.

      But for anything advanced, it's pretty hopeless.

      I've talked with lawyers: same thing. With doctors: same thing.

      Which ain't no surprise, seeing how these things work.

      > Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?

      Probably lots of highly repetitive boilerplate stuff everywhere. Which in itself is quite horrifying if you think about it.

    • dyauspitr 10 hours ago

      This subtly wrong thing happens maybe 10% of the time in my experience and asking it to generate unit tests or writing your own ahead of time almost completely eliminates it.
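
      For example (hypothetical helper, not from any real codebase), this is the kind of test I'd write before asking for the implementation; subtly-wrong versions usually die on the exact-multiple or empty case.

        // The test gets written first; whatever the model produces for
        // PageCount then has to survive the edge cases below. Names and
        // numbers are invented for illustration.
        package paging

        import "testing"

        // PageCount reports how many pages are needed to show n items at
        // pageSize items per page.
        func PageCount(n, pageSize int) int {
            if n <= 0 {
                return 0
            }
            return (n + pageSize - 1) / pageSize
        }

        func TestPageCount(t *testing.T) {
            cases := []struct{ n, pageSize, want int }{
                {0, 10, 0},  // nothing to show
                {1, 10, 1},  // partial page
                {10, 10, 1}, // exact multiple: classic off-by-one trap
                {11, 10, 2}, // one item spills onto a new page
            }
            for _, c := range cases {
                if got := PageCount(c.n, c.pageSize); got != c.want {
                    t.Errorf("PageCount(%d, %d) = %d, want %d", c.n, c.pageSize, got, c.want)
                }
            }
        }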

    • uoaei 10 hours ago

      I've suspected for a while now that the people who find value in AI-generated code don't actually have hard problems to solve. I wonder how else they might justify their salary.

    • vkou 11 hours ago

      How would you react to a tech firm that in 2018, proudly announced that 25% of their code was generated by IntelliJ/Resharper/Visual Studio's codegen and autocomplete and refactoring tools?

    • jajko 13 hours ago

      In Eclipse I can generate POJO classes or their accessor methods. I can let Maven build entire packages from, say, XSDs (I know I am talking about old boring tech, just giving an example). I can copy&paste half the code (if not more) from Stack Overflow.

      Now replace all this and much more with 'AI'. If they said AI helped them increase, say, ad effectiveness by 3-5%, I'd start paying attention.

    • Nasrudith 13 hours ago

      I wouldn't call it an indictment necessarily, because so much depends on circumstances. They can't all be "deep problems" in the real world. Projects tend to have two components: "deep" work, which is difficult, requires high skill, and cannot be made up for by masses of inexperienced people; and "shallow" work, where being skilled doesn't really help, or doesn't help much compared to throwing more bodies at the problem. To use an example, it is like advanced accounting vs just counting up sales receipts.

      Even if their engineers were inexperienced, that wouldn't be an indictment in itself so long as they had a sufficient amount of shallow work. Using all experienced engineers to do shallow work is just inefficient, like having brain surgeons remove bunions. Automation is basically a way to turn deep work into a producer of "free" shallow work.

      That said, the really impressive thing with code isn't its creation but the ability to losslessly delete code while maintaining or improving functionality.

    • llm_trw 8 hours ago

      Or alternatively you don't know how to use AI to help you code and are in the 2020s equivalent of the 'Why do I need google when I have the yellow pages?' phase a lot of adults went through in the 2000s.

      This is not a bad thing since you can improve, but constantly dismissing something that a lot of people are finding an amazing productivity boost should give you some pause.

      • johnnyanmac 7 hours ago

        It's like blockchain right now. I'm sure there is some killer feature that can justify its problem space.

        But as of now the field is full of swamps. Of grifters, of people with a solution looking for a problem. Of outright scams of questionable legality being challenged as we speak.

        I'll wait until the swamps work themselves out before evaluating an LLM workflow.

        • llm_trw 5 hours ago

          Blockchain was always a solution looking for a problem.

          LLMs are being used right now by a lot of people, myself included, to do tasks which we would have never bothered with before.

          Again, if you don't know how to use them you can learn.

          • johnnyanmac 4 hours ago

            And the same was said during the last fad, when blockchain was all investors wanted to hear about ("Big Data" before that, I suppose). It's all a pattern.

            It's a legal nightmare in my domain as of now, so I'll wait for the Sam Breaker-Friends to be weeded out. If it really lives up to the hype, it won't be going anywhere in 5 years.

  • nine_zeros a day ago

    Writing more code means more needs to be maintained and they are cleverly hiding that fact. Software is a lot more like complex plumbing than people want to admit:

    More lines == more shit to maintain. Complex lines == the shit is unmanageable.

    But wall street investors love simplistic narratives such as More X == More revenue. So here we are. Pretty clever marketing imo.

  • throwaway290 6 hours ago

    "More than a quarter of our code is created by autocomplete!"

    That's not that much...

  • evbogue a day ago

    I'd be turning off the autocomplete in my IDE if I was at Google. Seems to double as a keylogger.

  • AI_beffr 7 hours ago

    I like how people say that AI can only write "trivial" code well or without mistakes. But what about from the point of view of the AI? Writing "trivial" code is probably almost exactly as much of a challenge as writing the most complex code a human could ever write. The scales are not the same. Don't allow yourself to feel so safe...

  • jrockway a day ago

    When I was there, way more than 25% of the code was copying one proto into another proto, or so people complained. What sort of memes are people making now that this task has been automated?

    • hn_throwaway_99 a day ago

      I am very interested in how this 25% number is calculated, and whether it's a lot of boilerplate that in the past would have just been big copy-paste jobs, like a lot of protobuf work. Would be curious if any Googlers could comment.

      Not that I'm really discounting the value of AI here. For example, I've found a ton of value and saved time getting AI to write CDKTF (basically, Terraform in Typescript) config scripts for me. I don't write Terraform that often, there are a ton of options I always forget, etc. So asking ChatGPT to write a Terraform config for, say, a new scheduled task for example saves me from a lot of manual lookup.

      But at the same time, the AI isn't really writing the complicated logic pieces for me. I think that comes down to the fact that when I do need to write complicated logic, I'm a decent enough programmer that it's probably faster for me to write it out in a high-level programming language than write it in English first.

    • dietr1ch a day ago

      I miss old memegen, but it got ruined by HR :/

      • rcarmo a day ago

        I am reliably told that it is alive and well, even if it’s changed a bit.

        • anon1243 12 hours ago

          Memegen is there but unrecognizable now. A dedicated moderator team deletes memes, locks comments, bans people for mentioning "killing a process" (threatening language!) and contacts their managers.

          • dietr1ch 12 hours ago

            Yup, I simply stopped using it, which means they won.

  • kev009 a day ago

    I would hope a CEO, especially a technical one, would have enough sense to couple that statement to some useful business metric, because in isolation it might be an announcement of public humiliation.

    • dmix a day ago

      The elitism of programmers who think the boilerplate code they write for 25% of the job, code that's already been written by 1000 other people before them, is in fact a valuable use of company time to write by hand again.

      IMO it's only really an issue if a competent human wasn't involved in the process - basically a person who could have written it if needed, who then does the work of connecting it to the useful stuff and has appropriate QA/testing in place... the latter often taking far more effort than the actual writing-the-code time itself, even when a human does it.

      • marcosdumay a day ago

        If 25% of your code is boilerplate, you have a serious architectural problem.

        That said, I've seen even higher ratios. But never in any place that survived for long.

        • hn_throwaway_99 a day ago

          Depends on how you define "boilerplate". E.g. Terraform configs account for a significant number of the total lines in one of my repos. It's not really "boilerplate" in that it's not the exact same everywhere, but it is boilerplate in the sense that setting up, say, a pretty standard Cloud SQL instance can take many, many lines of code just because there are so many config options.

          • marcosdumay 17 hours ago

            Terraform is verbose.

            It's only boilerplate if you write it again to set up almost the same thing again. Which, granted, if you are writing bare Terraform config, it probably is.

            But in either case, if your Terraform config is repetitive and makes up a large part of the code of an entire thing (not a repo - repos are arbitrary divisions - maybe "product", but that's also a bad name), then that thing is certainly close to useless.

        • TheNewsIsHere a day ago

          To add: it’s been my experience that it’s the company that thinks the boilerplate code is some special, secret, proprietary thing that no other business could possibly have produced.

          Not the developer who has written the same effective stanza 10 times before.

        • wvenable 12 hours ago

          25% of new code might be boilerplate. All my apps in my organization start out roughly the same way with all the same stuff. You could argue on day one that 100% of the code is boilerplate and by the end of the project it is only a small percentage.

        • 8note a day ago

          Is it though? It seems to me like a team ownership boundary question rather than an architecture question.

          Architecturally, it sounds like different components map somewhere close to 1:1 to teams, rather than teams hacking components to be more closely coupled to each other because they have the same ownership.

          I'd see too much boilerplate as an organizational/management issue rather than a code architecture issue.

        • cryptoz a day ago

          Android mobile development has gotten so …architectured that I would guess most apps have a much higher rate of “boilerplate” than you’d hope for.

          Everything is getting forced into a scalable, general-purpose mold, such that most apps have to add a ridiculous amount of boilerplate.

        • dmix a day ago

          You're probably thinking of just raw codebases, your company source code repo. Programmers do far, far more boilerplate stuff than raw code they commit with git. Debugging, data processing, system scripts, writing SQL queries, etc.

          Combine that with generic functions, framework boilerplate, OS/browser stuff, or explicit x-y-z code, and your 'boilerplate' (i.e. repetitive, easily reproducible code) easily gets to 25% of what your programmers write every month. If your job is >75% pure human-cognition problem solving, you're probably in a higher tier of jobs than the vast majority of programmers on the planet.

      • kev009 a day ago

        Doing the same thing but faster might just mean you are masturbating more furiously. Show me the money, especially from a CEO.

      • mistrial9 a day ago

        you probably underestimate the endless miles of verbose code that are possible, by human or machine but especially by machine.

    • dyauspitr a day ago

      Or a statement of pride that the intelligence they created is capable of lofty tasks.

  • joeevans1000 a day ago

    I read these threads and the usual 'I have to fix the AI code for longer than it would have taken to write it from scratch' and can't help but feel folks are truly trying to downplay what is going to eat the software industry alive.

    • steve_adams_86 8 hours ago

      I’m not convinced it’s there yet. I think it’s actively eating part of the software industry, but I wonder where that’ll stop—at least for some time—and a new shape of the industry is settled upon.

      There are still things I do in my IDE that I can’t seem to get AI to do. It’s not really close yet. I don’t doubt it could get there eventually, but I suppose I don’t believe it’s about to eat those parts of the industry.

      I do anticipate a massive issue from lower skill software jobs vanishing. I don’t know what entry into the industry will look like. There will be a strange gap that’s filled by AI and some people who use it to do basic things but have no idea how it does it. They will be somewhat like data entry workers, knowing how to use a spreadsheet or word processor but having no idea how the program actually works let alone the underlying operating system. I fully expect that to happen, and I can’t properly imagine what the implications will be.

  • tylerchilds a day ago

    if the golden rule is that code is a liability, what does this headline imply?

    • eddd-ddde a day ago

      The code would be getting written anyways; it's an invariant. The difference is less time wasted typing keys (albeit a small amount of time) and, more importantly (in my experience), it helps A LOT with discoverability.

      With g3's immense amount of context, LLMs can vastly help you discover how other people are using existing libraries.

      • tylerchilds a day ago

        my experience dabbling with the ai and code is that it is terrible at coming up with new stuff unless it already exists

        in regards to how others are using libraries, that's where the technology will excel: rewriting code. once it has a stable AST to work with, the mathematical equation it is solving is a refactor.

        until it has that AST that solves the business need, the game is just prompt spaghetti until it hits altitude to be able to refactor.

    • JimDabell a day ago

      Nothing at all. The headline talks about the proportion of code written by AI. Contrary to what a lot of comments here are assuming, it does not say that the volume of code written has increased.

      Google could be writing the same amount of code with fewer developers (they have had multiple layoffs lately), or their developers could be focusing more of their time and attention on the code they do write.

    • contravariant 12 hours ago

      Well, either they just didn't spend as much time writing the code or they increased their liability by about 33%.

      The truth is likely somewhere in between.

    • danielmarkbruce a day ago

      I'm sure google won't pay you money to take all their code off their hands.

      • AlexandrB a day ago

        But they would pay me money to audit it for security.

        • danielmarkbruce a day ago

          yup, you can get paid all kinds of money to fix/guard/check billion/trillion dollar assets..

  • an_d_rew a day ago

    Huh.

    That may explain why google search has, in the past couple of months, become so unusable for me that I switched (happily) to kagi.

    • twarge a day ago

      Which uses Google results?

  • croes a day ago

    Related?

    > New tool bypasses Google Chrome’s new cookie encryption system

    https://news.ycombinator.com/item?id=41988648

  • hipadev23 a day ago

    Google is now mass-producing techdebt at rates not seen since Martin Fowler’s first design pattern blogposts.

    • nelup20 11 hours ago

      We've now entered the age of exponential tech debt, it'll be a sight to behold

    • joeevans1000 a day ago

      Not really technical debt when you will be able to regenerate 20K lines of code in a minute then QA and deploy it automatically.

      • kibwen a day ago

        So a fresh, new ledger of technical debt every morning, impossible to ever pay off?

      • 1attice a day ago

        Assuming, of course:

        - You know which 20K lines need changing

        - You have perfect QA

        - Nothing ever goes wrong in deployment.

        I think there's a tendency in our industry to only take the hypotenuse of curves at the steepest point

        • TheNewsIsHere a day ago

          That is a fantastic way to put it. I’d argue that you’ve described a bubble, which fits perfectly with the topic and where _most_ of it will eventually end up.

  • Tier3r a day ago

    Google is getting enshittified. It's already visible in many small ways. I was just using Google Maps and in the route it labelled X (bus) Interchange as X International. I can only assume this happened because they are using AI to summarise routes now. Why in the world are they doing that? They have the exact location names available.

  • 1oooqooq a day ago

    This only means employees signed up to use the new toys and the company is paying for enough seats for all employees.

    It's like companies paying for all those todo-list and tutorial apps left running on AWS EC2 instances circa 2007.

    I'd be worried if I were a Google investor. lol.

    • fragmede a day ago

      I'm not sure I get your point. Google created Gemini and whatever internal LLM their employees are using for code generation. Who are they paying, and for what seats? Not Microsoft or OpenAI or Anthropic...

  • ultra_nick a day ago

    Why work at big businesses anymore? Let's just create more startups.

    • IAmGraydon a day ago

      Risk appetite.

      • game_the0ry 11 hours ago

        Not so sure nowadays. Given how often big tech lays off employees and the abundance of recently laid off tech talent, trying to start your own company sounds a lot more appealing than ever.

        I consider myself risk-averse and even I am contemplating starting a small business in the event I get laid off.

        • shiroiushi 5 hours ago

          > trying to start your own company sounds a lot more appealing than ever.

          It really isn't. Even if you get laid off from a large tech company, you probably didn't have to pay a cent to get the job there in the first place, and you started drawing a paycheck right away (after the initial delay due to the pay cycle). If you only work there for 6 months, you can save a really good amount of money if you have frugal habits.

          Starting a company isn't nearly as easy, usually requires up-front investment, and there can be a long time before you generate any profit. Either you need some business idea that's going to generate profit (or at least enough revenue to give the founder(s) a paycheck), or a business loan or other funding, which means convincing someone to invest in your company somehow.

          Starting your own company only sounds appealing if you ignore reality, or have the privilege of having plenty of cash saved up for such a venture.

        • wayoverthecloud 9 hours ago

          Interesting. I think the same thing, but I wonder - if the market only wants products created by the big guys, what can I offer? Have you thought along those lines?