Article title: Q3 earnings call: CEO’s remarks
PDF of Earnings (29 points) https://news.ycombinator.com/item?id=41988811
Coverage from NYT (10 points) https://news.ycombinator.com/item?id=41989256
Coverage from Verge (7 points) https://news.ycombinator.com/item?id=41989674
Coverage from CNBC (2 points) https://news.ycombinator.com/item?id=41989727
I was looking for the quote; here is the full paragraph with it:
"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."
That makes it sound like they're slapping "AI" on preexisting codegen tools.
"… then reviewed and accepted by engineers."
I know about Google's code reviews (I had one CL accepted a long time ago), but I'm not sure they are universally as good as they used to be. Every day I see multiple bugs in Google's services, accessed using stock Chrome for Android on a Pixel phone. Maybe many folks over there don't care anymore?
One of the hardest things to do is review someone else's code. You can't know what they were thinking when they wrote it. It's also a challenge to find non-trivial bugs unless you spend almost as much time as the author getting a mental model of the code.
Reviewing an AI's code strikes me as a pretty easy way to end up with a bunch of Heisenbugs and a team of developers who don't fully understand their codebase. If the internal expectation is a "25%" increase in development velocity, a lot of engineers will just accept PRs with an LGTM. If they reviewed the code heavily, it might be perfectly fine, but their numbers won't reflect the expected improvements and their quarterly or yearly reviews will suffer.
You also can't go query that AI about why it wrote the code the way it did: not only is there inherent variability in responses to prompts, but an updated model might give entirely different answers, with no way back to the original "author".
Ah, so that's what happened.
AI + coding has gradually become a very mature field