140 comments

  • ses1984 2 hours ago

    I asked copilot how developers would react if AI agents put ads in their PRs.

    >Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.

    Sometimes AI can be right.

    • simonw 2 hours ago

      Which product called Copilot did you ask?

    • temp0826 an hour ago

      I'm reminded of the ads in the MOTD when logging into Ubuntu...nothing infuriated me more (I only used it for a short period).

      • Meneth an hour ago

        Me too, main reason I switched to Debian.

    • hk__2 2 hours ago

      It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.

      • phoe-krk 2 hours ago

        I agree. It's not an advertisement, it's simply a piece of information about your particular choice of technology.

        --------------

        Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io

        • cozzyd 31 minutes ago

          I'm curious about how a hacker news client on a smart TV would work...

          • phoe-krk 27 minutes ago

            You can try it now! Prices starting at €13.99 per month, billed yearly.

        • NetOpWibby an hour ago

          Domain available for $50 from Cloudflare

      • cozzyd 2 hours ago

        which is an ad...

        Sent from Firefox on AlmaLinux 9. https://getfirefox.com https://almalinux.org

      • layer8 40 minutes ago

        "Sent from my iPhone" actually is an ad when it’s the result of default settings.

        Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.

        • saidnooneever 31 minutes ago

          Companies pay for ad distribution. It's not like they give a free ad service. Maybe they don't choose how the campaigns are done (and don't give a shit).

          Brawndo - it's what your brain needs

      • alsetmusic 12 minutes ago

        > It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.

        The reason I immediately changed that text on my iPhone 1.0 to read, “Sent from my mobile device.”, is because it’s an ad. Still says that nearly 20y later. I’m not shilling for a corporation after giving them my money.

      • butterlesstoast 19 minutes ago

        Agreed. Barely notice it.

        -Sent from iPhone

        Wanting more from your sun tanning bed? Head over to Ultra Tan for a 10% off coupon right now!

      • MarsIronPI 39 minutes ago

        "Sent from my iPhone" is just as bad. If you don't see it then IDK what to tell you.

      • swimmingbrain 13 minutes ago

        The difference is "sent from my iPhone" is on YOUR outgoing email. You opted into that default. This is Copilot editing someone else's PR description with promotional text for third-party tools. That's not a signature, that's injection. Imagine if gcc started appending "compiled with gcc, try our new optimization flags" to your README every time you built a project.

      • flumes_whims_ an hour ago

        If it only mentioned "made with Copilot", that would be one thing, but it didn't just mention Copilot. It advertised a different third-party app.

      • godzillabrennus 43 minutes ago

        It's not an ad, it's a message from our sponsor.

        This message brought to you by TempleOS

      • fortran77 16 minutes ago

        And everyone thought they were cool! Mac zealots still put "Made with a Mac" on their webpages.

  • Aurornis 2 hours ago

    I actually love these ads and also the way Claude injects itself as a co-author.

    Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

    I think we should continue encouraging AI-generated PRs to label themselves, honestly.

    I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.

    • mikkupikku an hour ago

      It's not a self-own, it's honest disclosure. It's unethical (if not outright fraudulent) to publish LLM work as if it were your own. Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.

      • palmotea 8 minutes ago

        > It's unethical (if not outright fraudulent) to publish LLM work as if it were your own.

        I disagree on that. It's really a gray area.

        If it's some lazy vibecoded shit, I think what you say totally applies.

        If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.

        And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).

        > Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.

        I do agree that's a sensible default.

      • zeroonetwothree an hour ago

        I think it depends a lot if you reviewed it as carefully as you would your own code.

        Of course most people don’t do that

        • mikkupikku an hour ago

          I don't put human code reviewers down as coauthors let alone the sole authors of my commit. So honestly, the fact that a vibe coded commit lists me as the author at all is a little bit dodgy but I think I'm okay with it. The LLM needs to be coauthor at least though, if not outright the author.

          So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.

          • hombre_fatal 40 minutes ago

            The implementor only got credit in the days when the implementor was a human who had to do a lot of the work, often all of the work.

            Now that the cost of writing code is $0, the planner gets the credit.

            Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.

            It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.

            Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.

            "Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.

            • alsetmusic 4 minutes ago

              > And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything. Instead we should demand good software just like we did when it was all human-written and still crappy.

              It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.

        • singpolyma3 14 minutes ago

          Not just review but how you worked with the AI.

          If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.

        • raphinou 33 minutes ago

          In my project's readme I put this text:

             "There is no commit by an agent user, for two reasons:
          
              * If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
              * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
          
          
          It's not that I want to hide the use of LLMs; I just modify the code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though.

          Interested to read opinions on this approach.
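          For anyone curious, here is a minimal sketch of the check such a pre-push hook can run, assuming the agent commits under a dedicated git identity (the email below is made up; a real `.git/hooks/pre-push` would run this over the `remote_sha..local_sha` ranges it reads from stdin):

```shell
#!/bin/sh
# Hypothetical agent identity; use whatever your agent's git config sets.
AGENT_EMAIL="agent@container.local"

# True if any commit in the given range was authored by the agent.
range_has_agent_commits() {
  git log --format='%ae' "$1" | grep -qx "$AGENT_EMAIL"
}

# Demo in a throwaway repo: one human commit, then one agent commit.
cd "$(mktemp -d)" && git init -q
git -c user.name=Me -c user.email=me@example.com \
  commit -q --allow-empty -m "human work"
git -c user.name=Agent -c user.email="$AGENT_EMAIL" \
  commit -q --allow-empty -m "agent work"

if range_has_agent_commits HEAD; then
  echo "push rejected: agent-authored commits present"
fi
```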

          • embedding-shape 29 minutes ago

            > * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."

            Seems... Not that useful?

            Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?

            I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like a way bigger problem than "code I didn't review was pushed".

            • raphinou 15 minutes ago

              The agents run in a container and have another git identity configured. They sometimes commit code, and I don't want to push it accidentally from outside the container, which is where I work.

    • QuantumNomad_ 2 hours ago

      > […] and also the way Claude injects itself as a co-author.

      > Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

      I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.

      For changes that I made myself, I commit with myself as author.

      Why would I commit something written by AI with myself as author?

      > I think we should continue encouraging AI-generated PRs to label themselves, honestly.

      Exactly.
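      A concrete sketch of that split (the model name and email here are made-up placeholders): git records author and committer separately, so a model can be listed as author while you remain the committer.

```shell
# Throwaway repo; git takes the committer identity from local config.
cd "$(mktemp -d)" && git init -q
git config user.name "Me" && git config user.email "me@example.com"

echo 'retries = 3' > app.cfg && git add app.cfg
git commit -q --author="ChatGPT (gpt-4o) <chatgpt@example.invalid>" \
  -m "Add retry setting"

# The author/committer split shows up in the log:
git log -1 --format='author: %an%ncommitter: %cn'
# author: ChatGPT (gpt-4o)
# committer: Me
```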

      • orwin 4 minutes ago

        I'm not against putting AI as coauthor, but removing the human who allowed the commit to be pushed/deployed from the commit would be a security issue at my job. The only reason we're allowed to deploy code with a generic account is that we tag the repo/commit hash, and we wrote a small piece of code that retrieves the author UID from git, so that the log says 'user XXXNNN opened the flux xxx' (or something else depending on what our code does)
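        A sketch of that last piece, assuming the deploy job only knows the tag (the tag name, user ID, and log message here are illustrative):

```shell
# Throwaway repo standing in for the tagged release.
cd "$(mktemp -d)" && git init -q
git config user.name "XXXNNN" && git config user.email "xxxnnn@example.com"
git commit -q --allow-empty -m "deployable change"
git tag -a v1.0 -m "release"

# At deploy time, recover the human author behind the tagged commit so
# the audit log names a person instead of the generic deploy account.
author=$(git log -1 --format='%an' v1.0)
echo "user $author opened the flux demo-service"
```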

      • yarn_ an hour ago

        "Why would I commit something written by AI with myself as author?"

        Because you're the one who decided to take responsibility for it, and actually chose to PR it in its ultimate form.

        What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?

        The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project, why would it? Why would it affect my decision making whatsoever?

        It's your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.

        • Krssst an hour ago

          As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.

          Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).

          As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be consistent within a project, otherwise the "traceability/statistics" part cannot be applied reliably).

          • corndoge an hour ago

            Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.

            • josephg 43 minutes ago

              I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.

          • yarn_ an hour ago

            Future analysis is a valid reason to keep it, that's a good point and I agree with that.

        • waisbrot an hour ago

          Claude adds a "Co-authored-by" trailer for itself when committing, so you can see the human author and also the bot.

          I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.

          • yarn_ an hour ago

            > I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.

            Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?

            • 59nadir an hour ago

              Personally, it would make the choice to say no to the entire thing a whole lot easier if they self-reported automatically, with no way to hide the fact that they've used LLMs. I want to see it for dependencies (I already avoid them, and would especially do so with ones heavily developed via LLMs), products I'd like to use, PRs submitted to my projects, and so on, so I can choose to avoid them.

              Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.

              • aydyn 4 minutes ago

                If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of software in the coming years.

                Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.

              • rapind 36 minutes ago

                This is shouting at the clouds I'm afraid (I don't mean this in a dismissive way). I understand the reasoning, but it's frankly none of your business how I write my code or my commits, unless I choose to share that with you. You also have a right to deny my PRs in your own project of course, and you don't even have to tell me why! I think on github at least you can even ban me from submitting PRs.

                While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.

              • yarn_ 32 minutes ago

                I mean sure, in the same sense that law enforcement would be a lot easier if all the criminals just came to the police station and gave themselves up.

                Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.

                Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.

            • ctxc an hour ago

              Accountability. Same reason I want to read human-written content rather than obvious AI: both can be equally shit, but at least with humans there's a high probability of the aspirational quality of wanting to be considered "good".

              With AI I have no way of telling if it was from a one-line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.

              • yarn_ 37 minutes ago

                The human who submitted the PR is 100% accountable either way, that's partly my point.

                Disclosing AI has its purposes, I agree, but it's not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.

            • jacobgkau an hour ago

              LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."

              Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).

              Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.

              • yarn_ 40 minutes ago

                Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).

                >Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.

                I mean listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I just cut it off, that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality, I'm not sending Anthropic some long detailed list of my feedback.

                This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.

      • smrtinsert an hour ago

        If you review the code then committing as yourself makes perfect sense to me

        • homebrewer an hour ago

          Linux has used "Reviewed-by" trailers for many years. If you've only done minor editing, or none at all, it's something to consider.
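          Trailers are just `Key: value` lines in the last block of the commit message; `git interpret-trailers` can append them mechanically (the names below are illustrative), and recent git can also add them at commit time with `git commit --trailer`:

```shell
# interpret-trailers only needs a message on stdin; run it from any repo.
cd "$(mktemp -d)" && git init -q
printf 'Fix null deref in parser\n\n' |
  git interpret-trailers \
    --trailer 'Co-authored-by: Claude <noreply@anthropic.com>' \
    --trailer 'Reviewed-by: Jane Dev <jane@example.com>'
```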

        • nemomarx an hour ago

          If you review a junior's code, do you commit it under your name?

          • corndoge an hour ago

            A junior is a person. A tool is a tool. Do you credit your text editor with authorship?

            • scottyah an hour ago

              If it contributed significantly to the design and execution, and was a major contributing factor yes. Would you say a reserve parachute saved your life or would you say you saved your own life? What about the maker of the parachute?

              I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand as much as its worth.

              Also, text editors would be a better analogy if the commit message referenced whether it was created in the web UI, TUI, or desktop app.

            • jacobgkau an hour ago

              False equivalence. A text editor does not type characters that you didn't explicitly type or select.

          • data-ottawa an hour ago

            That’s reviewing code vs contributing code.

      • Imustaskforhelp 2 hours ago

        > Why would I commit something written by AI as myself?

        I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I sometimes just use the Gemini model; aistudio.google.com is a good one too.

        I then sometimes manually paste it and just hit enter.

        These are prototypes though, although I build in public. Mostly done for experimental purposes.

        I am not sure how many people might be doing the same though.

        But some of my previous projects have stated "made by gemini" etc.

        Maybe I should write a commit message/description stating AI has written this, but I really like having the message be something relevant to the change itself. And there's also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.

    • trevor-e 5 minutes ago

      These are odd takes to me.

      > was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

      As others mentioned, this is very intentional for me now that I use agents. It has nothing to do with laziness; I'm not sure why you would think that? I assume vibe-coded PRs are easy enough to spot by the contents alone.

      > I would like to know when someone is trying to have the tool do all of their work for them.

      What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?

    • lokimedes 2 hours ago

      I just submitted my first Claude-authored application to GitHub and noticed this. I actually like it: although anthropomorphizing my coding tools seems a bit weird, it also provides a transparent way for others to weigh the quality of the code. It didn't even strike me as relevant to hide it, so I'd not exactly call it lazy; rather, ask why bother pretending in the first place?

      • waisbrot an hour ago

        Looking back, it would have been neat to have more metadata in my old Git commits. Were there any differences when I was writing with IntelliJ vs VSCode?

        • scottyah an hour ago

          Probably your linter, language, or intelligence/whatever tab-complete you used. Claude records which model it used to write the code, not whether it was the web UI, TUI app, or desktop app.

    • 8cvor6j844qw_d6 2 hours ago

      It's part of the attribution settings in `.claude/settings.json`, if you're referring to Claude Code.

      Personally, I adjusted the defaults since I don't like emojis in my PRs.

      [1]: https://code.claude.com/docs/en/settings#attribution-setting...
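      For reference, the older form of this setting was a single boolean in `.claude/settings.json`; the exact keys have changed across versions, so treat this as a sketch and check the linked docs for the current schema:

```json
{
  "includeCoAuthoredBy": false
}
```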

      • silverwind an hour ago

        I have instructions for these because the attribution settings don't accept placeholder tokens like `<model>`, `<version>` etc.

    • junon 37 minutes ago

      Agreed! Easy close/ban for me.

    • neya 2 hours ago

      > I would like to know when someone is trying to have the tool do all of their work for them.

      Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.

      If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line. I will definitely be going "WTF was it doing here? Why was this even approved?", and that's a situation I never want to find myself in.

      Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?

      • scottyah an hour ago

        There will always be room for craftsmen stamping their work, like the expensive Japanese bonsai scissors. Most of the world just uses whatever mass-produced scissors were created by a system of rotating people, with no clear owner/maker. There's plenty of middle ground for systems that put their mark on their product.

        • neya 40 minutes ago

          Fair enough.

  • kstenerud 2 hours ago

    The ads are annoying, and I'm glad Microsoft will stop doing it.

    One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).

    Even when I edit the commit message, I still leave in the Claude co-author note.

    AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.

    • yarn_ an hour ago

      I don't quite see the benefit of this, personally.

      Whoever is submitting the code is still responsible for it, so why would the reviewer care whether you wrote it with your fingers or an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.

      Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).

      The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.

      • kstenerud 39 minutes ago

        You're quite right that the quality of the code is all that matters in a PR. My point is more historical.

        AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.

        I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.

        • philote 16 minutes ago

          Just curious, what metrics would you use to track how good your results are?

          • kstenerud 6 minutes ago

            The tools are still in their infancy, but it would likely be a series of metrics such as complexity, repetition, test coverage issues (such as tests that cover nothing meaningful), architectural issues that remain unfixed far beyond the point where it would have been more beneficial to refactor, superfluous instructions and comments, etc.

        • yarn_ 30 minutes ago

          Yep other people pointed this out as well, this makes sense to me.

      • sheept an hour ago

        As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.

        Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.

        But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.

        Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.

        • yarn_ an hour ago

          I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true about a piece of purely human-written code, then I would reject it just the same, so what's the difference? Likewise, if Claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?

          As you allude to (and I agree), any non-trivial quantity of code, if solely written by Claude, will probably be low quality, but this is apparent whether I know it's AI beforehand or not.

          I am admittedly coming at this as much more of an AI hater than many, but I still don't really get why I'd care about how much or how little you used AI as a standalone metric.

          The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well I just used Claude here, don't worry about that part".

          (But also yes, of course I'm not going to talk to Claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)

      • layer8 33 minutes ago

        It’s not about who wrote it, but about who is submitting it. The LLM co-author indicates that the agent submitted it, which is a contraindication of there being a human taking responsibility for it.

        That being said, it also matters who wrote it, because it’s more likely for LLMs to write code that looks like quality code but is wrong than it is for humans.

        • yarn_ 31 minutes ago

          Well, if an agent is submitting it I'm just going to reject it, that's no problem. "Just send me the prompt".

      • Forgeties79 an hour ago

        > Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?

        Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.

        • yarn_ an hour ago

          > Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code)."

          That was my point here, it is a false signal in both directions.

          • Forgeties79 an hour ago

            According to you it’s all false. I don’t agree, and it certainly shouldn’t just be taken as a given.

            For instance, I would want any AI-generated video showing real people to have a disclaimer, the same way TV ads disclose whether the people giving testimonials are actors. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.

            • yarn_ 25 minutes ago

              I don't see what the "deceptive practices" would be, though: you can just look at the code being submitted. There isn't really the same ground truth involved as with "did the thing in this video actually happen?" or "do these commercial people actually think this?"

              If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, usually you have to go out of your way to identify it as such).

              I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.

    • fortran77 12 minutes ago

      Yes. I don't mind AI submissions to my hobby projects as long as there's a person behind it. Only fully automated slop I mind. Before AI I used to get all sorts of PRs from people changing a comment or a line of documentation just so they can get more green squares on their GitHub summary. Plus ça change....

      A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.

    • jackp96 2 hours ago

      So, philosophically speaking, I agree with this approach. But I did read that there was some speculation regarding the future legal implications of signalling that an AI wrote/cowrote a commit. I know Anthropic's been pretty clear that we own the generated code, but if a copyright lawsuit goes sideways (since these were all built with pirated data and licensed code) — does that open you or your company up to litigation risk in the future?

      And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.

      • mikkupikku an hour ago

        Let your employer's lawyers worry about that. If they say not to use LLMs, then you should abide by that or find a new job. But if they don't care, then why should you?

        As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.

      • nemomarx an hour ago

        If you're concerned about copyright risk, don't you want that kind of tagging so you could prove it wasn't used on particular code?

        • PunchyHamster an hour ago

          not tagging something doesn't prove AI wasn't used

      • dpoloncsak 2 hours ago

        I'm pretty sure IF a copyright lawsuit went sideways, you would still be open to litigation risk; you'd just be hiding the evidence.

        What you're doing would fundamentally be similar to copyright theft: using 'someone' else's code without attributing them (it?) to avoid repercussions.

        Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.

  • simonw 2 hours ago

    In case people missed it in the other thread, GitHub have now disabled this: https://twitter.com/martinwoodward/status/203861213108446452...

    > We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.

    • pinkmuffinere 2 hours ago

      I’m grateful they disabled it, but their response still feels a bit tone deaf to me.

      > Disabled product tips entirely thanks to the feedback.

      This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”

    • da_grift_shift 2 hours ago

      Accepting the megacorp euphemisms without critique ("product tips") is how enshittification festers.

      • simonw an hour ago

        I've not seen any evidence that these were ads and not "tips".

        "Ads" implies someone was paying for them. Promoting internal product features is not the same thing; if it were, then every piece of software that shows a tip would be an ad product, and would be regulated as such.

        • wat10000 24 minutes ago

          I could buy it if this was just being shown to the person who was using Copilot. Hey, here's a feature you might like. Seems OK. But it was put into the PR description. That gets seen by potentially many people, who are not necessarily using Copilot.

        • iso1631 26 minutes ago

          When Apple puts an advert for an Apple show in front of For All Mankind, that's an advert.

          Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere, because if I watch on Jellyfin I don't have the advert. Of course, that then harms the show as my viewing isn't counted, but they've cancelled it anyway, so perhaps it doesn't really matter.

          If it isn't an advert, then at very least there's a button to disable it.

  • john_strinlai 2 hours ago

    related: https://news.ycombinator.com/item?id=47570269

    response from timrogers (product manager at github):

    "Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.

    We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again."

    https://news.ycombinator.com/item?id=47573233

    • rvz 2 hours ago

      > "We won't do something like this again."

      They (Microsoft / GitHub) will do it again. Do not be fooled.

      Never ever trust them because their words are completely empty and they will never change.

      • Hussell 2 hours ago

        "We" here likely refers to Tim and his current coworkers who were present to see this, not every current and future employee of Microsoft / Github. Try not to think of any organization or institution as a person, but as lots of individual people, constantly joining and leaving the group.

        • embedding-shape 24 minutes ago

          Yeah, which is exactly why "We won't do something like this again" has about as much value as Kubernetes would have for HN.

          Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.

          Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all, and tries its best to leave shareholders better off, regardless of the consequences.

  • Wojtkie 12 minutes ago

    Microslop strikes again! AI implementations have really distilled all the shitty business practices tech companies have been doing into highly visible missteps.

    It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.

  • fraywing an hour ago

    As the "agent web" progresses, how will advertisers actually get access to human eyeballs?

    Will our agents just be proxies for garbage like injected marketing prompts?

    I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.

  • VadimPR an hour ago

    This is one reason why local coding models are quite relevant, and will continue to be for the foreseeable future. No ads, and you are in control.

    • fph an hour ago

      In principle, one could train the AI to insert ads in its answers. So no, if you only do inference locally with an open-weight model you are still not in control.

      • kgeist 8 minutes ago

        I think ads can be removed with abliteration, just like refusals in "uncensored" versions. Find the "ad vector" across activations and cancel it.
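        For context, the projection step behind that idea can be sketched in a few lines. This is a toy illustration only, not the real abliteration procedure (which typically edits the model's weight matrices so the direction can never be written at all), and the "ad vector" here is entirely hypothetical, estimated as a difference of mean activations:

```python
import numpy as np

def ablate(h: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of activation h along direction v."""
    v = v / np.linalg.norm(v)       # normalize to a unit direction
    return h - np.dot(h, v) * v     # orthogonal projection: h minus its component along v

rng = np.random.default_rng(0)

# Hypothetical setup: activations collected from prompts that produced ads
# vs. prompts that didn't. The behavior direction is the difference of means.
ads_acts = rng.normal(size=(16, 64)) + 0.5
clean_acts = rng.normal(size=(16, 64))
v = ads_acts.mean(axis=0) - clean_acts.mean(axis=0)

h = rng.normal(size=64)
h_ablated = ablate(h, v)

# The ablated activation has (numerically) zero component along the direction.
print(abs(np.dot(h_ablated, v / np.linalg.norm(v))))  # prints a value near 0.0
```

        The same projection is what "refusal abliteration" does for the refusal direction; whether an "ad direction" would be as cleanly linear is an open question.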

  • siruwastaken 41 minutes ago

    I really wish this was an April fools story. It's good to see that at least it has been disabled again, although I can't imagine that it will be long before this comes back again. Also, (I can't find it now, but) I thought there was an article here on HN recently that clarified that inference cost can probably be covered by the subscription prices, just not training costs?

  • palmotea 13 minutes ago

    Hooray! This is the future we've all hoped for!

  • sanex an hour ago

    Cursor does something similar, at least. I hate it and therefore write my own commit messages.

    • delduca 5 minutes ago

      Claude Code does the same.

  • thomasgeelens 27 minutes ago

    Damn Microsoft out here really finding new ways to serve ads.

  • vicchenai an hour ago

    the SourceForge parallel is what gets me. they did the exact same thing with installers and it killed them. people moved to GitHub specifically to get away from that.

    1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.

  • ajkjk an hour ago

    This only gets better when there's a financial penalty for doing it. Ads do almost nothing but it costs them even less.

  • nickdothutton an hour ago

    Title is wrong, should be "New form of cancer discovered".

  • sandeepkd an hour ago

    It took me some time to understand how big the advertisement market is; things flowing in that direction seem natural when it comes to making money out of the investment.

  • gadders an hour ago

    The irony when NeoWin covers its whole page with "promoted content" when you try to back out of the page.

  • m132 an hour ago

    I remember open-source projects announcing their intent to leave GitHub in 2018, as it was being acquired by Microsoft. I was thinking to myself back then: "It's really just a free Git hosting service, and Git was designed to be decentralized at its very core. They don't own anything, only provide the storage and bandwidth. How are they even going to enshittify this?".

    8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.

    • surgical_fire an hour ago

      This is nothing.

      I would bet that soon it will inject ads within the code as comments.

      Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.

      The possibilities are limitless.

      • m132 an hour ago

        If I recall correctly, what sparked the mass migration to GitHub was the controversy around SourceForge injecting ads into installers of projects hosted there. Now that we have tools that can stealthily inject native-looking ads into programs at the source code level...

        • data-ottawa 43 minutes ago

          Same as it ever was. Same as it ever was.

  • fortran77 17 minutes ago

    Well, Copilot is a GitHub technology, and they're telling you that AI wrote the PR. It's not _that_ bad. I suppose they could distill it to "Written with Copilot" with a link for more information.

  • dboreham 22 minutes ago

    At some point he who pays the piper was going to call the tune...

  • righthand 2 hours ago

    The future is here! Glorious ads that will make you so efficient! Save time coding by consuming ads, you were never going to attain expert level professional skills anyways.

  • dboreham 2 hours ago

    Ironically, TFA is festooned with ads.

    • sunaookami 2 hours ago

      Over 1.5 trillion news articles have ads injected into them by the company's commerce team!

    • da_grift_shift an hour ago

      Sure, but the source blogpost isn't.

  • j45 an hour ago

    It's the hotmail signature all over again?

  • liendolucas 32 minutes ago

    Not surprised at all, just another enshittified product by Microsoft. Carry on.

  • kingjimmy an hour ago

    microslop at it again

  • lpcvoid an hour ago

    Once again, Microslop doing Microslop things

    • toastal 28 minutes ago

      Yet folks are refusing to migrate off their products/services—as if it hasn’t been like this for 3 decades already.

  • saberience 2 hours ago

    It's the same with Claude Code actually, and recently Codex too...

    Claude never used to do this, but at some point it started adding itself by default as a co-author on every commit.

    Literally, in the last week, Codex started naming all its branches "codex-feature-name", and will continue to do so even if you tell it to never do that again.

    Really, really annoying.

    • ray_v 2 hours ago

      Adding the agent (and maybe more importantly, the model that reviewed it) actually seems like a very useful signal to me. In fact, it really should become "best practice" for this type of workflow. Transparency is important, and some PMs may want to scrutinize those types of submissions more, or put them into a different pipeline, etc.

    • coder543 2 hours ago

      That Codex one comes from the new `github` plugin, which includes a `github:yeet` skill. There are several ways to disable it: you can disconnect github from codex entirely, or uninstall the plugin, or add this to your config.toml:

          [[skills.config]]
          name = "github:yeet"
          enabled = false
      
      I agree that skill is too opinionated as written, with effects beyond just creating branches.
      • saberience 2 hours ago

        What's weird is, I never installed any GitHub plugins, or indeed any customization to Codex, other than updating via brew... so I was so confused when this started happening.

    • bonesss 2 hours ago

      When I started my career there was this little company called SCO, and according to them, finding a comment somewhere in someone's supplier's code that matched "x < y" was serious enough to trip up the entire industry.

      Now, with the power of math letting us recall business plans and code bases with no mention of copyright or of where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus', only without the shame…), we are letting MS and other corps enter into coding automation and oopsie the name of their copyright-obfuscation machine?

      Maybe it's all crazy and we flubbed copyright fully, but having third-party authorship stamps cryptographically verified in my repo sounds risky. The SCO thing was a dead company's last gasp, and dying animals do desperate things.

    • bundie 2 hours ago

      I believe it's easy to disable the Claude Code one.

  • ChrisArchitect 2 hours ago
    • tyleo 2 hours ago

      It’s a dupe but I hope the discussion continues in this more general thread. That other thread was earlier but more of an individual POV that doesn’t make it obvious there was ecosystem impact.

      • ChrisArchitect 35 minutes ago

        The article barely expands on the source content. Either way it's the same discussion. And there's lots of it. Over there.

        • iso1631 22 minutes ago

          The original looked like a one-off

          The article shows thousands of adverts, millions if you look more widely. It massively changes the scale.