This is a great example of why `pull_request_target` is fundamentally insecure, and why GitHub should (IMO) probably just remove it outright: conventional wisdom dictates that `pull_request_target` is "safe" as long as branch-controlled code is never executed in the context of the job, but these kinds of argument injections/local file inclusion vectors demonstrate that the vulnerability surface is significantly larger.
At the moment, the only legitimate uses of `pull_request_target` are for things like labeling and auto-commenting on third-party PRs. But there's no reason for these actions to have default write access to the repository; GitHub can and should be able to grant fine-grained or (even better) single-use tokens that enable those exact operations.
(This is why zizmor blanket-flags all use of `pull_request_target` and other dangerous triggers[1]).
[1]: https://docs.zizmor.sh/audits/#dangerous-triggers
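For concreteness, the labeling use case can at least be scoped down today with the `permissions:` key; a minimal sketch (workflow name arbitrary, and `actions/labeler` is just one example of such an action):

```yaml
# Sketch: a pull_request_target workflow whose token is limited to the two
# permissions labeling actually needs, instead of the default write token.
name: label
on: pull_request_target
permissions:
  contents: read        # read the labeler config from the base repo
  pull-requests: write  # apply labels / leave comments, nothing else
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v5  # note: never checks out or runs PR code
```

This limits the blast radius, but the trigger itself stays dangerous, which is why zizmor flags it regardless.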
I don't disagree... but there is a use case for orgs that don't allow forks. Some tools do their merging outside of github and thus allow for PRs that aren't clean from a merge perspective. Those PRs won't trigger `pull_request` workflows, because `pull_request` requires a clean merge. In those cases `pull_request_target` is literally the only option.
The best move would be for github to have a setting that allows automation to run on PRs that don't have clean merges, off by default and really intended only for linters. Until that happens, though, `pull_request_target` is the only game in town to get around that limitation. Much to my and other SecDevOps engineers' sadness.
NOTE: with these external tools you absolutely cannot do the merge manually in github unless you want to break the entire thing. It's a whole heap of not fun.
That's a fantastic use case that should be supported discretely!
Why github didn't do this is beyond me. Something not being merge-clean doesn't mean linters shouldn't run. I get not running deployments etc., but not even having the option is a pain.
Inside private repos we use `pull_request_target` because (1) it runs the workflow as it exists on main, and therefore provides a surface where untampered-with test suites can run, and (2) it provides a deterministic `job_workflow_ref` in the `sub` claim of the JWT, which can be used for highly fine-grained access control in OIDC-enabled systems from the workflow.
Private repos aren't as much of a concern, for obvious reasons.
However, it's worth noting that you don't (necessarily) need `pull_request_target` for the OIDC credential in a private repo: all first-party PRs will get it with the `pull_request` event. You can configure the subject for that credential with whatever components you want to make it deterministic.
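A minimal sketch of that, assuming the standard runner environment variables (the audience string is a placeholder):

```yaml
# Sketch: minting the OIDC credential from a plain `pull_request` workflow.
name: oidc-on-pr
on: pull_request
permissions:
  id-token: write  # required to request the OIDC token
  contents: read
jobs:
  mint:
    runs-on: ubuntu-latest
    steps:
      - run: |
          # The runner exposes a token-request endpoint through these env
          # vars; "my-audience" is a placeholder for your relying party.
          # (Don't echo real tokens into logs; we only check the response.)
          curl -sH "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=my-audience" \
            | jq 'has("value")'
```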
You’re right! I edited my comment to clarify I was talking about good ole job_workflow_ref.
This is what GitHub says about it:
> This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow.
Which is comical given how easily secrets were exfiltrated.
Yeah, I think that documentation is irresponsibly misleading: it implies (1) that attacker code execution requires the attacker to be able to run code directly (it doesn't, per this post), and (2) that checking out the base branch somehow stymies the attacker, when all it does is incentivize people to check out the attacker-controlled branch explicitly.
GitHub has written a series of blog posts[1] over the years about "pwn requests," which do a great job of explaining the problem. But the misleading documentation persists, and has led to a lot of user confusion where maintainers mistakenly believe that any use of `pull_request_target` is somehow more secure than `pull_request`, when the exact opposite is true.
[1]: https://securitylab.github.com/resources/github-actions-prev...
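For readers who haven't seen those posts, the core "pwn request" anti-pattern is a privileged trigger combined with an explicit checkout of the attacker's head; a sketch (step contents hypothetical):

```yaml
# DO NOT DO THIS: privileged context plus attacker-controlled code.
on: pull_request_target  # runs with secrets and a write-capable token
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # attacker's commit
      - run: make test  # the attacker's Makefile now runs with privileges
```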
This attack surface has been essentially unfixed for almost a year now.
Remember the Python packages that got pwned via a malicious branch name containing shellshock-like code? Yeah, that incident.
I blogged about all vulnerable variables at the time and how the attack works from a pentesting perspective [1].
[1] https://cookie.engineer/weblog/articles/malware-insights-git...
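The bug class behind that incident is expression injection: an attacker-controlled value (like a branch name) is textually expanded into a `run:` script before the shell parses it. A hedged sketch of the vulnerable form and the usual safer form:

```yaml
steps:
  # VULNERABLE: github.head_ref is attacker-chosen, and the ${{ }}
  # expression is substituted into the script as text, so a branch name
  # containing $(...) executes as shell code.
  - run: echo "building branch ${{ github.head_ref }}"

  # SAFER: pass untrusted values through env, so the shell sees them as
  # data in a variable rather than as code.
  - env:
      BRANCH: ${{ github.head_ref }}
    run: echo "building branch $BRANCH"
```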
Had the Nix team rolled out signed commits/reviews and independent signed reproducible builds as my (rejected) RFC proposed, then last-mile supply chain attacks like this would not be possible.
In the end NixPkgs wants to be wikipedia-easy for any rando to modify, and fears that any attempt at security will make volunteers run screaming, because it is primarily focused on being a hobby distro.
That's just fine, but people need to know this, and stop using and promoting Nix in security-critical applications.
An OS that will protect anything of value must have strict two-party hardware signing requirements on all changes, with a decentralized trust model that places trust in no single computer or person.
Shameless plug: that is why we built Stagex. https://stagex.tools https://codeberg.org/stagex/stagex/ (Don't worry, not selling anything; it is and will always be 100% free to the public)
Hey! First, a disclaimer: I do not speak for anyone officially, but I am a very regular contributor to nixpkgs and have been involved in trying to increase nixpkgs' security through adopting the Full-Source Bootstrap that Guix and Stagex use. I also assume that the RFC you're talking about is RFC 0100, "Sign Commits" (ref: https://github.com/NixOS/rfcs/pull/100).
As mentioned in the RFC discussion, the major blocker with this is the lack of an ability for contributors to sign from mobile devices. Building tooling for mobile devices is currently way out of scope for nixpkgs, and would be a large time sink for very little gain over what we have now. Further, while I sign my commits because I believe it slightly increases their provenance, there is nothing preventing me from pushing an unsigned commit, or a commit with an untrusted key, and that's, in my opinion, fine. For a project like Stagex (and as a casual cybersecurity enthusiast and researcher, I thoroughly appreciate the security work you all do), this layer of security is important, as it's clearly part of the project's security posture; nixpkgs takes a different view of trustworthiness. While I disagree with your conclusion that this sort of security measure would "make volunteers run screaming", I would be interested in seeing statistics on how much these mechanisms are already used in nixpkgs. Nixpkgs is also definitely not focused on being a hobby distro, considering it's in use at many major companies around the world (just look at NixCon 2025's sponsor list).
To be clear, this isn't to say that all security measures are worthless. Enabling more usage of security features is a good thing, and it's something I know folks are looking into (but I'm not going to speak for them), so this may change in the future. However, I do agree with the consensus that requiring commit signing in nixpkgs would be bad overall for the ecosystem, despite its advantages. Also, I didn't see anything in your PR about "independent signed reproducible builds", but for a project the size of nixpkgs, that would be a massive infrastructure undertaking for a third party. NixOS is very close to being fully reproducible (https://reproducible.nixos.org/), but we're not there yet.
In conclusion, while I agree that signing commits would be a good improvement, the downsides for nixpkgs are significant enough that I don't believe it would be a good move. It's definitely something to continue thinking about as nixpkgs and nix refine their security practices, though. I would also love some more information about how Stagex does two-party hardware signing, as that sounds interesting as well. Thank you so much!
Edit: Also, I want to be very clear: I am not saying you're entirely wrong, or trying to disparage the very interesting and productive work that Stagex is doing. However, there were some (what I felt were) misconceptions I wanted to clear up.
> the major blocker with this is the lack of an ability for contributors to sign from mobile devices
Do you mean a significant number of nixpkgs contributors make nixpkgs PRs from their phones... via the github web editor?
That seems weird to me at face value... editing code is hard enough on a phone, but this is also for a linux distro (definitely not a mobile os today), not a web app or something else you could even preview on your phone.
Edit: Per https://docs.github.com/en/authentication/managing-commit-si... the web editor can/does sign commits...
The reason I dislike this response is that this is the first thing in the article:
> in nixpkgs that would have allowed us to pwn pretty much the entire nix ecosystem and inject malicious code into nixpkg
OP provided a mechanism to stymie the attack. The counter from your position needs to be how the nix project otherwise solves this problem, not "this isn't the right approach" for hand-wavy reasons. Given the reasoning stated, OP has convinced me that Nix isn't actually serious about security, as this should be treated as an absolutely critical vulnerability with several layers of hardening wrapped around it to prevent such techniques.
That's pretty impressive -- thanks for sharing the link.
Just a word of encouragement here, this is super interesting!
Wow...this is possibly exactly what I've wanted to do for a while, but you already did it!
I find it rather embarrassing that, after all these years of trying to design computer systems, modern workflows are still designed so that bearer tokens, even short-lived ones, are issued to trusted programs. If the GitHub Actions framework gave out a privileged Unix socket or ssh-agent access instead, then this type of vulnerability would be quite a lot harder to exploit.
Exactly!
Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do just that. GitHub's API is HTTP-based, but mutual TLS authentication backed by a signing agent should be sufficient.
The SPIFFE standard does something like this.
It's not used by anyone because nobody actually gives a shit about security, the entire industry is basically a grift.
Lots of projects use SPIFFE, but lots of people don't like the new tech because they think the old ways are simpler.
CI/CD actions for pull/merge requests are a nightmare. When a developer writes test/verification steps, they are mostly in the mindset "this is my code running in the context of my github/gitlab account", which is true for commits made by themselves and their team members.
But then in a pull request, the CI/CD pipeline actually runs untrusted code.
Getting this distinction correct 100% of the time in your mental model is pretty hard.
For the base case, where you maybe run a test suite and a linter, it's not too bad. But then you run into edge cases where you have to integrate with your own infrastructure (for end-to-end tests, for checking that contributors have submitted CLAs, or anything else that requires a bit more privileges), and then it's very easy for it to bite you.
I don't think the problem is CI/CD runs on pull requests, per se: it's that GitHub has two extremely similar triggers (`pull_request` and `pull_request_target`). One of these is almost entirely safe (you have to go out of your way to misuse it), while the other is almost entirely unsafe (it's almost impossible to use safely).
To make things worse, GitHub has made certain operations on PRs (like auto-labeling and leaving automatic comments) completely impossible unless the extremely dangerous version (`pull_request_target`) is used. So this is a case of incentive-driven insecurity: people want to perform reasonable operations on third-party PRs, but the only mechanism GitHub Actions offers is a foot-cannon.
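The usual escape hatch, for what it's worth, is the two-stage pattern: run the untrusted work under plain `pull_request`, then let a separate privileged workflow react when it completes. A sketch (workflow names hypothetical):

```yaml
# Privileged half: never touches PR code; it only reacts to the
# unprivileged "ci" workflow finishing, then comments/labels via the API.
name: pr-comment
on:
  workflow_run:
    workflows: ["ci"]  # the plain pull_request workflow, named "ci" here
    types: [completed]
permissions:
  pull-requests: write
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - run: echo "post results via the API here; treat any artifacts from the ci run as untrusted input"
```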
> while the other is almost entirely unsafe (it's almost impossible to use safely).
I don't believe this is fair. "Don't run untrusted code" is what it comes down to. Don't trust test suites or scripts in the incoming branch, etc.
That pull_request_target workflows are (still) privileged by default is nuts, and indeed a footgun, but there's no need for "almost impossible" hysteria.
As time goes on, I find myself increasingly worried about supply chain attacks—not from a “this could cost me my job” or “NixOS, CI/CD, Node, etc. are introducing new attack vectors” perspective, but from a more philosophical one.
The more I rely on, the more problems I’ll inevitably have to deal with.
I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.
Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.
Emacs itself is probably secure and you can easily audit every extension, but if you update every extension blindly via a nicely composable emacs Nix configuration, you would indeed have a problem.
I guess one could automate finding obvious exploits via LLMs and, if the LLM finds something, abort the update.
The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.
> If you’ve read the man page for xargs, you’ll see this warning:
>> It is not possible for xargs to be used securely
However, the security issue this warning relates to is not the one that applies here. The one here can be avoided by passing `--` at the end of the command, so that everything xargs appends is treated as an operand rather than an option.
> It is not possible for xargs to be used securely
Eh... That is taken out of context quite a bit, that sentence does continue. Just do `cat "$HOME/changed_files" | xargs -r editorconfig-checker --` and this specific problem is fixed.
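A tiny self-contained demo of the failure mode and the fix, written as a workflow step (`stat` stands in for the real tool):

```yaml
- run: |
    mkdir demo && cd demo
    touch -- '--version'                # attacker-controlled filename
    printf '%s\n' * | xargs -r stat     # stat parses --version as an option
    printf '%s\n' * | xargs -r stat --  # after --, it's treated as a file
    printf './%s\n' * | xargs -r stat   # the ./ prefix trick also works
```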
Though that's like adding `<div>{escapeHtml(value)}</div>` everywhere you ever display a value in html to avoid xss.
If you have to opt in to safe usage at every turn, then it's an unsafe way of doing things.
I don't disagree but "it's not possible for xxx to be used securely" is a long way from "it's cumbersome and tedious to use xxx securely"
If using it securely requires you to never ever forget, even once, I'd agree with GP.
But "it's not possible for xxx to be used securely" is a better premise if it deflects people who can't do it correctly.
Lying to people because you think you're smarter than them is bad policy.
Yeah, I don't think the specific reason for that sentence in the manpage applies here. But the general sentiment is correct: not all programs support `--` as a delimiter between arguments and inputs, so many xargs invocations are one argument injection away from arbitrary code execution.
(This is traditionally a non-issue, since the whole point is to execute code. So this isn't xargs' fault so much as it's the undying problem of tools being reused across privilege contexts.)
Well, anything POSIX or GNU supports `--`. I think most golang flag libraries do as well? And if the program doesn't, you can always pass the files as relative paths (`./--help`) to work around that.
For sure though, this can get tricky, but I'm not really aware of an alternative. :/ Since the calling convention is just an array of strings, there is no generic way to handle this without knowing which program you are calling and how it parses its command line. This is not specific to xargs...
Well, I guess FFI would be a way, but it seems like a major PITA to have to figure out how to call a golang function from a bash shell just to "call" a program.
> This is not specific to xargs...
Right, it's just that xargs surfaces it easily. I suspect most people don't realize that they're fanning arbitrary arguments into programs when they use xargs to fan input files.
There's a huge footgun in that article that has broader impact:
> but it gets worse. since the workflow was checking out our PR code, we could replace the OWNERS file with a symbolic link to ANY file on the runner. like, say, the github actions credentials file
So git allows committing symlinks, which means the issue above could affect almost any workflow that checks out untrusted code.
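A sketch of what that looks like end to end (the target path is a stand-in; in the article it was the runner's credentials file):

```yaml
# Attacker side, committed in the PR branch with plain git:
#   ln -sfn /etc/passwd OWNERS   # stand-in for a sensitive runner path
#   git add OWNERS && git commit -m "update owners"
#
# Victim side, a privileged workflow step after checking out the PR:
- run: cat OWNERS  # follows the symlink; the target's contents leak out
```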
Yes, but IIRC when you run `pull_request_target` the credentials have write access to the target repository - i.e. the one you're merging into. When you run `pull_request` on a fork PR, the token is read-only, so there's much less for an attacker to gain.
Well the "good" new is, OpenBSD and NetBSD still uses CVS, even for packages. So this will not work on those systems. I do not know about FreeBSD. Security by obscurity :)
But I have been seeing docs indication those projects are looking to go to git, will see if it really happens. In OpenBSD's case seems it will be based upon got(1).
Just to make it clear: what you say is correct, but this is not a git vulnerability, it's a github actions vulnerability. That is, the BSDs are protected by CVS only because github doesn't do CVS. If you use git, and even github, but don't do CI/CD with github actions, you are not affected by this.
This is not a git issue, it is a github issue, and as far as I can see specific to github actions.
Don't they use email to accept contributions? Seems like a security nightmare w.r.t. impersonation.
How? It's signed with their keys. The Linux kernel also uses mailing lists, and I have yet to see someone try to impersonate a contributor.
I haven't seen anything about GPG being required. Also, the UX of it is not so great, so it's easy to just not have a signature without causing too much suspicion. That would be a much easier attack than what Jia Tan pulled off: just wait for some contributor to go on holiday and send a malicious v2 patch. So many patches are processed for the Linux kernel that no one would notice.
Aren't messages and/or patches signed?
I can't see any of that. They even tell you not to use GnuPG signatures: https://www.openbsd.org/mail.html