This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review and usually after it had plenty of soak time. Pretty much everything built from source code (compilers, kernel etc.). Builds [build servers/infra] can't reach the Internet at all and there's process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on whether they applied to us and how to mitigate or address them.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. e.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1's approach is, for me, the better example of dependency management. In general company #1 had well established security practices and we had really secure products.
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.”
This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
New code will also use these tools from the get-go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
> we're entering a more hardened era of software
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OSS library; there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OSS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
If I hand-roll my logging library, I'm unlikely to include automatic LDAP requests based on message text (the infamous Log4j vulnerability).
I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad idea that probably leaked from training sets.
Typically when hand-rolling code you implement only what you require for your use-case, while a library will be more general purpose. As a consequence of doing more, it will have more code and more bugs.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
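Concretely, the "implement only what you need" version can be tiny. A sketch (TypeScript; the setting names are made up) of a parser that understands exactly three settings and nothing else - no entities, no DTDs, no nesting - so the XXE class of bugs has no code to live in:

    const KNOWN_KEYS = new Set(["retries", "timeoutMs", "logLevel"]); // hypothetical settings

    function parseTinyConfig(text: string): Map<string, string> {
      const settings = new Map<string, string>();
      // Accepts only <setting name="...">value</setting>; "&" and "<" are
      // rejected inside values, so entity expansion is impossible by construction.
      const re = /<setting name="([A-Za-z]+)">([^<&]*)<\/setting>/g;
      for (const [, key, value] of text.matchAll(re)) {
        if (!KNOWN_KEYS.has(key)) throw new Error(`unknown setting: ${key}`);
        settings.set(key, value);
      }
      return settings;
    }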
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
> there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did
Have you read this old code? It's terrible, and written with no care at all for security, often in C. AI is much, much better at writing code.
Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel
There have been two LPE vulnerabilities and exploits in the Linux kernel announced today. After the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
One. "Copy Fail 2" and "Dirty Frag" are the same thing.
To be fair, to some extent that's up to us. Time to get cleaning, I guess.
Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? All these recent mega bugs are surfacing with lots of fuzzing and agentic bashing, right?
Thank you for reminding us all that you AI bros are still the most obnoxious people there are.
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
Right now it kinda feels to me like "Open Source" is the Russian army, leaning on its sheer numbers and its huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
That is already how it works. The loner hacker in mom's basement working for free on his super critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
This is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].
>As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think by now that any hope that they would voluntarily be any less exploitative than they can would have been dashed.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
What we are seeing so far come out of the AI agent era is reduced, not increased, code quality. The few advances are far outweighed by all the slop that's thrown around, and that's unlikely to change.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
Indeed - one year ago we floated the idea that it is better to write your own code if you can than to pull in third parties. But it was heresy at the time to consider LLMs filling the gaps.
Today I’m limiting the exposure to dependencies more than ever, and particularly for things that take a few hundred lines to implement. It’s a paradigm shift, no less.
This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.
There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.
I am feeling really uncomfortable sitting on a large React project.
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
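A rough sketch of the library-level idea (TypeScript; every name here is made up): the untrusted module never imports fs itself, it's handed an object that can only reach one subtree. The path checks stand in for what a real system would enforce in the kernel (e.g. openat2() with RESOLVE_BENEATH, since string checks alone can be raced via symlinks):

    import * as fs from "node:fs";
    import * as path from "node:path";

    // Hypothetical capability: a handle scoped to one directory subtree.
    class DirCap {
      constructor(private readonly root: string) {}

      readFile(relative: string): string {
        const full = path.resolve(this.root, relative);
        // Refuse to escape the subtree this capability grants.
        if (!full.startsWith(this.root + path.sep)) {
          throw new Error("capability does not cover this path");
        }
        return fs.readFileSync(full, "utf8");
      }

      // Attenuation: derive a narrower child capability (cap A -> cap B).
      subdir(relative: string): DirCap {
        return new DirCap(path.resolve(this.root, relative));
      }
    }

    // The third-party library only ever receives the capability object,
    // so its blast radius is the template directory, not my home directory.
    function renderTemplates(templates: DirCap): string {
      return templates.readFile("main.tpl");
    }

    renderTemplates(new DirCap("/srv/app/templates"));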
SeL4 has fast, efficient OS level capabilities. It's had them for years. They work great. They're fast - faster than linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run linux as a process in sel4. I want an OS that has all the features of my linux desktop, but works like SeL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in the kernel, and the userspace is only calling the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer with seL4, and (2) that would mean way more context switches, so a performance drop)
> Note that capabilities would not help for those bugs we are discussing today.
Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.
Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.
A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.
> (2) that would be may more context switches, so a performance drop
I've heard this before. Is it actually true though? The SeL4 devs claim the context switching performance in sel4 is way better than it is in linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in sel4 doesn't involve any complex scheduler lookups. Your process just hands your scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).
But SeL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.
I'd love to see some real benchmarks doing heavy IO or something in linux and sel4. I'm not really sure how it would shake out.
Yes. But it's nowhere near as powerful as capabilities.
- Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.
- Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.
- You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.
Realistically, most folks don't get paid to mitigate long term risks by deviation from the common (and more efficient) practice.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
I am feasting on Schadenfreude as the SWE industry grapples with the messes it made and an uncertain employability in the near future; AI is not 30 years away like it was when I started.
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, the startups aura farming on Twitter/social media about their cool sweatshop, labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in their faces... chef's kiss
Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people. Yet, somehow the world goes on, these companies siphon up money, and all harms they cause are externalized.
> Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people.
What else do you expect, given the economic incentives on one side, and the immaturity of the discipline on the other? Writing robust software requires time, money and competence, in a purely empirical approach, since we have no fundamental theory of software. The pressure is for quantity and features in minimum time. The approaches are incompatible, and economics win every time.
Well yeah; data breaches have been a thing forever. Physical reality never opened a black hole in San Fran because someone committed a key to Github or a box of tapes destined for Iron Mountain vanished. A lot of the concerns are themselves social paranoias, not real concerns.
Which is where the unserious emerges but in a subtle way; taking such unserious things so seriously is not serious behavior. It's anxious and paranoid, aloof and clueless behavior.
Secure in tech skills but unserious otherwise.
Lacking a broad set of skills will make office workers who couldn't grow a potato inherently paranoid about their jobs.
IT is (was?) one of the very few ways for us in third-world countries to pull ourselves out of poverty by our own bootstraps, since social mobility is quite limited if you lack the right connections. I'm pleased with you being so happy about it being taken away to make more money for billionaires.
My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
A stdlib doesn't have to provide everything under the sun in order to be helpful here.
Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.
That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.
I don't have an answer as to what the alternative is going to look like. But smarter people than me may find something. C/C++ are doing fine without package managers. Go at least has a more capable standard library than Rust. But I'm not sure if Go's import-from-github approach is the answer.
One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.
They're not either, every one of these projects contains a gigantic vendor/ folder full of unmaintained libraries, modified so much that keeping up with the latest changes is impossible so they're stuck with whatever version they copied back in 2009.
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually, according to Wikipedia, a FreeBSD engineering lead); it would be weird for him to suggest OpenBSD.
Also hilarious to see Drew Houston responding a bit later on the same thread:
> we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.
> let me know if it's something you're interested in, or if you want to chat about it sometime.
FreeBSD didn’t have userland ASLR until 2019 and, amongst other mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security, take Shawn Webb’s HardenedBSD.
>Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
They exploited a linear stack buffer overflow. Not a write-what-where or arb write. A linear stack buffer overflow in 2026! There are at least two distinct failures there:
There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
> Debian can't start digesting them until they're already public
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
The key words there are "when they're actually coordinated". Debian doesn't own the Linux kernel, and the kernel developers don't bother with coordinated disclosure, so the happy path of coordinated disclosure only happens when reporters make the non-obvious choice of reporting vulnerabilities to people other than the maintainers.
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
I think you misunderstood the article. The proposal isn't wait a week after Software has been published before installing it. It's in the next seven days starting now, just don't, because you probably don't have patches for these vulnerabilities and even if you do there's probably more scary vulnerabilities about to be discovered.
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
Every dependency compromise that I can remember "in the past few months" was discovered in hours, if not minutes (litllm, axios, bitwarden CLI, Checkmarx docker images, Pytorch lightning, intercom/intercom-php). What's more, the discovery of these compromises did not at all rely on whether the compromises were actively used.
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delays then you could be hit by exploits nobody has seen yet.
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
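The check itself is almost trivial, since the npm registry publishes a timestamp for every version it serves. A sketch (TypeScript; the 7-day threshold is arbitrary):

    const COOLDOWN_MS = 7 * 24 * 60 * 60 * 1000;

    // Is this version old enough to have survived the "caught within a day"
    // window that the recent compromises all fell into?
    async function oldEnough(pkg: string, version: string): Promise<boolean> {
      const res = await fetch(`https://registry.npmjs.org/${pkg}`);
      const meta = await res.json();
      const published = meta.time?.[version]; // ISO date per published version
      if (!published) throw new Error(`${pkg}@${version} not found`);
      return Date.now() - Date.parse(published) >= COOLDOWN_MS;
    }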
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Does that really scale well? Thanks to cascading dependencies, even a medium sized project can import hundreds of dependencies. Can a developer really review them all to figure out whether they are safe, and whether there's a security issue that was fixed in a newer version of the package?
Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
> exempts security updates from its minimum release age
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
IMO, the most sustainable model is the Linux distro / BSD ports / Homebrew one. You don't push new libraries to the public registry; instead you write a packaging script that gets reviewed for every new change.
Another model is Perl's CPAN where you publish source files only.
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man hours and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
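For the curious, the guard itself is small. A sketch of the core check (TypeScript; assumes an npm-style package.json with a v2/v3 package-lock.json, and ignores prerelease tags):

    import { readFileSync } from "node:fs";

    const pkg = JSON.parse(readFileSync("package.json", "utf8"));
    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

    const errors: string[] = [];
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    for (const [name, spec] of Object.entries<string>(deps)) {
      // Reject ranges: only exact "x.y.z" versions are allowed.
      if (!/^\d+\.\d+\.\d+$/.test(spec)) {
        errors.push(`${name}: "${spec}" is a range, pin an exact version`);
        continue;
      }
      // And the pin must match what the lockfile actually resolved.
      const resolved = lock.packages?.[`node_modules/${name}`]?.version;
      if (resolved && resolved !== spec) {
        errors.push(`${name}: pinned ${spec} but lockfile resolved ${resolved}`);
      }
    }
    if (errors.length > 0) {
      console.error(errors.join("\n"));
      process.exit(1); // fail the merge check
    }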
I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.
What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
This gets me asking whether I have been hacked. For a few weeks now, both my main mbp and iPhone have been showing unexpected hangs of 1-30 seconds. I can’t find out what’s causing it - not memory pressure, not cpu load.
I am worried that the sluggishness appeared at about the same time on both devices.
For ios, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable lockdown mode which preemptively disables the entrypoints for exploits. So far I don't believe there has been any exploit which works with lockdown mode enabled.
Getting persistent root is actually quite difficult on mobile operating systems. iOS famously so, but unless you're running a custom ROM other than Graphene, Android has some solid protections as well.
Regular phone reboots are a security measure at this point.
It does, though: the exploit exists in memory. When you reboot the phone the memory is reset, and if it has modified system files, the checksums won't pass and your phone will refuse to boot, requiring it to be wiped and reinstalled.
These days most exploits can not persist through a reboot due to secure boot and other bootchain attestations. In the boot process, everything loaded gets checksummed and compared against signatures signed by Apple, but this only helps at load time, not while the phone is running. Of course if the phone is not patched, the exploit could be reloaded, but this would require revisiting a malicious website or reopening a malicious bit of media.
In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.
I assume that while Mythos may be really good at finding vulnerabilities, lighter models may still do a pretty good job of explaining/exploiting the vulnerability if given the patch which fixes it.
Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It’s written as if something happened that prevented them from following the schedule, but seemingly they chose to release the information. I hope I’m missing something where it was forcibly disclosed elsewhere.
Edit:
Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers.
With copy.fail the security patch wasn't listed as such so there wasn't a lot of attention on the issue as it remained dormant in most kernels for a while.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour through supposedly minor bug fixes that might hide a critical vulnerability, rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
Except that a lot of software likely is already broken in fun ways we currently don't know about. That is what makes it such a "fun" challenge. Supply chain attacks are one thing, but CVEs in already released software allowing other attackers are another.
As always, I know most of us work in IT, but things rarely are actually binary.
The whole (mistaken) belief that Linux and macOS didn't require AV was based on the execute bit being present, something Microsoft fixed back in XP by marking downloaded files as such and preventing them from being opened trivially.
If you have code execution, you can attack the OS.
Indeed, when one installs dependencies all over the Internet, or even better, key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion on how to install them, attackers have the work done for them.
To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust cargo.toml, and when importing new crates, selecting the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions; it actively pushes you towards semver, which leaves the door open to this.
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
Of course, why didn't anyone think of that ? I bet if someone started to ship software that has no errors they'll make a huge amount of money, especially from all the people that are security-minded !
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
This, this is something I don't understand: there are a billion ways to gain root once you control the user that regularly uses sudo.
This is only scary for rootless containers, as it skips an isolation layer, but we've started shipping distroless containers which are not vulnerable to this because they lack privilege escalation commands such as su or sudo.
Never trust software to begin with; sandbox everything you can, and don't run it on your machine at all if possible.
I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
I think we've already concluded that user isolation is not safe and shouldn't be trusted; that's why we've invested so hard into namespacing (containers). Users should only have what they need, if you really care about security and don't want to tolerate the overhead of virtualization-based security.
> this, this is something I don't understand there are a billion ways to gain root once you control the user that regulary uses sudo.
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command and there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example, today I logged in as root and blocked the three modules, per the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" ain't valid either in my case.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
Do you install system-wide software at all? How do you configure it?
That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
But they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. The only thing you gain is that you can trust your system - trust that the system itself is not compromised - which is only relevant for infrastructure, since if your user is compromised you're already fucked. Multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of the said service.
the breakdown looks something like this:
- you heavily compromise a single user <- exploit not relevant
- you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
- you somewhat compromise a lot of users via infra compromise <- where this hurts
Yes, you are very special and smart. Good for you!
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
right, a bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). Now, we generally trust the users to not be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).
Something that concerns me more is that I use things like gemini-cli or claude-cli via their own, non-sudo accounts with no ssh keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
Personally I'm choosing to keep my home server behind a VPN and to enable Lockdown Mode on my phone and laptop for a while until the dust settles. As well as just limiting the software installed to trusted projects only.
VM isolation would still be safe even with these kernel exploits.
You raise a really good point: if everyone is doing this at exactly the same lag, then it will eventually start hitting groups in sync, at the exact same time.
Fun fact: You still can't build the vllm container with updated dependencies since llmlite got pwned. Either due to regression bugs, or due to impossible transitive dependencies in the dependency tree that are not resolvable. There is just too much slopcode down the line, and too many dependencies relying on pinned outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor packages because nobody adhered to any API versioning system standards. Now it's every commit that can break things. That is not an improvement.
Write-only code is such a bad, bad idea. No one is reviewing 20k LOC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening, I don't care how many years you have been a software engineer. Yet that's the new thing and how we all are supposed to work, or else we are all Luddites.
I'm personally waiting to be downgraded to simply being called "lazy".
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact oriented too. It was honestly like waking up from a fever dream.
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
LLMs have been used to scan binary blobs for exploits already. What would be more effective is a system designed with multiple layers of security so any one exploit is largely useless.
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound completely dead with no Pipewire service available. ALSA not responding. Firefox dies immediately if I open a new tab or right click anywhere on the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process... I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched, and fwiw I haven't noticed any of those bugs on 44 across several very different machines.
I had a day 1 crashloop with KWin on the 2nd desktop, but on day 2 some package update fixed it. Honestly it isn't the first time Fedora upgrades have messed something up for me either but I do think it's more stable than the average Ubuntu release, not that I've upgraded ubuntu in like 5 yrs.
Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
> we're entering a more hardened era of software
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of funding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
If I hand-roll my logging library, I'm unlikely to include automatic LDAP requests based on message text (the infamous Log4j vulnerability).
I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad ideas that probably leaked from training sets.
Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general-purpose. As a consequence of doing more, it has more code and more bugs.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
>there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did
Have you read this old code? It's terrible, written with no care at all for security, and often in C. AI is much, much better at writing code.
Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel
There have been two LPE vulnerabilities and exploits in the Linux kernel announced today. After the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
One. "Copy Fail 2" and "Dirty Frag" are the same thing.
To be fair, to some extent that’s up to us. Time to get cleaning, I guess.
Are you intentionally avoiding saying 'thanks to LLMs', or is that implicit? All these recent mega-bugs are surfacing through lots of fuzzing and agentic bashing, right?
Thank you for reminding us all that you AI bros are still the most obnoxious people there are.
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
Yeah.
Right now it kinda feels to me like "Open Source" is the Russian army, relying on sheer numbers and a huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but can't do anything about it quickly enough.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
That is already how it works. The loner hacker in mom's basement working for free on his super-critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
I'm thinking of projects like curl [0].
This is a cornerstone of modern software development. If it died, or if it were taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad, verging on terrible [1].
We need to do better than this.
[0] https://curl.se/docs/governance.html
[1] https://lwn.net/Articles/1034966/
>As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. Any hope that companies would voluntarily be less exploitative than licenses permit should have been dashed by now.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
The sad truth about open source in 2026 is that it does not serve society the way it is advertised to, or the way it did back in the 90s.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
What we are seeing so far come out of the AI agent era is reduced, not increased, code quality. The few advances are far outweighed by all the slop that's thrown around, and that's unlikely to change.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
Isn't blaming AI for that similar to blaming C for buffer overflows?
More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.
And in the end it is a problem of processes and culture.
We are not in disagreement here. I'm not blaming AI, I'm blaming the culture around its use.
Faults are injected into the code at a constant rate per developer. Then there are the intentional injections.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
curl ... | sudo bash
yolo!
Will need those animal bones if all the industrial control systems get turned against us
Nuclear might be airgapped but what about water, power…?
Right, yeah, instead you can run ancient versions of everything and encounter a whole different class of risks
That's not at all what OP is talking about.
Most people will avoid sticking things in their mouth by default. They don't wait for the microbial cultures to come back positive to say no.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
> They don't wait for the microbial cultures to come back positive to say no.
They don't wait for the cultures to come back negative to say yes either. They just eat what they are served.
The billions of burgers served by fast food franchises with long histories of poisoning people would argue that delicious convenience overrides the hygiene instinct.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
Most people start out as kids who do exactly that.
That means going back to disabling Javascript or only allowing widely used, well-maintained Javascript libraries.
Indeed - a year ago we floated the idea that it is better to write your own code if you can than to pull in third parties. But it was heresy at the time to consider LLMs filling the gaps.
Today I’m limiting my exposure to dependencies more than ever, particularly for things that take a few hundred lines to implement. It’s a paradigm shift, no less.
This replaces supply chain trust with the trust in the LLM and the provider you're using. Even if you exclude model devs from your threat model and are running the LLM yourself, it's still an uninterpretable black box that is trained on the web data which can be and is manipulated precisely to attack LLMs during training. So this approach still needs proper supply chain security.
There are a lot of libs you really can't justify implementing from scratch. Mathjs and node-mysql jump to mind. Poisoned chains build up from small dependencies, and clearly staying on top of your dependency chain should be a full time job - if anyone was willing to pay someone to do that full time.
I am feeling really uncomfortable sitting on a large React project.
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
Not enjoying npm at all.
I've been wanting a capability based security model for years. Argued about it here in fact. Capabilities are kind of an object pointer with associated permissions - like a unix file descriptor.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
seL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast - faster than Linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in seL4. I want an OS that has all the features of my Linux desktop, but works like seL4.
Unfortunately, I don't think any programming language has the kind of language-level capabilities I want. Rust is really close. We need a way to restrict a 3rd-party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long-standing soundness bugs in Rust. And we need a capability-based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
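To make the library-level idea concrete, here's a minimal sketch of what a capability-scoped filesystem API could look like - a hypothetical design, not an existing crate, and it skips the symlink/".." hardening a real one would need:

    // Hypothetical capability-style filesystem API (sketch, not a real crate).
    use std::fs::File;
    use std::io;
    use std::path::PathBuf;

    /// A capability granting access to one directory subtree.
    struct Dir {
        root: PathBuf,
    }

    impl Dir {
        /// The only place a root capability is minted; main() does this once.
        fn new(root: impl Into<PathBuf>) -> Dir {
            Dir { root: root.into() }
        }

        /// Derive a narrower child capability (attenuation).
        fn subdir(&self, rel: &str) -> Dir {
            Dir { root: self.root.join(rel) }
        }

        /// Open relative to the capability, in the spirit of openat(2).
        /// A real implementation would canonicalize and reject ".." escapes;
        /// this sketch only shows the shape of the API.
        fn open(&self, rel: &str) -> io::Result<File> {
            File::open(self.root.join(rel))
        }
    }

    /// A third-party library can only touch what it is explicitly handed.
    fn untrusted_lib_load_config(cfg: &Dir) -> io::Result<File> {
        cfg.open("settings.toml") // blast radius: this subtree only
    }

    fn main() -> io::Result<()> {
        let root = Dir::new("/srv/myapp"); // ambient authority stops here
        let cfg = root.subdir("config");   // attenuated capability
        let _file = untrusted_lib_load_config(&cfg)?;
        Ok(())
    }

The point is the shape of the API: lib::add(1, 2) can't read your ssh keys because nothing ever handed it a capability that reaches there.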
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
Note that capabilities would not help for those bugs we are discussing today.
Those exploits are in the kernel, and userspace is only making the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent of what userspace does - you can have a POSIX layer with seL4 - and (2) that would mean many more context switches, so a performance drop.)
> Note that capabilities would not help for those bugs we are discussing today.
Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.
Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.
A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.
> (2) that would mean many more context switches, so a performance drop
I've heard this before. Is it actually true, though? The seL4 devs claim the context-switching performance in seL4 is way better than in Linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in seL4 doesn't involve any complex scheduler lookups. Your process just hands its scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).
But seL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.
I'd love to see some real benchmarks doing heavy IO or something in Linux and seL4. I'm not really sure how it would shake out.
Have you heard of pledge in OpenBSD?
I prefer its model of declaring "this is what I want to use"; any calls to code outside that error out.
Yes. But it's nowhere near as powerful as capabilities.
- Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.
- Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.
- You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.
I’m not trying to say use pledge/unveil to make capabilities, I’m saying use pledge/unveil to limit exposure.
To me it’s easier to get a program to let the system know what it needs vs. try to contain it from the outside.
Anyway, have a good one.
Realistically, most folks don't get paid to mitigate long term risks by deviation from the common (and more efficient) practice.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
I am so happy to go through another round of kernel RPMs after the freak out today!
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
Fortunately the issue isn’t fixed yet, so you don’t have to :)
Folks might have to start considering server-side technologies a bit more, or at least be mindful of build processes.
It's not just client-side npm though. Rust has the same problem.
Edit: and, ofc, what we're discussing here is Linux packages.
I am feasting on Schadenfreude as the SWE industry grapples with the messes it made and with uncertain employability in the near future; AI is not 30 years away like it was when I started.
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts due to hustle culture and "git pull the world" engineering, startups aura-farming on Twitter/social media about their cool sweatshop, labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in faces... chef's kiss
Hey it's all web-scale though! Good job!
Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people. Yet, somehow the world goes on, these companies siphon up money, and all harms they cause are externalized.
> Considering the amount of money at stake, Software is a deeply, deeply unserious and careless industry, and a great many practitioners are also deeply unserious and careless people.
What else do you expect, given the economic incentives on one side, and the immaturity of the discipline on the other? Writing robust software requires time, money and competence, in a purely empirical approach, since we have no fundamental theory of software. The pressure is for quantity and features in minimum time. The approaches are incompatible, and economics win every time.
Well yeah; data breaches have been a thing forever. Physical reality never opened a black hole in San Fran because someone committed a key to GitHub or a box of tapes destined for Iron Mountain vanished. A lot of the concerns are themselves social paranoias, not real concerns.
Which is where the unserious emerges but in a subtle way; taking such unserious things so seriously is not serious behavior. It's anxious and paranoid, aloof and clueless behavior.
Secure in tech skills but unserious otherwise.
Lacking a broad set of skills makes office workers who couldn't grow a potato inherently paranoid about their jobs.
IT is (was?) one of the very few ways for us in third-world countries to pull ourselves out of poverty by our own bootstraps, since social mobility is quite limited if you lack the right connections. I'm pleased with you being so happy about it being taken away to make more money for billionaires.
My pet theory is that package managers will one day be seen like we see object-oriented programming today. As something that was once popular but that we've since grown out of. It's also a design flaw that I see in cargo/Rust. Having to import 3rd party packages with who-knows-what dependencies to do pretty much anything, from using async to parsing JSON, it's supply chain vulnerability baked into the language philosophy. npm is no better, but I'm mentioning Rust specifically because it's an otherwise security-conscious language.
Rust is quite bad on this; having to rely on external crates for error handling or macros is even worse than having to pick an async runtime.
Yes, I mean crates like anyerror and syn.
But you can't expect the language std to supply you with every package under the sun.
A stdlib doesn't have to provide everything under the sun in order to be helpful here.
Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.
That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.
I don't have an answer what the alternative is going to look like. But smarter people than me may find something. C/C++ are doing fine without package managers. Go at least has a more capable standard library than Rust. But I'm not sure if Go's import github approach is the answer.
One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.
In C and C++'s case, the batteries included are POSIX + Khronos.
>C/C++ are doing fine without package managers.
They're not, either: every one of these projects contains a gigantic vendor/ folder full of unmaintained libraries, modified so much that keeping up with the latest changes is impossible, so they're stuck with whatever version they copied back in 2009.
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
[0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival - actually, according to Wikipedia, FreeBSD engineering lead); it would be weird for him to suggest OpenBSD.
I'm reminded of another legendary HN thread:
https://news.ycombinator.com/item?id=35079
It may well have been your point, but that it's the exact same person makes this even better
Also hilarious to see Drew Houston responding a bit later on the same thread:
> we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.
> let me know if it's something you're interested in, or if you want to chat about it sometime.
>drew (at getdropbox.com)
I haven't switched to BSD but I've been thinking about it for a while. I just saw Vultr has both FreeBSD and OpenBSD!
FreeBSD didn’t have user land ASLR until 2019 and, amongst other mitigations, still doesn’t have kASLR. It’s not a serious operating system for people who care about security. If you want FreeBSD and security take Shawn Webb’s HardenedBSD.
Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat. It's a speed bump, not a brick wall.
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
>Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
Is there anywhere that provides a good overview of the various OS protection technologies/approaches that exist and which OSes have implemented them?
So you have one example in hand and trash talked FreeBSD’s entire security team. Bold claims are fine but this is lazy.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
Ask yourself why Mythos was so easily able to develop a remote STACK buffer overflow vulnerability.
Define "so easily"?
They exploited a linear stack buffer overflow. Not a write-what-where or arb write. A linear stack buffer overflow in 2026! There are at least two distinct failures there:
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.
Well, as they're a FreeBSD dev, I would be surprised if they pointed anyone in a different direction.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
Darwin is its own thing really. There are parts from BSD, there are also parts from Mach and there are also unique parts.
FreeBSD is not a distro
What does the D in BSD stand for again?
Distribution. But it’s not a Linux distribution.
Distribution. Which is a different word than distro, with a different meaning. Like smart and smartass.
Only to be thrown out of the window with a plain "curl | sh".
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
Has everyone here already forgotten about the WireGuard tire fire?
https://lwn.net/Articles/850098
https://news.ycombinator.com/item?id=26507507
tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.
Was this process problem fixed?
Also funny they never show Debian in those tests/videos.
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
> Debian can't start digesting them until they're already public
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
The key words there are "when they're actually coordinated". Debian doesn't own the Linux kernel, and the kernel developers don't bother with coordinated disclosure, so the happy path of coordinated disclosure only happens when reporters make the non-obvious choice of reporting vulnerabilities to people other than the maintainers.
The fact that the kernel security team has decided that coordinating disclosure is someone else's problem, so it happens inconsistently.
How so?
While I am sure FreeBSD is more secure than your average Linux distro, I sure hope they are using these new AI models to harden everything.
FreeBSD just slaps at the problem. OpenBSD solves it.
I kid, I kid...
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
I think the author was suggesting "wait a week" as a one-time wait for fixes to be written and patches distributed for these specific prematurely-disclosed vulnerabilities, not an on-going suggestion for delaying all updates. But otherwise I agree with you.
Yep, that was my intent.
Oh! Not GP but skimmed too quickly
I think you misunderstood the article. The proposal isn't to wait a week after software has been published before installing it. It's: for the next seven days, starting now, just don't, because you probably don't have patches for these vulnerabilities, and even if you do, there are probably more scary vulnerabilities about to be discovered.
I think it's even more specific.
From TFA:
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
well then let's wait a month or even two months. The point of the wait period is primarily to avoid the new installation of exploits, not the execution of already installed exploits.
Every dependency compromise that I can remember "in the past few months" were discovered in hours, if not minutes (litllm, axios, bitwarden CLI, Checkmarx docker images, Pytorch lightning, intercom/intercom-php). What's more, the discovery of these compromises did not at all rely on whether the compromises were actively used.
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
A popular package has more exposure. When the artefact is published, the entire world can see it. Hopefully some people check the diff between versions. But without any delays then you could be hit by exploits nobody has seen yet.
This is why cooldowns have space for patches.
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
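If you just want a blunt approximation with stock npm (separate from the cooldowns.dev tooling), there's also the --before flag, which makes the resolver ignore anything published after a given date:

    # Resolve the dependency tree as it existed on this date; versions
    # published later are ignored (date is an example - pick your own cutoff).
    npm install --before=2026-02-01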
Even better, only use company-vetted repos; everyone is forbidden from installing directly from Internet repos.
This naturally doesn't work outside corporations.
More a case for something like this from Show HN three months ago
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Does that really scale well? Thanks to cascading dependencies, even a medium-sized project can import hundreds of dependencies. Can a developer really review them all to figure out whether they're safe and whether there's a security fix that only landed in a newer version of the package?
Yes, that is what is required. Every dependency needs an internal owner and reviewer. Every change needs to be reviewed and brought into the internal repository.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
I love the sibling response from @jp...
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
So you get security updates late too? Many vulnerabilities are in the wild for years before being noticed, and patched.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Presumably npm exempts security updates from its minimum release age, but even if it doesn't, I think the times where you need an important security update are relatively rare enough that handling the real cases on a case-by-case basis with whitelisting is fine. Outside of Next.js's React2Shell vulnerability last year, I'm not sure I've ever had a security update of a dependency written in a memory-safe language (ie. not C/C++) which I've installed through npm/PyPI/Cargo that patched a security vulnerability that had been making my application exploitable to others in practice. Almost all security vulnerabilities I've personally seen flagged through npm are about things I only use at build-time and are only relevant if a user can create and pass an arbitrary object to the function, which is rarely the case. Most security vulnerabilities I've encountered and fixed in working on web apps were things like XSS, SQL injections, and improperly enforced permissions, and they nearly always happened in the application's own code rather than inside a dependency.
> exempts security updates from its minimum release age
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
At least with our Renovate config, all dependencies have a 7 day cooldown, but marked security updates are immediate.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
You could still have security bumps happening (like dependabot).
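For reference, a Renovate config along the lines described above (7-day cooldown, immediate security updates) might look roughly like this - a sketch from memory, so double-check the option names against the Renovate docs for your version:

    {
      "packageRules": [
        {
          "matchManagers": ["npm"],
          "minimumReleaseAge": "7 days"
        }
      ],
      "vulnerabilityAlerts": {
        "enabled": true
      }
    }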
IMO, the most sustainable approach is the Linux distro/BSD ports/Homebrew model. You don't push new libraries to the public registry; instead you write a packaging script that gets reviewed for every new change.
Another model is Perl's CPAN where you publish source files only.
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man-hours, and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
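Concretely, that can be as small as pinning the base image by tag and digest instead of a floating tag - the image name and digest here are placeholders:

    # Floating: whatever was pushed most recently wins on every build.
    # FROM node:latest

    # Pinned: builds use exactly this image until we bump it on purpose.
    # (Digest is a placeholder; "docker buildx imagetools inspect" shows the real one.)
    FROM node:20-bookworm@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef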
Additionally, use only internal repos.
You'll also find your CI build times and flakey failures can be cut down massively by doing this.
Can someone help me understand the copyfail thing and how it relates to NPM packages?
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
> And the advice isn't just "update your kernel" because we are still finding new related issues?
The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.
NPM supply-chain attacks spread really quickly.
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
npm can run on linux.
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDOS attack, and there might have been other attacks hidden in all that traffic.
Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
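One small supporting piece if you're on npm: the save-exact setting makes "npm install <pkg>" record exact versions instead of ^ranges, so the guard has less to catch:

    # .npmrc - record exact versions rather than ^ranges on install
    save-exact=true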
I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.
pnpm has this, I think others may also have something similar.
https://pnpm.io/settings#minimumreleaseage
pnpm has added a new setting, minimumReleaseAge, enabled by default, just to try to mitigate these issues.
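For anyone looking for it, it's a one-liner in pnpm-workspace.yaml; the value is in minutes, if I'm reading the docs right:

    # pnpm-workspace.yaml - refuse versions published less than ~3 days ago
    minimumReleaseAge: 4320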
What’s interesting here is that the exploit chain itself isn’t especially novel anymore — page cache corruption has become a recurring pattern (Dirty Pipe, Copy Fail, Dirty Frag). The worrying part is how quickly public patches are now being reverse-engineered into weaponized exploits.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
This gets me asking whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can't find what's causing it - not memory pressure, not CPU load.
I am worried that the sluggishness appeared about the same time on both devices
For ios, rebooting your phone is extremely effective at removing exploits. The boot chain attestation stuff can verify the system is in a known state. If you are ultra paranoid you could enable lockdown mode which preemptively disables the entrypoints for exploits. So far I don't believe there has been any exploit which works with lockdown mode enabled.
If you are already exploited though, I doubt it helps
Getting persistent root is actually quite difficult on mobile operating systems. iOS famously so, but unless you're running a custom ROM other than Graphene, Android has some solid protections as well.
Regular phone reboots are a security measure at this point.
It does, though: the exploit exists in memory. When you reboot the phone, the memory is reset; if the exploit has modified system files, the checksums won't pass and your phone will refuse to boot, requiring it to be wiped and reinstalled.
These days most exploits cannot persist through a reboot due to secure boot and other bootchain attestations. In the boot process, everything loaded gets checksummed and compared to signed signatures from Apple, but this only helps at load time, not while the phone is running. Of course if the phone is not patched, the exploit could be reloaded, but this would require revisiting a malicious website or reopening a malicious bit of media.
the lottery of either getting a new supply-chain attack or the fixes from Mythos with every single update
It really pisses me off that responsible disclosure timelines are being ignored.
In this case, no insiders broke the embargo. It was reverse engineered from the patch by an unrelated third party and a proof of concept immediately came out of it. At that point, it's kinda fair game.
I assume that while Mythos may be really good at finding vulnerabilities, lighter models may still do a pretty good job of explaining/exploiting the vulnerability if given the patch which fixes it.
if you don't already consider responsible disclosure a quaint idea, you may want to start warming up to that view
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
Less a gentleman's agreement and more of a question of economic incentives going away. Companies aren't paying out bounties at the rates they used to (possibly because they've realized there's little financial incentive to do so for most findings) and simultaneously they're being inundated with AI slop findings that somehow have to still be triaged and evaluated.
The dirty frag repo says:
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It's written as if something happened that prevented them from following the schedule, yet seemingly they chose to release the information themselves. I hope I'm missing something where it was forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
> Due to external factors, the embargo has been broken, so no patch exists for any distribution.
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
Presumably:
* https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
* https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
If the fix commit is public, so is the issue being fixed.
With copy.fail the security patch wasn't listed as such so there wasn't a lot of attention on the issue as it remained dormant in most kernels for a while.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour through supposedly minor bug fixes that might hide a critical vulnerability rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
"If it ain't broke, don't fix it" is its own area of risk that people often ignore
Except that a lot of software likely is already broken in fun ways we currently don't know about. That is what makes it such a "fun" challenge. Supply chain attacks are one thing, but CVEs in already released software allowing other attackers are another.
As always, I know most of us work in IT, but things rarely are actually binary.
Remember the whole discussion about how UNIX supposedly didn't need anti-virus, while people talked down PCs?
Behaviours matter more than OS security primitives.
The whole (mistaken) belief that Linux and macOS didn't require AV was based on the execute bit being present - something Microsoft addressed back in XP by marking downloaded files as such and preventing them from being opened trivially.
If you have code execution, you can attack the OS.
Indeed, when one installs dependencies all over the Internet, or even better, key projects use "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" as default suggestion on how to install them, attackers have the work done for them.
To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust cargo.toml, and when importing new crates, select the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions, it actively pushes you towards semver which leaves the door open to this.
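For anyone else doing this, note that a bare version in Cargo.toml is a caret requirement; pinning needs the = operator (version number below is just an example):

    [dependencies]
    # "1.0.196" alone means "^1.0.196" (any semver-compatible newer release);
    # the = prefix locks this exact version.
    serde = "=1.0.196"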
The post is about Linux vulnerabilities, but given the recent supply chain attacks, I'd be especially careful with Homebrew: https://x.com/i/status/2052106143271354859
Often convenience and security are at odds, but `export HOMEBREW_NO_AUTO_UPDATE=1` is more convenient and more secure.
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
The proof of concept code is out before patches are available for any distro.
The scary part is how many teams still have builds implicitly depending on “whatever was latest 5 minutes ago”.
Containerization improved reproducibility in some ways, but in practice a lot of CI pipelines still behave like live dependency roulette.
I got rid of half of my VSCode extensions a couple days ago; it's too risky.
Those things scare the crap out of me…
Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
I dislike FUD like this :/
Maybe the new software should not have any errors. I know, I have higher expectations than the average commercial software customer.
Of course, why didn't anyone think of that ? I bet if someone started to ship software that has no errors they'll make a huge amount of money, especially from all the people that are security-minded !
Please grow a brain.
> Copy Fail 2: Electric Boogaloo
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
"we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2". Problem solved
I still can’t believe people are ok with software updates every day. Looking at you Claude code
It's a two-edged sword. You're damned if you do and damned if you don't update.
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
This, this is something I don't understand: there are a billion ways to gain root once you control a user that regularly uses sudo.
This is only scary for rootless containers, as it skips an isolation layer. But we've started shipping distroless containers, which are not vulnerable to this because they lack privilege-escalation commands such as su or sudo.
Never trust software to begin with; sandbox everything you can, and don't run it on your machine at all if possible.
I doubt your “distroless” container is any safer for this vulnerability.
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
I agree that de facto the biggest security flaw in Linux is "okay I'm tired of getting interrupted all day assisting you, I know you're competent, I'll put you on the sudoers list."
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
I think we've concluded already that user isolation is not safe and shouldn't be trusted; that's why we've invested so hard in namespacing (containers). Users should only have what they need if you really care about security and don't want to tolerate the overhead of virtualization-based security.
> This, this is something I don't understand: there are a billion ways to gain root once you control a user that regularly uses sudo.
I won't go into all the details, but... it's totally possible to not have the sudo command (or similar) on a system at all, and to have su with the setuid bit off.
On my main desktop there's no sudo command, and there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example, today: I logged in as root and blocked the three modules per the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" isn't valid in my case either.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
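For anyone curious what getting there looks like, a rough sketch - and don't do this blindly; as noted above, you want working out-of-band root access before removing your only escalation path:

    # List every setuid/setgid binary on the root filesystem
    find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls

    # Then strip the bit where you've decided you don't need it, e.g.:
    chmod u-s /bin/su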
Do you install system-wide software at all? How do you configure it?
That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
NixOS comes to mind, rootless Podman, QubesOS.
But they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. The only thing you gain is that you can trust your system - you can trust that the system itself is not compromised - which is only relevant for infrastructure, since if your user is compromised you're already fucked. Multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be the thousands of users of said service.
the breakdown looks something like this:
Would you mind sharing the relevant config?
Yes, you are very special and smart. Good for you!
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
right, a bigger issue is multitenant systems, which are common in academia (I manage several such systems for various experiments). Now, we generally trust the users to not be malicious, but most don't get sudo, because physicists tend to think they know what they're doing when they don't really (except for me, of course).
Something that concerns me more: I use things like gemini-cli or claude-cli via their own non-sudo accounts with no ssh keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
Perhaps, but it makes a huge difference if you're running the vulnerable code in a container or as a different user.
It seems like this round of vulns is going to be significant. What is the right response?
Personally I'm choosing to keep my home server behind a VPN and to enable Lockdown Mode on my phone and laptop for a while until the dust settles. As well as just limiting the software installed to trusted projects only.
VM isolation would still be safe even with these kernel exploits.
I've been doing a lot of that lately
I do wonder a bit what happens as standard practice becomes to lag more and more and more. Who is there left that's looking, that's finding out?
I think there’s already a big market of supply chain security companies that are proactively scanning dependencies for this sort of thing.
They’re always racing to be the first one to write an article about a case.
you raise a really good point. if everyone is doing this at exactly the same lag then it will eventually start hitting groups in sync at the exact same time
100% doing this, sadly
Fun fact: you still can't build the vllm container with updated dependencies since llmlite got pwned. Either due to regression bugs, or due to impossible transitive dependencies in the dependency tree that are not resolvable. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor versions because nobody adhered to any API-versioning standards. Now it's every commit that can break things. That is not an improvement.
Write-only code is such a bad, bad idea. No one is reviewing 20k-LOC PRs with 15 new dependencies in an afternoon. Sorry, it's just not happening, I don't care how many years you have been a software engineer. Yet that's the new thing and how we're all supposed to work, or else we're Luddites.
I'm personally waiting to be downgraded to simply being called "lazy".
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
slopcode is a pejorative that means nothing to me. if you have an actual criticism to make, then do it
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
So no external libraries for anything? Billions of lines of code that duplicate the same thing n-times across an organization?
And the benefit is the obscurity of "no one will know how to exploit them"?
No, thanks.
LLMs have been used to scan binary blobs for exploits already. What would be more effective is a system designed with multiple layers of security so any one exploit is largely useless.
Next: the back doors are written by the LLM!
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound completely dead with no PipeWire service available. ALSA not responding. Firefox dies immediately if I open a new tab or right-click anywhere in the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process. I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
I have had none of those issues on Fedora 44, FWIW.
ditto. my upgrade from 43 - 44 went very smooth
If this is still the same install that you've been using since 38, you might find a clean install resolves some issues (whether or not your upgrade got botched). Also helps me get rid of software I installed that I don't use anymore, which I feel is relevant to this article. But part of why I love Silverblue so much is I don't have to worry about upgrades getting botched and fwiw as well, I haven't noticed any of those bugs on 44 across several very different machines.
I had a day 1 crashloop with KWin on the 2nd desktop, but on day 2 some package update fixed it. Honestly it isn't the first time Fedora upgrades have messed something up for me either but I do think it's more stable than the average Ubuntu release, not that I've upgraded ubuntu in like 5 yrs.
Fedora 44 here, no issues.