TanStack NPM Packages Compromised

(github.com)

257 points | by varunsharma07 2 hours ago

63 comments

  • cube00 26 minutes ago

    Please be careful when revoking tokens. It looks like the payload installs a dead-man's switch at ~/.local/bin/gh-token-monitor.sh as a systemd user service (Linux) / LaunchAgent com.user.gh-token-monitor (macOS). It polls api.github.com/user with the stolen token every 60s, and if the token is revoked (HTTP 40x), it runs rm -rf ~/.

    https://github.com/TanStack/router/issues/7383#issuecomment-...
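    A quick defensive step before revoking anything is to check whether those artifacts are present. This is a sketch based only on the paths reported above; the exact systemd unit and LaunchAgent file names are assumptions derived from the service labels mentioned in the comment.

```shell
check_gh_token_monitor() {
  # Paths reported in the linked issue; the unit/plist file names below are
  # guesses based on the service labels mentioned above.
  found=""
  for f in \
    "$HOME/.local/bin/gh-token-monitor.sh" \
    "$HOME/.config/systemd/user/gh-token-monitor.service" \
    "$HOME/Library/LaunchAgents/com.user.gh-token-monitor.plist"
  do
    if [ -e "$f" ]; then
      echo "FOUND: $f"
      found=yes
    fi
  done
  # Nothing found: report clean (absence is not proof of safety, but presence
  # is a strong signal the switch is armed).
  [ -z "$found" ] && echo "clean"
}
check_gh_token_monitor
```

    If anything is FOUND, disable the service/agent and remove the script before touching the token.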

    • meander_water 7 minutes ago

      I don't understand why people were downvoting this comment on the issue page

    • bpavuk 14 minutes ago

      if so, then this is actual terrorism of the software world!!

      • embedding-shape 2 minutes ago

        Only if the goal is to actually spread fear in a civilian population. It's not clear what the motivation is here besides "the worm spreads itself lol".

    • fragmede 16 minutes ago

      One should always have had backups configured, but if this is what gets people to set up backups, so much the better.

  • jonchurch_ 38 minutes ago

    It is unfortunate, but this is evidence (IMO) that Trusted Publishing is still ~~not secure~~ not enough by itself to securely publish from CI, since an attacker inside your CI pipeline or with stolen repo admin creds can easily publish. This isn't new information, and TP is not meant to guarantee against this, but migrating from local publish w/ 2fa to TP introduces this class of attack via compromise of CI. (edit: changed "still not secure" to "still not enough by itself" bc that is the point I want to make)

    Going to Trusted Publishing / pipeline publishing removes the second factor that typically gates npm publish when working locally.

    The story here, while it is evolving, seems to be that the attacker compromised the CI/CD pipeline, and because there is no second factor on the npm publish, they were able to steal the OIDC token and complete a publish.

    Interesting, but unrelated I suppose, is that the publish job failed. So the payload that was in the malicious commit must have had a script that was able to publish itself w/ the OIDC token from the workflow.

    What I want is CI publishing to still have a second factor outside of Github, while still relying on the long lived token-less Trusted Publisher model. AKA, what I want is staged publishing, so someone must go and use 2fa to promote an artifact to published on the npm side.

    Otherwise, if a publish can happen only within the Github trust model, anyone who pwns either a repo admin token or gets malicious code into your pipeline can trivially complete a publish. With a true second factor outside the Github context, they can still do a lot of damage to your repo or plant malicious code, but at least they would not be able to publish without getting your second factor for the registry.

    • donmcronald 24 minutes ago

      I'd like to have touch to sign from a YubiKey or similar. The whole idea of trusting the cloud to manage credentials on your behalf seems like a mistake.

    • captn3m0 36 minutes ago

      The Astral blog recently pointed out how they do release gates (manual approvals on release workflows) even with trusted publishing. And sadly, none of the documentation for trusted publishing (NPM/PyPI/RubyGems) even mentions this possibility, let alone defaults to it.

      • jonchurch_ 30 minutes ago

        I have not read that blog post. But unfortunately (and I'd love to be wrong!) it doesn't matter if a repo admin's token gets exfiled, because if you put your gates within Github, an admin repo token is sufficient to defang all of them from the API without a 2fa challenge.

        That is why I want 2fa before publish at the registry, because with my gh cli token as a repo admin, an attacker can disable all the Github branch protection, rewrite my workflows, disable the required reviewers on environments (which is one method people use for 2fa for releases: have workflows run in a GH environment which requires approval and prevents self review), enable self review, etc etc.

        It's what I call a "fox in the hen house" problem, where you have your security gates within the same trust model as you expect to get stolen (in this case, having a repo admin token exfiled from my local machine)

        • captn3m0 26 minutes ago

          https://docs.github.com/en/actions/how-tos/deploy/configure-... is the feature they use.

          > We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.

          > https://astral.sh/blog/open-source-security-at-astral

          From what I understand, you need a website login, and not a stolen API token, to approve a deployment.

          But I agree in principle: the registry should be able to enforce web 2fa. But the defaults can be safer as well.

          • jonchurch_ 21 minutes ago

            I tested approving a deployment via API last week w/ my gh cli token (well, had claude do it while I watched). Again, I really want to be wrong about this, but my testing showed that it is indeed trivial to use the default token from my gh cli to approve via API. (repo admin scope, which I have bc I am admin on said repo)

            Nothing in this link [1] proves what I said, but it is the test repo I was just conducting this on, and it was an approval gated GHA job that I had claude approve using my GH cli token

            I also had claude use the same token to first reconfigure the environment to enable self-approves (I had configured it off manually before testing). It also put it back to self-approve disabled when it was done hehe

            [1] https://github.com/jonchurch/deploy-env-test/actions/runs/25...
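            (For context on what such an API-only approval looks like: GitHub's REST API documents an endpoint for reviewing pending, environment-gated deployments. A sketch with placeholder IDs; the request body shape is per GitHub's docs, and no web login or 2fa challenge is involved when calling it with a repo-scoped token.)

```
POST /repos/OWNER/REPO/actions/runs/RUN_ID/pending_deployments

{"environment_ids": [ENV_ID], "state": "approved", "comment": "approved via API"}
```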

    • herpdyderp 25 minutes ago

      I was always confused at why people claimed trusted publishing would make any difference to this kind of supply chain attack.

  • gajus an hour ago

    Reminder to secure your npm environments.

    https://gajus.com/blog/3-pnpm-settings-to-protect-yourself-f...

    Just a handful of settings to save a whole lot of trouble.
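    (For readers who don't click through, a sketch of the kind of hardening meant here, assuming pnpm 10+; option names and units are per pnpm's docs, and the values are illustrative.)

```yaml
# pnpm-workspace.yaml
minimumReleaseAge: 10080   # minutes; only install versions at least 7 days old
onlyBuiltDependencies: []  # allow-list of deps whose install scripts may run; empty = none
```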

    • arcza 11 minutes ago

      Wild claim that setting the minimum age to 7 days will result in me "never" getting a supply chain npm vuln.

    • Narretz 40 minutes ago

      Isn't this article wrong about npm minimum release age? 1. The config is min-release-age. 2. For some reason they have chosen to make it days instead of minutes: https://docs.npmjs.com/cli/v11/using-npm/config#min-release-...

      Completely unforced fragmentation of the dependency manager space imo

      • bakugo 37 minutes ago

        This confused me too, until I realized that the article is about pnpm, not npm (pnpm reads .npmrc for some reason, despite not having the same options as npm)

        On a related note, it seems to be impossible to find the documentation of min-release-age by googling it. Very annoying.
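        (To spell out npm's own variant, since it's hard to find: it's an .npmrc key, and per the sibling comment its unit is days rather than pnpm's minutes. The value is illustrative.)

```ini
# .npmrc (npm 11+)
min-release-age=7
```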

    • rvz 39 minutes ago

      And absolutely pin, pin, pin, ALL your dependencies.

      If I see a package version dependency that looks like this: ^1.0.0 or even this: "*", then stop reading, pin it to a secure version immediately.

      • AgentME 35 minutes ago

        Npm's package-lock.json already handles pinning everything to exact versions, including subdependencies. Pinning exact versions in package.json doesn't affect your subdependencies.
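        (An illustration of why the lockfile suffices: each entry records an exact version plus an integrity hash, so a ^ range in package.json can't float at install time when you install with `npm ci`. Package name, version, and hash below are placeholders in the lockfile-v3 shape.)

```json
"node_modules/some-dep": {
  "version": "1.2.3",
  "resolved": "https://registry.npmjs.org/some-dep/-/some-dep-1.2.3.tgz",
  "integrity": "sha512-..."
}
```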

      • captn3m0 31 minutes ago

        I've been collecting things you can't pin:

        - Python inline dependencies in PEP-0723, which you can pin with a==1.0, but can't be hash-pinned afaik.

        - The bin package manager lets you pin binaries, but they aren't hash-pinned either.

        - The pants build tool suggests vendoring a get-pants.sh script[0] but it downloads the latest. Even if you pass it a version, it doesn't do any checks on the version number and just installs it to ~/.local/bin

        [0]: https://github.com/pantsbuild/setup/blob/gh-pages/get-pants....

      • jonchurch_ 35 minutes ago

        it's so wild to have seen this advice reverse course over the past year.

        it used to be that projects that pinned deps were called out as being less secure due to not being able to receive updates without a publish.

        different times, different threat model I suppose

  • chrisweekly 26 minutes ago

    Postinstall scripts are deadly. Everyone should be using pnpm.

    Crazy that an "orphan" commit pushed to a FORK(!) could trigger this (in npm clients). IMO GitHub deserves much of the blame here. A malicious fork's commits are reachable via GitHub's shared object storage at a URI indistinguishable from the legit repo. That is absolutely bonkers.
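    (For those staying on npm itself: lifecycle scripts can be switched off wholesale; `ignore-scripts` is a standard npm config key, though packages that legitimately need a build step will then require running it by hand.)

```ini
# ~/.npmrc (or a per-project .npmrc)
ignore-scripts=true
```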

    • fabian2k 24 minutes ago

      Once you run your app with the updated dependencies, that code is executed anyway. And root or non-root doesn't matter, the important stuff is available as the user running the application anyway.

  • getcrunk 5 minutes ago

    I think we are at the point where everyone really needs to run each project in its own vm.

    Given the recent LPE vulns, Docker 100% won't cut it.

    And containers were never meant primarily as a security boundary anyways

  • fabian2k an hour ago

    At least it was only online for 1-2 hours at most, and it didn't affect react-query. But still a bunch of quite well-known packages.

    This doesn't really feel sustainable, you're rolling the dice every time the dependencies are updated.

  • nathanmills 22 minutes ago

    TanStack? Jia Tan? Who is falling for this???

    • darepublic 5 minutes ago

      it's a cult in React web dev circles. Just be glad that you never had to encounter devs who insist that everything must be on the "tan" stack.

  • varunsharma07 2 hours ago

    The Mini Shai-Hulud worm is actively compromising legitimate npm packages by hijacking CI/CD pipelines and stealing developer secrets. StepSecurity's OSS Package Security Feed first detected the attack in official @tanstack packages and is tracking its spread across the ecosystem in real time.

    • janice1999 an hour ago

      How did you guys detect it? Do you use it internally or do you monitor popular packages?

  • captn3m0 17 minutes ago

    1. _Multiple third-party companies_ can detect these obviously malicious packages in almost-real-time

    2. NPM not only still publishes them, but also keeps distributing them for well beyond 5 minutes.

    Microsoft/GitHub/NPM can only keep repeating "security is our top priority" so many times. But NPM still doesn't detect these simple attacks, and we keep having this every week.

  • j-bos 6 minutes ago

    > it installs that commit's declared dependencies (which include bun) and then runs its prepare lifecycle script

    Again? How have lifecycle scripts not been defaulted off by now? Yes, breaking things is bad, but come on: this keeps happening and the fix is easy. If a *JavaScript* build relies on a dependency-of-a-dependency's build-time script, then it's worth paying in braincells or tokens to figure it out and fix the build process, or, as lately, uncover an exploit chain. This isn't even a compiled language.

  • ChoosesBarbecue an hour ago

    > Please be careful when revoking tokens. It looks like the payload installs a dead-man's switch at ~/.local/bin/gh-token-monitor.sh as a systemd user service (Linux) / LaunchAgent com.user.gh-token-monitor (macOS). It polls api.github.com/user with the stolen token every 60s, and if the token is revoked (HTTP 40x), it runs rm -rf ~/. (It looks like it might also have a bunch of persistence mechanisms. I haven't studied these closely.)

    Jesus, that's vindictive.

    • mediaman 29 minutes ago

      I could imagine this might also be an attempt to cover its tracks. If it gets a 40x, it means it's been found: time to nuke everything it can.

  • sn0n 41 minutes ago

    As Theo goes live…

  • rvz 43 minutes ago

    Once again, Shai-Hulud is wreaking havoc in the JavaScript and TypeScript ecosystems via NPM.

    It is one of the worst ecosystems that has been brought into the software industry, and it is almost always via NPM. Not even Cargo (Rust) or go mod (Golang) get as many attacks, because they at least encourage you to use the standard library.

    Both JavaScript and TypeScript have no real standard library to lean on and want you to import hundreds of libraries, increasing the risk of a supply chain attack.

    At this point, JS and TS are considered harmful.

    • robertjpayne 24 minutes ago

      I don't really buy this. NPM is targeted because it's the largest attack surface with the biggest payoff for a successful attack.

      Other ecosystems' package managers are really no different in a lot of ways.

      NPM's biggest fault is just that it allows post/pre-install scripts by default without user intervention.

    • AlotOfReading 15 minutes ago

      I wonder whether NPM has surpassed the costs of the billion dollar mistake, null references. NPM hasn't been around as long, but the industry is much bigger today than it was when systems languages were dominant.

    • squidsoup 30 minutes ago

      If cargo was as popular as npm, the same issues would surface.

    • skydhash 29 minutes ago

      The Standard C library is also very small. Even though there’s POSIX, for anything that’s not system programming, you will be using libraries.

      The difference is that the usual C libraries don't split the project into tiny pieces for no good reason. You have to be as big as GTK before splitting into multiple libraries makes sense, in my opinion.

  • slopinthebag an hour ago

    My decision to abandon the JS ecosystem and language entirely continues to pay off. What a mess...

    I am, however, concerned that this will pwn my workplace. We don't use Tanstack but this seems self-propagating and I doubt all of our dependencies are doing enough to prevent it.

    • nine_k an hour ago

      Abandon NPM in exchange for what? Cargo? Go get? Pip install?

      Every package manager that does not analyze and run tests on the packages being uploaded (like Linux distros do) is vulnerable.

      • ljm 44 minutes ago

        The community decided it's too much effort to vet code before publishing it so here we are.

        (I'm not being stupid, even ten years ago there were arguments on HN about whether you should audit your dependencies)

        I landed on the 'yes, you should know what code you are getting involved with' side.

      • devttyeu 42 minutes ago

        Cargo is spiritually based on NPM so it's not much better.

        Go Get is closer to always locking dependencies unless you explicitly upgrade them with a go get, so it's much much better in my view.

        Yes, you can lock deps in NPM/Cargo/etc. but that's not the default. It is the default in Go.

        In Go projects, my policy for upgrading dependencies includes running a full AI audit of all code changed across all dependencies. It comes out to ~$200 in tokens every time, but it gives those warm 'not likely to get pwned' vibes. And it comes with a nice report of likely breaking changes, etc.

        • nine_k 36 minutes ago

          > comes out to ~$200 in tokens every time

          BTW, a curated mirror of <whatever ecosystem> packages, where every package is guaranteed to have been analyzed and tested, could be an easy sell now. It's also relatively easy to create, with the help of AI. $200 every time is less pleasant than, say, $100/mo for the entire org.

          Docker does something vaguely similar for Docker images, for free though.

          • AgentME 33 minutes ago

            People are already scanning npm constantly. You can limit yourself to pre-scanned packages by setting npm's minimum release age setting to 1 or 2 days (a timeframe that all the recent high-profile malicious package versions were unpublished within).

            • nine_k 30 minutes ago

              Note to self: the test suite for vetting a package should include setting the system date some time in the future, to check if an exploit is trying to sleep long enough to defeat the age limit.
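              (One way to sketch that check is with libfaketime, assuming it's installed; the +90d offset and the idea of re-running install scripts under a skewed clock are illustrative, not a vetted methodology.)

```shell
# Re-run the package's install with the clock pushed 90 days ahead, to flush
# out payloads that sleep past a minimum-release-age window before detonating.
faketime '+90d' npm install --foreground-scripts ./package-under-test
```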

        • voxl 34 minutes ago

          It's insane to me that you spend $200 on a report you likely rarely read in detail or double-check for correctness, yet you're doing it to feel good about security.

          • devttyeu 9 minutes ago

            If it runs in a harness that will alert me when something dodgy is detected I'm fine to stay at that level.

            I don't read it in detail because reading in detail is precisely what I delegate to the harness. The alternative is that I delegate all this trust to package managers and the maintainers which quite clearly is a bad idea.

            Whether the $$ price tag is worth it is relative. Also, in Go you don't update all that often: really only when something breaks or there is a legitimate security reason to do so, which in deep systems software is quite infrequent.

            Funnily enough, for frontend NPM code our policy was to never ever upgrade and to run with locked dependencies, i.e. JS deps that were a few years old. For internal dashboards it was perfectly fine: never missed a feature and never had a supply chain close call.

      • vsgherzi an hour ago

        Even Linux was subjected to an attack, in xz-utils. Granted, it is much harder there, and they have a much better auditing process (something npm should learn from). There really isn't a silver bullet here, unfortunately. The industry as a whole needs to get more serious about this.

        • nine_k 43 minutes ago

          There's no silver bullet, but getting an exploit into xz took extraordinary effort, a long time, and bespoke code, because it needed to slip under the radar of actual humans reading the code. A Shai-Hulud-style attack won't work with any reasonable Linux distro the way it does with npm.

      • TZubiri 9 minutes ago

        Just writing the actual code that you are being paid to write

      • slopinthebag 20 minutes ago

        Both Cargo and Go's package manager are a lot better. Can you name comparable security incidents they've had in the last 5 years?

        Idk about Python, I refuse to use that language for other reasons.

      • jadbox 43 minutes ago

      Exactly, the only real way to escape this madness is to move back to "standard libs", where your project only depends on 1-3 core libraries. For example, .NET and Java are almost entirely 'kitchen sink' ecosystems. Arguably, for simple projects, Go has a fairly large standard lib.

    • Havoc an hour ago

      Yeah it's a dumpster fire, but I also don't think the other major ecosystems like say python's pypi are any safer structurally

    • bakugo an hour ago

      I highly recommend enforcing a minimum dependency release age of at least a week across all package managers used at your workplace. Most package managers support it now, and it will save you from the vast majority of these attacks.

      https://news.ycombinator.com/item?id=47582632

      • AgentME 31 minutes ago

        Highly recommend using the minimum release age setting, though I think a week is probably overkill. Did any of the recent supply-chain attacks have a malicious version up for more than a day?

  • ljm an hour ago

    So when do we call out NPM as an easy supply chain vector, and Microsoft's ownership of NPM and their prioritisation of AI at any cost?

    NPM is the windows of package managers right now.

    • DrewADesign 42 minutes ago

      People have for years. The real question is whether people enjoy not putting any thought into their super convenient JavaScript stack too much to actually do anything about it. Delaying updates on the assumption that a vulnerability will be discovered within two days or whatever is putting a knee brace on a leg that needs to be amputated. Sooner or later there will be a vulnerability good enough not to be caught in a couple of days, or a zero-day damaging enough that not updating immediately is a huge risk. Assuming they won't hit anything critical enough to disastrously compromise your stack is wishful thinking at its finest.

      • svachalek 27 minutes ago

        The part that always gets me is that I tend to only install a few packages, like React and maybe some kind of data access layer. But let that recurse down a few levels and suddenly you've installed a thousand packages, some of them hopelessly obsolete, some of them for patently stupid things that are 1 line of code, etc. I.e., you can't choose to be thoughtful if the main entry points into the language are all built on a pile of garbage.

    • nine_k an hour ago

      Now that npm supports --before, yarn supports npmMinimumAge, and pnpm supports minimumReleaseAge, it's quite possible to stay safe and avoid accidental bleeding-edge upgrades. Stay a couple of months in the past; give testers time to look at newer releases and vet their safety (or report an exploit attempt).
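      (Side by side, using the option names given above; units and availability vary by tool version, and the values are illustrative.)

```
# npm: per-invocation flag, resolves the registry as of a past date
#   npm install --before=2025-11-01
# pnpm: pnpm-workspace.yaml, value in minutes
#   minimumReleaseAge: 10080
# yarn: .yarnrc.yml, setting name as cited above
#   npmMinimumAge: 10080
```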

      • Narretz 37 minutes ago

        --before doesn't save you globally; only min-release-age does, which has been in npm since March, iirc.