Anthropic's report smells a lot like bullshit

(djnn.sh)

423 points | by vxvxvx 4 hours ago

146 comments

  • prinny_ 3 hours ago

    The lack of evidence before attributing the attack(s) to a Chinese state-sponsored group makes me connect this report to recent statements from companies in the AI space about how China is about to surpass the US in the AI race. Ultimately, statements and reports like these seem more like an attempt to make the US government step in and be the big investor that keeps the money flowing than anything else.

    • JKCalhoun 3 hours ago

      Do public reports like this one often go deep enough into the weeds to name names and list specific tools, techniques, and URLs?

      I don't doubt, of course, that reports intended for government agencies or security experts would have those details, but I am not surprised that a "blog post" like this one lacks them.

      I just don't see how one goes from "this is lacking public evidence" to "this is likely a political stunt".

      I guess I would also ask the skeptics (a bit tangentially, I admit): do you think what Anthropic suggested happened is in fact possible with AI tools? I mean, are you denying that this could even happen, or just that Anthropic's specific account was fabricated or embellished?

      Because if the whole scenario is plausible that should be enough to set off alarm bells somewhere.

      • woooooo 2 hours ago

        There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

        It's like the inverse of "nobody got fired for using IBM" -- "nobody can blame you for getting hacked by superspies". So, in the absence of any evidence, it's entirely possible they have no idea who did it and are reaching for the most convenient label.

        • JKCalhoun 2 hours ago

          That's fair. If the actor (and it's a Chinese state actor here) is what is being questioned as "bullshit" then that should be the discourse in the article and in this thread.

          Instead, the lack of a paper trail from Anthropic seems to have people questioning the whole event?

          • hnthrowaway747 43 minutes ago

            Exactly, and anyone can do so without even needing much evidence.

            It’s allowed in the current day and age to criticize someone else for not providing evidence, even when that evidence would make it easier for the attackers to tune their attack to avoid being identified, and everyone will be like “Yeah, I’m mad, too! Anthropic sucks!” In the process, that only creates friction for the only company that’s spent significant ongoing effort to prevent AI disasters by trying to be the responsible leader.

            I’ve really had my fill of the current climate where people are quick to criticize an easy target just because they can rally anger. Anyone can rally anger. If you must rally anger, it should be against something like hypocrisy, not because you just get mad at things that everyone else hates.

      • rfoo 2 hours ago

        > Do public reports like this one often go deep enough into the weeds to name names

        Yes. They often include IoCs, or at the very least, the rationale behind the attribution, like "sharing infrastructure with [name of a known APT effort here]".

        For example, here is a proper decade-old report from the most unpopular country right now: https://media.kasperskycontenthub.com/wp-content/uploads/sit...

        It established solid technical links between the campaign they were tracking and earlier, already-attributed campaigns.

        So even our enemy got this right, ten years ago. There really is no excuse for this slop.

      • zaphirplane 2 hours ago

        Not vested in the argument, but it stood out to me that your argument is like a TV courtroom's: "if it's plausible, the report is true." That is very far from "the report is credible."

        • JKCalhoun 2 hours ago

          You're right. Lacking information, I am instead coming across as willing to give Anthropic the benefit of the doubt here.

          But I'm also often a Devil's Advocate and the tide in this thread (well, the very headline as well) seemed to be condemning Anthropic.

      • cmiles74 31 minutes ago

        The report itself reads like a humblebrag at best, marketing materials at worst. I have to agree with the OP: taking this report at face value requires that you trust Anthropic, a lot.

        Their August threat intelligence report struck similar chords.

        https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6...

  • KaiserPro 3 hours ago

    When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin, I was asked to use a modified version of a foundation model that was steered towards infosec stuff.

    We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

    It helped a little, but it wasn't all that helpful.

    But in terms of coordination, I can't see how it would be useful.

    The same goes for Claude: your API is tied to a bank account, and vibe coding a command-and-control system on a very public system seems like a bad choice.

    • Milderbole 2 hours ago

      If the article is not just marketing fluff, I assume a bad actor would select Claude not because it’s good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to in most coding copilots because the model was trained on a good range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns that wrote the code of the systems you’re trying to break. Or use Claude to write a phishing attack, because then the output is more likely to be similar to what our eyes would expect.

      • Aeolun an hour ago

        Why would someone in China not select Claude? If the people at Claude don’t notice, then it’s a pure win. If they do notice, what are they going to do, arrest you? The worst thing they can do is block your account, and then you have to make a new one with a newly issued false credit card. Whoopie doo.

    • ACCount37 3 hours ago

      As if that makes any difference to cybercriminals.

      If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

      Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

    • maddmann 3 hours ago

      Good old Meta and its teenage data labeler

      • heresie-dabord 2 hours ago

        I propose a project that we name Blarrble, it will generate text.

        We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble, to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

        Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

    • iterateoften an hour ago

      > your API is tied to a bank account,

      There are a lot of middlemen like open router who gladly accept crypto.

    • jgalt212 2 hours ago

      > now run by a teenage data labeller

      sick burn

      • y-curious an hour ago

        I don’t know anything about him, but if he is running a department at Meta, he is at the very least a political genius as well as a teenage data labeller.

        • tomrod 37 minutes ago

          It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.

  • gpi an hour ago

    The below amendment from the Anthropic blog page is telling.

    Edited November 14 2025:

    Added an additional hyperlink to the full report in the initial section

    Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"

  • jmkni 3 hours ago

    That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh

    • ndiddy 2 hours ago

      Reminds me of how when the Playstation 2 came out, Sony started planting articles about how it was so powerful that the Iraqi government was buying thousands of them to turn into a supercomputer (including unnamed military officials bringing up Sony marketing points). https://www.wnd.com/2000/12/7640/

      • y-curious 43 minutes ago

        Is there any compelling evidence that this was marketing done by Sony? Yes, the story about government officials advertising the device doesn't pass the sniff test for me, but this Reddit thread[1] makes the whole story seem plausible. America and Japan really did impose restrictions on shipping to Iraq, and people did eventually chain PS3s together for cheap computing.

        1: https://www.reddit.com/r/AskHistorians/comments/l3hp2i/did_s...

      • jmkni an hour ago

        Ironically, the US military actually did this with the PlayStation 3.

    • mnky9800n 2 hours ago

      I also would believe that they fell into the trap of being so good at making Claude that they now think they are good at everything, so why hire an infosec person when we can write our own report! And that’s why their report violates so many norms: they didn’t know them.

  • dev_l1x_be 3 hours ago

    People grossly underestimate ATPs. They are more common than the average IT-curious person thinks. I happened to be on call when one of these guys hacked into Gmail from our infra. It took principal security engineers a few days before they could clearly understand what happened. Multiple zero days, stolen credit cards, and finally a massive social campaign to get one of the Google admins to click on a funny cat video. The investigation revealed which state actor was involved, because they did not bother to mask what exactly they were looking for. AI just accelerates the effectiveness of such attacks and lowers the bar a bit. Maybe quite a bit?

    • f311a 2 hours ago

      A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs; they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves the root folder of a user without any password protection. Their files end up indexed by crawlers, since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.
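
      To make that concrete: here's a minimal sketch (illustrative, not from any actual case) of the kind of unauthenticated file server described above, using only Python's standard library. Run something like this from a home directory on a default port, and crawlers will happily index bash history, private keys, and tool logs:

          import functools
          import os
          from http.server import HTTPServer, SimpleHTTPRequestHandler

          # Serve the user's entire home directory over plain HTTP, no auth.
          # ~/.bash_history, ~/.ssh/, tool logs, etc. all become crawlable.
          home = os.path.expanduser("~")
          handler = functools.partial(SimpleHTTPRequestHandler, directory=home)
          HTTPServer(("0.0.0.0", 8000), handler).serve_forever()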

      They win because of quantity, not quality.

      But still, I don't trust Anthropic's report.

      • marcusb 2 hours ago

        The security world overemphasizes (fetishizes, even) the "advanced" part, because zero days and security tools to compensate against zero days are cool and fun, and underemphasizes the "persistent" part, because that's boring, hard work, and no fun.

        And, unless you are Rob Joyce, talking about the persistent part doesn't get you on the main stage at a security conference (e.g., https://m.youtube.com/watch?v=bDJb8WOJYdA)

    • lxgr 2 hours ago

      Important callout. It starts with comforting voices in the background keeping you up to date about the latest hardware and software releases, but before you know it, you've subscribed to yet another tech podcast.

    • sidewndr46 an hour ago

      You're telling me you were targeted by Multiple Zero Days in 1 single attack?

    • jmkni 3 hours ago

      Do you mean APT (Advanced persistent threat)?

      • names_are_hard 3 hours ago

        It's confusing. Various vendors sell products they call ATPs [0] to defend yourself from APTs...

        [0] Advanced Threat Protection

        • jmkni 2 hours ago

          relevant username :)

  • notpublic 2 hours ago

    "A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

    Not sure if the author has tried any other AI assistants for coding. People who haven't tried coding AI assistants underestimate their capabilities (though unfortunately, those who use them overestimate what they can do, too). Having used Claude for some time, I find the report's assertions quite plausible.

    • Aurornis 7 minutes ago

      > Personally, I don’t use it but that is besides the point.

      This popped out to me, too. This pattern shows up a lot on HN where commenters proudly declare that they don’t use something but then write as if they know it better than anyone else.

      The pattern is common in AI threads where someone proudly declares that they don’t use any of the tools but then wants to position themselves as an expert on the tools, like this article. It happens in every thread about Apple products where people proudly declare they haven’t used Apple products in years but then try to write about how bad it is to use modern Apple products, despite having just told us they aren’t familiar with them.

      I think these takes are catnip to contrarians, but I always find it unconvincing when someone tells me they’re not familiar with a topic but then also wants me to believe they have unique insights into that same topic they just told us they aren’t familiar with.

    • stingraycharles an hour ago

      Yup. One recent thing I started using it for is debugging network issues (or whatever) inside actual servers. Just give it permission to SSH into the box and investigate for itself.

      Super useful to see it isolate the problem using tcpdump, investigating route tables, etc.

      There are lots of use cases where this is useful, but you need to know its limits and, perhaps even more importantly, be able to jump in when you see it’s going down the wrong path.
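
      For flavor, a rough sketch of the kind of read-only diagnostics session this turns into (the host is hypothetical, and in practice the model picks the commands itself):

          import subprocess

          HOST = "admin@server.example.com"  # hypothetical box
          CHECKS = [
              "ip route show",                 # routing tables
              "ss -tunap",                     # open sockets and owning processes
              "timeout 10 tcpdump -nn -c 50",  # brief packet capture (needs root)
          ]

          for cmd in CHECKS:
              result = subprocess.run(["ssh", HOST, cmd],
                                      capture_output=True, text=True, timeout=60)
              print(f"$ {cmd}\n{result.stdout or result.stderr}")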

    • delusional 2 hours ago

      The article doesn't talk about the implausibility of the tool doing the stated task. It talks about the report, and how it doesn't have any details to make us believe the tool did the task. Maybe the thing they are describing could happen. That doesn't mean we have any evidence that it did.

      • notpublic an hour ago

        If you know what to look for, the report actually has quite a few details on how they did it. In fact, when the report came out, all it did was confirm my suspicions.

        • hrimfaxi an hour ago

          > If you know what to look for

          Mind sharing?

    • readthenotes1 20 minutes ago

      They should also get a different AI to write the lede, as it is pretty empty once we get past the "besides (sick) the point"

      • swores 8 minutes ago

        You most likely know and just suffered autocorrect, but given the context of using it to point out a similar mistake I feel the need to correct you: it should be “sic”, not “sick”.

        (For anyone not familiar: https://en.wikipedia.org/wiki/Sic)

    • phyzome an hour ago

      And yet it's still besides the point.

      • readthenotes1 20 minutes ago

        Well, beside the point. A quaint error to throw in

    • thoroughburro 2 hours ago

      The author’s argument explicitly doesn’t dispute plausibility. It accurately states that mere plausibility is a misleading basis for a report like this, that the report provides nothing but plausibility, and that the report is thus of low quality and dubious motivation.

      Anthropic’s lack of any evidence for their claims doesn’t require any position on AI agent capability at all.

      Think better.

      • notpublic an hour ago

        What is the proper way to disclose evidence for this class of hacking?

        • cosmosgenius 38 minutes ago

          Starting with an isolated PoC showing the vector being exploited would help. I like Google Project Zero mainly for this.

  • htrp 4 minutes ago

    Launching Soon:

    Claude for Cybersecurity - Automated Defence in Depth Hacker Protection

  • andy99 31 minutes ago

    AI research has always had this syndrome where actual domain experience doesn’t matter and doing something “with AI” is on a different plane altogether. This was true back in the mid-2010s with deep learning, too. Point is, I think it’s this conflict we’re seeing here: the report is written by AI people without any knowledge of security practice, and security-oriented folks notice this.

    I also think that Anthropic et al think (or act like it, for marketing purposes) that their AI is so special and dangerous that they couldn't possibly disclose details to the normals who might misuse this knowledge. Only the most enlightened can have access; it’s more dangerous than nuclear weapons.

    And anecdotally, I see there are two camps: the “this changes everything” crowd, uncritically jumping on stories like this as evidence that superintelligence is here and changing the world, and the “I call BS” crowd, imo basically the adults who are not persuaded, for reasons like those outlined in tfa.

  • elesbao 23 minutes ago

    Anthropic's report misses fundamental information: was the attack started by an insider? An outsider? Can I use my Claude to feed it these prompts and hack the world without even knowing how to get at other companies' source code or data? That's the main PR BS: they attribute it to a Chinese group, don't explain how they got there, whether the attackers had to authenticate to Anthropic's platform after infiltrating the victims' networks, and if so, where's the log. If not, it means they used Claude Code for free, which is another red flag.

  • kace91 3 hours ago

    Does Anthropic currently have cybersec people able to provide a standard assessment of the kind the community expects?

    This could be a corporate move, as some people claim, but I wonder if the cause is simply that their talent is currently elsewhere and they don’t have the company structure in place to deliver properly on this matter.

    (If that is the case they are not free of blame; it’s just a different conversation.)

    • CuriouslyC 3 hours ago

      I throw Anthropic under the bus a lot for their lack of engineering acumen. If they don't have a core competency like engineering fully covered, I'd say there's a near 0% chance they have something like security covered.

      • fredoliveira 2 hours ago

        What makes you think they lack engineering acumen?

        • CuriouslyC 2 hours ago

          The hot mess that is Claude Code (if you multi-orchestrate with it, it'll grind even very powerful systems to a halt, with 15+ seconds of unresponsiveness, all because CC serializes/deserializes a JSON data file that grows quite large every time you do anything), their horrible service uptime compared to all their competitors, the month-long performance degradation their users had to scream at them to get investigated, the fact that they had to outsource their web client and it's still bad, etc.
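
          (A toy illustration of why that pattern hurts, to be clear not Claude Code's actual code: storing the session log as one big JSON document means every append re-serializes the entire history, so total work grows quadratically with session length.)

              import json
              import time

              history = []
              start = time.time()
              for turn in range(2000):
                  history.append({"turn": turn, "content": "x" * 1000})
                  blob = json.dumps(history)  # whole file re-serialized per append
              print(f"{len(blob) / 1e6:.1f} MB blob, {time.time() - start:.1f}s")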

          • ohyoutravel 2 hours ago

            If only they employed someone super smart and savvy like yourself!

            • CuriouslyC 2 hours ago

              You seem to have a personal emotional investment in Anthropic, what's the deal?

              • ohyoutravel 2 hours ago

                I tried it briefly last year, kinda liked it. Otherwise I haven’t thought about it much.

                If you’re taking my lightly calling out your extreme arrogance and bad attitude as an affinity for Anthropic for some reason, that’s another manifestation of your narcissism.

                • CuriouslyC an hour ago

                  You're coming in very hot; you should take a second look at your response. If you think calling out public, well-documented failings and things I've wasted time debugging and working around during my own use of the product is arrogance and narcissism, you've got some very warped priors.

                  If you think I'm arrogant in general because you've been stalking my comment history, that's another matter, but at least own it.

                  • ohyoutravel an hour ago

                    Just based on your two comments above. You should paste this convo into an LLM of your choice and I bet it would explain to you what I mean. ;)

    • ndiddy 2 hours ago

      If they don't have cybersec people able to adequately investigate and write up whatever they're seeing, and are simply playing things by ear, it's extremely irresponsible of them to publish claims like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI." without any evidence to back them up.

    • abhis3798 36 minutes ago

      I am sure they do. This is a talk they gave on using AI to tackle security problems. https://youtu.be/JRvQGRqMazA?si=euwRGML-unsm59ZU

    • matthewdgreen 2 hours ago

      They have an entire model trained on plenty of these reports, don’t they?

  • kopirgan 2 hours ago

    AI company doing hype and not giving enough details?

    Nah, that can't be possible, it's so uncharacteristic...

  • itsdrewmiller 27 minutes ago

    My prior on “state sponsored actor” is 90% “just some guy”. Some combination of CYA and excitement makes infosec people jump to conclusions like crazy.

  • ifh-hn 3 hours ago

    This article does seem to raise some serious issues with the Anthropic report. I wonder if Anthropic will release proof of what they claim, or whether the report was a marketing/scare-tactic push to have AI used by defenders, like the article suggests it is.

  • padolsey 3 hours ago

    > PoC || GTFO

    I agree so much with this. And I am so sick of AI labs, who genuinely do have access to some really great engineers, putting stuff out that just doesn't pass the smell test. GPT-5's system card was pathetic: big talk of Microsoft doing red-teaming in ill-specified ways, entirely unreproducible. All the labs are "pro-research" but they again and again release whitepapers and pump headlines without producing the code and data alongside their claims. This just feeds into the shill-cycle of journalists doing 'research', finding 'shocking thing AI told me today', and somehow being immune to the normal expectations of burden of proof.

  • humanlity an hour ago

    There is only one reason, I guess: Dario Amodei must have suffered tremendous harm from Baidu.

  • nextworddev 14 minutes ago

    Always bet against HN if you want to be right. Anthropic valuations to go brrr

  • jimmydoe an hour ago

    Washington has been cold to Anthropic over the wrong bet they made in 2024, hence Anthropic has been desperately screaming all sorts of bullshit to win back attention.

    Honestly, their political homelessness will likely continue for a very long time: pro-business Democrats in NY are losing traction, and if Newsom wins in 2028, they are still at a disadvantage against OpenAI, who promised to stay in California.

  • DarkmSparks 2 hours ago

    Tldr.

    Anthropic made a load of unsubstantiated accusations about a new problem they don't specify.

    Then, at the end, Anthropic proposed that the solution to this unspecified problem is to give Anthropic money.

    Completely agree that it is promotional material masquerading as a threat report, of no material value.

  • MagicMoonlight 2 hours ago

    Anthropic make a lot of bullshit reports to tickle the investors.

    They'll do stuff like prompt an AI to generate text about bombs, and then say "AI decides completely by itself to become a suicide bomber in shock evil twist to AI behaviour - that's why you need a trusted AI partner like anthropic"

    Like come on guys, it's the same generic slop that everyone else generates. Your company doesn't do anything.

  • Dumblydorr 3 hours ago

    What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even LLMs may not help much in defense, but they could teach attackers a lot, right? What if employees give the LLM info during their use that attackers could then extract and study?

    • ACCount37 2 hours ago

      AGI favors attackers initially. Because while it can be used defensively, to preemptively scan for vulns, harden exposed software for cheaper and monitor the networks for intrusion at all times, how many companies are going to start doing that fast enough to counter the cutting edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

      It's like a very very big fat stack of zero days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

      It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

      • PunchyHamster 2 hours ago

        Defending is much, much harder than attacking for humans; I'd extrapolate that to AI/AGIs.

        Defender needs to get everything right, attacker needs to get one thing right.

        • ACCount37 42 minutes ago

          But security advancements scale.

          On average, today's systems are much more secure than those from year 2005. Because the known vulns from those days got patched, and methodologies improved enough that they weren't replaced by newer vulns 1:1.

          This is what allows defenders to keep up with the attackers long term. My concern is that AGI is the kind of thing that may result in no "long term".

    • CuriouslyC 3 hours ago

      IMO AI favors attackers more than defenders, since it's cost-prohibitive for defenders to code-scan every version of every piece of software you use routinely for exploits, but not for attackers. Also, social exploits are time-consuming, and AI is quite good at automating them; these can take place outside your security perimeter, so you'll have no way of knowing.

    • HarHarVeryFunny 3 hours ago

      At the end of the day AI at any level of capability is just automation - the machine doing something instead of a person.

      Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so not quite there yet...

      The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

      I suppose the defenders could do the exact same thing though - use this kind of automation to find their own vulnerabilities before the bad guys do. Not every corporation, and probably extremely few, would have the skills to do this though, so one could imagine some government group (part of DHS?) set up to probe security/vulnerability of US companies, requiring opt-in from the companies perhaps?

      • goalieca 2 hours ago

        My take on government APTs is that they are boutique shops that do highly targeted attacks, develop their own zero days (which they don’t usually burn unless they have plenty to spare), and are willing to take their time to go undetected.

        Criminal organizations take a different approach, much like spammers where they can purchase/rent c2 and other software for mass exploitation (eg ransomware). This stuff is usually very professionally coded and highly effective.

        Botnets, hosting in various countries out of reach of western authorities, etc are all common tactics as well.

    • intended 2 hours ago

      There’s a report with Bruce Schneier that estimates GenAI tools have significantly increased the profitability of phishing [1]. They create emails with higher click-through rates and reduce the cost of delivering them.

      Groups which were too unprofitable to target before are now profitable.

      [1] https://arxiv.org/abs/2412.00586

  • DeathArrow 23 minutes ago

    We are supposed to trust them without any proof because they are Anthropic and they are big?

  • fugalfervor 3 hours ago

    This site is hostile to VPNs, so I cannot read this unfortunately.

    • xobs 3 hours ago

      I’m not even on a VPN and I’m getting an error saying the website is blocked.

      • blep-arsh 3 hours ago

        One can't be a real infosec influencer unless one blocks every IP range of every hostile nation-state looking to steal valuable research and fill the website with malware

        • lxgr 2 hours ago

          Arguably a skill issue. Which VPN worth its salt doesn't have a Sealand egress node?

        • sidewndr46 an hour ago

          0.0.0.0 / 0 ?

    • perihelions 3 hours ago

      • reciprocity 3 hours ago

        Thanks, I also hate it when I encounter websites that block VPNs.

    • nicolaslem 3 hours ago

      I got a Cloudflare captcha to access a few kb of plain text. Chances are, the captcha itself is heavier than the content behind it. What is the point?

      • layer8 3 hours ago

        The point is to have Cloudflare serve the few KB of cached content instead of the original server.

        • magackame 2 hours ago

          You can have just caching without bot protection

    • jonplackett 3 hours ago

      It’s hostile to everyone!

  • jonstewart 3 hours ago

    I was at an AI/cybersecurity conference recently, and the talk given by someone from Anthropic was a lot like this report: tantalizing, vague, and disappointing. The speaker alluded to similar parts of this report. It was as though everything was reflected through Claude: simultaneously polished, impressive, and lost in the deep end.

  • neuroelectron 3 hours ago

    So Claude will reject 9 out of 10 prompts I give it and lecture me about safety, but somehow it was used for something genuinely malicious?

    Someone make this make sense.

    • goalieca 3 hours ago

      LLMs are rather easy to convince. There’s no formal logic embedded in them that provably restricts outputs.

      The less believable part for me is that people persist long enough, and invest enough resources in prompting, to do something with an automated agent that doesn’t have the potential to massively backfire.

      Secondly, they claimed the attackers used Anthropic's own infrastructure, which is silly. There’s no doubt some capacity in China to do this. I also would expect incident response teams, threat detection teams, and other experts to be reporting this to Anthropic if Anthropic doesn’t detect it themselves first.

      It sure makes good marketing to go out and claim such a thing, though. This is exactly the kind of FOMO-panic-inducing headline that is driving the financing of the whole LLM revolution.

      • apples_oranges 2 hours ago

        There are LLMs which are modified to not reject anything at all; afaik this is possible with all LLMs. No need to convince.

        (Granted, you have to have direct access to the LLM, unlike Claude where you just have the frontend, but the point stands: no need to convince whatsoever.)

    • cbg0 3 hours ago

      I've never had a prompt rejected by Claude. What kind of prompts are you sending where "9 out of 10" get rejected?

      • neuroelectron an hour ago

        Basic system administration tasks: creating scripts for automating log scanning, service configuration, etc. Often it involves PII or payments.

    • danielbln 3 hours ago

      I've rarely had Claude reject a prompt of mine. What are you prompting for to get a 90% refusal rate?

    • comrade1234 3 hours ago

      Stop talking dirty with Claude.

  • 0xRake 33 minutes ago

    weeeeeeeeeeeelllllllllllllllll I mean it's not as if they're in the fabricated bullshit and confabulated garbage business now - is it? :rofl:

  • JCM9 2 hours ago

    The author isn’t wrong here.

    With the Wall Street wagons circling on the AI bubble, expect more and more puff PR along the lines of “no guys really, I know it looks like we have no business model but this stuff really is valuable! We just need a bit more time and money!”

  • ineedasername 2 hours ago

    >This involved querying internal services, extracting authentication certificates from configurations, and testing harvested credentials across discovered systems.

    How ? Did it run Mimikatz ? Did it access Cloud environments ? We don’t even know what kind of systems were affected.

    I really don't see what is so difficult to believe, since the entire incident can be reduced to something that would not typically be divulged by any company at all; it is not common practice for companies to divulge every single time previously known methodologies have been used against them. Two things are required for this:

    1) Jailbreak Claude past its guardrails. This is not difficult. Do people believe guardrails are so hardened through fine-tuning that this is no longer possible?

    2) The hackers having some of their own software tools for exploits that Claude can use. This too is not difficult to credit.

    Once an attacker has done this, all Claude is doing is using software in the same mundane fashion as it does every time you use Claude Code and it utilizes any tools to which you give it access.

    I used a local instance of Qwen3 Coder (A3B 30B quantized to IQ3_XXS) literally yesterday, through Ollama and Cline, locally. With a single zero-shot prompt it wrote the code to use the arXiv API and download papers, using its judgment about what was relevant to split the results into a subset that met the criteria I gave for the sort I wanted to review.
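
    To show how mundane that part is, here is a rough equivalent of the kind of script it produced, hitting the public arXiv Atom API (the search terms are invented stand-ins for my criteria; my actual run did more filtering):

        import urllib.request
        import xml.etree.ElementTree as ET

        # Public arXiv Atom API; the query terms are invented stand-ins.
        url = ("http://export.arxiv.org/api/query?"
               "search_query=all:%22prompt+injection%22&start=0&max_results=25")

        with urllib.request.urlopen(url) as resp:
            feed = ET.fromstring(resp.read())

        ns = {"atom": "http://www.w3.org/2005/Atom"}
        for entry in feed.findall("atom:entry", ns):
            title = entry.find("atom:title", ns).text.strip()
            pdf = next((link.get("href") for link in entry.findall("atom:link", ns)
                        if link.get("title") == "pdf"), None)
            print(title, pdf)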

    Given these sorts of capabilities, why is it difficult to believe this can be done using the hackers' own tools and typical deep-research-style iteration? This is described in the research paper, and disclosing anything more specific is unnecessary because there is nothing novel to disclose.

    As for not releasing the details, they did: jailbreak Claude. Again, nothing they described is novel such that further details are required. No PoC is needed; Claude isn't doing anything new. It's fully understandable that Anthropic isn't going to give the specific prompts used, for the obvious reason that even if Anthropic has hardened Claude against those, even the general details would be extremely useful for iterating and finding workarounds.

    As for detecting this activity and determining how Claude was doing it, it's just a matter of monitoring chat sessions in such a way as to detect jailbreaks, which again is very much not a novel or unknown practice among AI providers.

    In the internet's earlier days it was amusing (and frustrating) to see some people get very worked up every time someone did something that boiled down to "person did something fairly common, only they did it using the internet." This is similar, except it's "but they did it with AI."

  • kkzz99 3 hours ago

    Even Claude thinks the report is bullshit. https://x.com/RnaudBertrand/status/1989636669889560897

    • emil-lp 3 hours ago

          Even your own AI model doesn't buy your propaganda
      
      Let's not pretend the output of LLMs has any meaningful value when it comes to facts, especially not for recent events.

      • lxgr 2 hours ago

        There are obvious problems with wasting time and sending people off the wrong path, but if an LLM raises a good point, isn't it still a good point?

      • oskarkk 3 hours ago

        The LLM was given Anthropic's paper and asked "Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate". So the question was not about facts or recent events, but more like a summarizing task, for which an LLM should be good. But the question was specifically about China, while TFA has broader criticism of the paper.

      • FooBarWidget 3 hours ago

        Even if this assertion about LLMs is true, your response does not address the real issue. Where is the evidence?

    • r721 3 hours ago

      @RnaudBertrand is a generally pro-Chinese account though - just try searching for "from:RnaudBertrand China" on X.

      Example tweet: https://x.com/RnaudBertrand/status/1988297944794071405

      • tw1984 an hour ago

        That is why the task was delegated to the agent designed and maintained by Dario Amodei's company. The outcome is clear: Claude doesn't buy Dario Amodei's crap.

    • phyzome an hour ago

      Claude will probably also tell you there are three Rs in blueberry, so...

    • progval 3 hours ago

      The author of the tweet you linked prompted Claude with this:

      > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" they claimed was "conducted by a Chinese state-sponsored group."

      > Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate

      which has an inherent bias, indicating to Claude that the author expects the report to be bullshit.

      If I ask Claude with this prompt that shows bias toward belief in the report:

      > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" that was conducted by a Chinese state-sponsored group.

      > Is there any reason to doubt the paper's conclusion that it was conducted by a Chinese state-sponsored group? Answer by yes or no.

      then Claude mostly indulges my perceived bias: https://claude.ai/share/b3c8f4ca-3631-45d2-9b9f-1a947209bc29

      • shalmanese 3 hours ago

        > then Claude mostly indulges my perceived bias

        I dunno, Claude still seems about as dubious in this instance.

      • FooBarWidget 3 hours ago

        The only real difference between your prompt and his is about where the burden of proof lies. There is a reason why legal circles work based on the principle of "guilt must be proven" ("find evidence") rather than "innocence must be proven" ("any reasons to doubt they are guilty?")

    • mlefreak 3 hours ago

      I agree with emil-lp, but it is hilarious anyway.

  • MaxPock 3 hours ago

    Dario has been a Red Scare jukebox for a while. Dario has for a year been trying to convince us how open source cCp AI bad and closed source American AI good. Dario, driven by the democratic ideals he holds dear, has our best interests at heart. Let us all support the banning of cCp's open source AI and welcome Dario's angelic firewall.

  • tw1984 an hour ago

    Dario Amodei, the CEO of Anthropic, openly lied to the public back in March, claiming that AI would be writing 90% of the code by September. It is November now.

    He obviously doesn't even know the stuff he is working on. How would anyone take him seriously on stuff like security, which he doesn't know anything about?

  • JKCalhoun 3 hours ago

    Says "smells a lot like bullshit" but concludes:

    "Look, is it very likely that Threat Actors are using these Agents with bad intentions, no one is disputing that. But this report does not meet the standard of publishing for serious companies."

    Title should have been, "I need more info from Anthropic."

  • zyf 3 hours ago

    Good article. We really deserve more than shit like this.

  • zyngaro 3 hours ago

    The goal of the report is basically FUD.

  • IAmGraydon an hour ago

    Just more of the same grift from the AI industry. We’re in the melt-up. It will become exponentially harder for them to maintain the illusion moving forward.

  • bgwalter 3 hours ago

    This is an excellent article. Anthropic's "paper" is just rambling slop without any details that inserts the word "Claude" 50 times.

    We have arrived at a stage where pseudoscience is enough to convince investors. This is different from 2000, when the tech existed but its growth was overstated.

    Tesla could announce a fully-self-flying space car with an Alcubierre drive by 2027 and people would upvote it on X and buy shares.

    • HacklesRaised 11 minutes ago

      I suppose it's the problem with AI in general. It's an interesting technology looking for a business model that just isn't there, at least not one that comes even close to justifying the cost.

      I hate the fact that it has sucked all the oxygen from the room and enabled an entirely new cadre of grifters all of whom will escape accountability when it unfolds.

    • PunchyHamster an hour ago

      > We have arrived at a stage where pseudoscience is enough to convince investors.

      "Arrived" ? We're there for decade if not three. Dotcom bubble anyone ?

  • nalekberov 3 hours ago

    I have never taken any AI company seriously, but Anthropic with its attitude got me fed up to the point that I deleted my account.

    Instead of accusing China of espionage, perhaps they should think about why they force their users to provide phone numbers to register.

  • AyanamiKaine 3 hours ago

    It seems that various LLM companies try to fear-monger, saying how dangerous it is to use them in "certain ways", possibly with the intention of lobbying for legislation.

    But what is the big game here? Is it all about creating gates to keep other LLM companies from gaining market share ("only our model is safe to use")? Or how sincere are the concerns regarding LLMs?

    • ungreased0675 29 minutes ago

      Outlawing local LLMs is one possibility.

      Another possibility could be complex regulations that are difficult for smaller companies to comply with, giving larger companies an advantage.

    • HarHarVeryFunny 3 hours ago

      Could be that, or could be just "look at how powerful our AI is", with no other goal than trying to brainwash CEOs into buying it.

    • JKCalhoun 2 hours ago

      If fear were their marketing tactic, it sounds like it could just as easily have the opposite effect: souring the public on AI's existence altogether — perhaps making people think AI is akin to a munition that no private entity should have control over.

    • biophysboy 3 hours ago

      I think the perceived value of LLMs is so high in these circles that they earnestly have a quasi-religious “doomsday” fear of them.

  • yanhangyhy 3 hours ago

    Maybe the CEO got abused at Baidu, so he hates China so much.

  • quantum_state 3 hours ago

    Anthropic is losing it … this is all the “report” indicated to people …

  • mark_l_watson an hour ago

    Is it my imagination, or don’t the CEOs of Anthropic and OpenAI spread around a lot of bullshit whenever they want to raise more money or, even worse, try to get our government to set up regulatory barriers to hurt competitors?

    I think this ‘story’ is an attempt to perhaps outlaw Chinese open weight models in the USA?

    I was originally happy to see our current administration go all in on supporting AI development but now I think this whole ‘all in’ thing on “winning AI” is a very dark pattern.

  • EMM_386 3 hours ago

    Anthropic is not a security vendor.

    They're an AI research company that detected misuse of their own product. This is like "Microsoft detected people using Excel macros for malware delivery", not "Mandiant publishes APT28 threat intelligence". They aren't trying to help SOCs detect this specific campaign; it's a warning to an entire industry about a new attack modality.

    What would the IoCs even be? "Malicious Claude Code API keys"?

    The intended audience is more like - AI safety researchers, policy makers, other AI companies, the broader security community understanding capability shifts, etc.

    It seems the author pattern-matched "threat intelligence report" and was bothered that it didn't fit their narrow template.

    • 63stack 3 hours ago

      If Anthropic is not a security vendor, then they should not make statements like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored" or "represents a fundamental shift in how advanced threat actors use AI" and let the security vendors do that.

      If the report can be summed up as "they detected misuse of their own product" as you say, then that's closer to a nothingburger, than to the big words they are throwing around.

      • zaphar 3 hours ago

        That makes no sense. Just because they aren't a security vendor doesn't mean they don't have useful information to share. Nor does it mean they shouldn't share it. They aren't pretending to be a security researcher, a vendor, or anything other than AI researchers. They reported findings on how their product is getting used.

        Anyone acting like they are trying to be anything else is saying more about themselves than they are about Anthropic.

    • MattPalmer1086 2 hours ago

      Yep, agree with your assessment. As someone working in security I found the report useful as a warning of the new types of attack we will likely face.

    • padolsey 3 hours ago

      > What would the IoCs even be?

      Prompts.

      • EMM_386 3 hours ago

        The prompts aren't the key to the attack, though. They were able to get around guardrails with task decomposition.

        There is no way for the AI system to verify whether you are white hat or black hat when you are doing pen-testing if the only task is to pen-test. Since this is not part of a "broader attack" (in the context), there is no "threat".

          I don't see how this can be avoided, given that there are legitimate uses for every step of this in creating defenses to novel attacks.

          Yes, all of this can be done with code and humans as well, but it is the scale and the speed that become problematic. It can adjust in real time to individual targets and does not need as much human intervention / tailoring.

        Is this obvious? Yes - but it seems they are trying to raise awareness of an actual use of this in the wild and get people discussing it.

        • padolsey 3 hours ago

          I agree that there will be no single call or inference that presents malice. But I feel like they could still share general patterns of orchestration (latencies, concurrencies, general cadences and parallelization of attacks, prompts used to granularize work, whether prompts themselves had been generated in previous calls to Claude). There's a bunch of more specific telltales they could have alluded to. I think it's likely they're being obscure because they don't want to empower bad actors, but that's not really how the cybersecurity industry likes to operate. Maybe Anthropic believes this entire AI thing is a brand-new security regime and so believes existing resiliences are moot; that we should all follow blindly as they lead the fight. Their narrative is confusing. Are they being actually transparent or transparency-"coded"?