The irony is how quickly we shifted from "AI will help cure cancer and other diseases" to "AI will destroy and kill our enemies." What weird times to be alive!
> The irony is how quickly we shifted from "AI will help cure cancer and other diseases" to "AI will destroy and kill our enemies."
"We" have been talking about AI killing in the mainstream since (at least) the first Terminator movie in 1984. The geeks/nerds got there much earlier: Frank Herbert wrote about humans outsourcing their thinking and being 'enslaved' in Dune's Butlerian Jihad back-story in 1965, and Isaac Asimov's Three Laws of Robotics date from 1942.
Magical thinking is rarely constructive as an argument, but as a fig leaf, it might keep opposition talking for long enough to force through a fait accompli.
It’s almost as if both serve the same goal of self-preservation.
So weird right?
AI data centers will definitely be targeted in advanced warfare.
They already are first-level targets in modern cyber and kinetic warfare.
Quick reminder of the "Slaughterbots" video from 9(!) years ago, when the content was still sci-fi. Well, we're catching up...
https://www.youtube.com/watch?v=9fa9lVwHHqg
Some guy asked Claude what he should bomb in Iran, and Claude replied: the nuclear sites.
https://archive.ph/HELuu
Only a matter of time until the Department of War starts blaming AI for its errors. I predict it will soon replace "I don't remember that" as the standard excuse.
"You're right, my fears about potentially starting WW3, millions of innocent people being killed, and crashing the global economy were overblown... Now that I have all the details, I think your plan sounds wonderful! Should we go ahead with that military operation right away?"
I truly believe they are all psychopaths and the rest of it is theatre for the masses. A good guy vs bad guy narrative is easy to sell and distracts people from the fact that human life is not valued by any of them.
Agree. People pick sides, but all these despots suck. Innocent children and peasants pay the highest price.
> A good guy vs bad guy narrative is easy to sell and distracts people from the fact that human life is not valued by any of them.
Iran slaughtered 30k people in a matter of days for the crime of "protesting". No tears shed for the Mullahs here; IMHO Israel and the US are doing the world a service by finally cleaning up the last terrorist regime keeping the region in a constant state of aggression. Note that before and after Oct 7th, it was only Iranian-backed forces stirring shit (the Houthis, Hezbollah, Gaza's Hamas), while everyone else stayed put.
If you don’t think the current US administration would gladly slaughter 30k people in a matter of days for protesting if they thought they could get away with it, you’re not paying attention.
Meanwhile the US is lifting sanctions on Russian oil while Russia bombs Ukraine. Turns out peace and democracy are not what they care about; it's regional dominance for Israel.
What has blown my mind is how surprised people seem to be by all of this. It's as if they never imagined these people were capable of doing this... remember when it was "just jokes"?
Claude may have just bombed an elementary school, meanwhile Dario is whining that Altman and Trump, two well-known psychos, didn’t play fair for a military deal. Anthropic is the last bastion of the sanctimonious neolib and hopefully this war marks the end of that failing ideology.
https://news.ycombinator.com/item?id=47286420
Anthropic did the deal with Palantir and was begging the government to use their technology to “fight authoritarianism”, are you insinuating that they shouldn’t be held morally accountable for these business decisions?
Your moral outrage is misplaced and is clueless in the face of reality.
Blame Palantir if you want to vent; Dario is literally putting Anthropic's future at risk by not kowtowing to DoW. Also, when Anthropic and Palantir finalized their partnership in 2024, many Anthropic employees raised concerns, which the company addressed by holding AMA meetings.
Anthropic and Google (to a certain extent) are far better when it comes to principles in AI usage in the context of "Realpolitik" than OpenAI and xAI, both of whom have zero scruples, as personified by their CEOs.
Palantir partnership is at heart of Anthropic, Pentagon rift - https://www.semafor.com/article/02/17/2026/palantir-partners...
Palantir CEO’s rant about the Anthropic-Pentagon feud threatening his company was about a lot more than a dirty word - https://fortune.com/2026/03/05/palantir-ceo-alex-karp-anthro...
Anthropic-Palantir Partnership at Risk After Pentagon Ruling - https://archive.ph/EWmay#selection-993.0-993.60
It’s possible for other business leaders to be more evil than you and to still be on the wrong side of history. I’m not even saying Dario is ill-intentioned — just too propagandized to fully comprehend the moral implication of doing deals with war profiteers and throwing in with the U.S. empire.
You are making no sense.
Anthropic is selling a model, not applications using that model. The latter is not in their hands, but they have drawn two specific red lines and are willing to defy the mighty DoW over them.
If you think that no AI model should be allowed to be used by the military, then you are living in a clueless la-la land. There are perfectly justified military and law-enforcement uses of AI. What we can demand is human oversight and controls to ensure its lawful usage. Anthropic has done its part by drawing two red lines and cannot be expected to do more.
It is companies like Palantir, which build applications for warfare using Anthropic's (and others') models, enabling features like "shortening the kill chain" and "decision compression", that need major oversight.
False insinuation.
It is actually Palantir using Claude AI in its "Maven Smart System" for real-time battlefield analysis which is being used by the US Military.
More details at - https://news.ycombinator.com/item?id=47275936
Also see Palantir’s Double Conflict of Interest in the War Against Iran - https://bylinetimes.com/2026/03/05/palantirs-double-conflict...
But they did use Claude AI. Several commentators (e.g. Michael Burry) have claimed that Palantir could have difficulty switching AI engines easily.
Should we really buy the "many months of switching difficulty" argument? Surely the main API surface is an HTTP API like Chat Completions. If Palantir's integration follows the exact shape of Anthropic's API, the differences are minor; there are at most two API surfaces to adapt. And if the OpenAI model APIs are more flexible (especially with the new 1M-token context of GPT-5.4), adapting should pose little difficulty. Then there is LiteLLM and similar libraries that make it even easier; half of their tooling should be going through an abstraction layer like that anyway. Yes, it would need evals and prompt-engineering work to optimise, but they should be used to that by now. Presumably they could even clean-room fine-tune an OpenAI model to match the same Claude shape with low loss. So I don't buy it.
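The abstraction-layer point above can be sketched concretely. This is a hypothetical illustration, not real SDK code: the backend functions stand in for actual Anthropic/OpenAI API calls, and the names are invented for the example. The idea is that if all tooling routes through one `chat()` call site, switching providers is a config change, not a rewrite.

```python
from typing import Callable

Messages = list[dict]  # e.g. [{"role": "user", "content": "..."}]

def claude_backend(messages: Messages) -> str:
    # Stand-in: in production this would call Anthropic's Messages API.
    return "claude: " + messages[-1]["content"]

def openai_backend(messages: Messages) -> str:
    # Stand-in: in production this would call OpenAI's Chat Completions API.
    return "openai: " + messages[-1]["content"]

# One registry mapping a model string to a provider implementation.
BACKENDS: dict[str, Callable[[Messages], str]] = {
    "anthropic/claude": claude_backend,
    "openai/gpt": openai_backend,
}

def chat(model: str, messages: Messages) -> str:
    """Single call site for all tooling; the model string selects the provider."""
    return BACKENDS[model](messages)

# Swapping providers is a one-string change at every call site:
print(chat("anthropic/claude", [{"role": "user", "content": "hello"}]))
print(chat("openai/gpt", [{"role": "user", "content": "hello"}]))
```

Libraries like LiteLLM already provide exactly this shape (a single `completion()` call keyed on a provider-prefixed model string), which is why the "months of switching work" claim is suspicious at the API level; the real cost, as the reply below notes, is in model behaviour, not plumbing.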
It’s not the syntax of the API that’s the issue, it’s the behaviour and performance of the model. You can create code, images, and video with just about any model, but there are reasons people prefer Claude Code or Sora for particular tasks.
As pointed out in my links, they are using Palantir's solution, which Palantir has built around Claude AI (including custom agents, chatbots, etc.).
After Trump's tantrum with Anthropic, no doubt Palantir will be switching to OpenAI based models/agents/chatbots.
From the point of view of data analysis and inference, the models should be comparable, though Anthropic's AI predictions _might_ be better than OpenAI's (maybe the reason Palantir chose them in the first place).
In some sense, every American taxpayer helped bomb Iran. It's worth remembering that contributing to a crime also makes you an accomplice, and that funding violence is tantamount to performing it oneself.
Absolutely. That is the problem with war and capital punishment. I don't take sufficiently strong action (rebellion, etc) to do my utmost to stop it, and am therefore complicit and receive a fractional share of the moral bounty of every life snuffed out unjustly.
The fact that either exists is testimony to the banal evil that exists in us all.