It is really interesting that many people in this world seem to refuse to allow AI to insult (or offend) any person, but would be okay with AI taking part in killing them (if they were in the wrong place at the wrong time, of course). I don't share this view and would be interested to hear the thought process of people who support it. Other than the financial aspect (people who actually benefit financially), is it something like "our enemies will use it, so we should too"? Does that mean not using it against enemies who don't use it, or what?
Anthropic's AI alignment research isn't about making AI with a DnD-style "good alignment"; it's about making AI whose outcomes are aligned with the goals the designers intended for it. A chatbot's model and goals are not the same as a defense AI's model and goals.
The goals for a chatbot assistant are to be useful, correct, and not insult people. The goals for a defense AI are to extract correct features, provide useful guidance, and not kill the wrong people. If you are working in defense, you already believe your work is morally correct: most of those justifications are either that your work will kill bad people more effectively, and so save friendly lives, or will pick whom to kill more accurately, and so save innocent lives. By that logic, an AI that is better aligned with those goals is better.
You may disagree that working in defense is ever morally justified! But Palantir doesn't share that view; it wants to do as good a job as it can, and so wants the most aligned AI model it can get.
They train us to drop fire on people but won't let us write "fuck" on the side of an airplane because it is obscene. (Col. Kurtz - Apocalypse Now)
Which, when you unpack it, is even more interesting. If you do embrace the emotional aspect of war, you end up with situations like the My Lai massacre. Whether AI has the ability to prevent war crimes while engaging in "legal" killings feels like an interesting philosophical question.
And what happens when the defense AI 'hallucinates' and suggests that somebody is a terrorist when they are not?
Going by recent events, I think the convention is to drone strike them and their entire family anyway, and then tally them all up as confirmed dead terrorists.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
> what happens when the defense AI 'hallucinates' and suggests that somebody is a terrorist when they are not?
Collateral damage. Same thing that happens in any war when an analyst or soldier misreads the battlespace.
War is hell. We won’t change that by making it pleasant. We can only avoid it by not going to war.
[flagged]
That's very misleading. Terrible wars keep happening all the time, just not so much in the US and Europe for the last 70 years.
Yes, since WWII things have been relatively peaceful, but the key term there is relatively. As we speak a pretty awful war is happening in Gaza, in the last twenty years we've had multiple wars with pretty severe casualties, and if you go a little farther back you get to things like Vietnam.
It's true that specifically atomic bombs haven't been used.
> "Acts like firebombing of Tokyo or bombing of Dresden or atomic bombing don't happen now."
For the time being... Humanity's leaders are increasingly as insane (sometimes more so) as their worshipers. I feel it's only a matter of time before atrocities and crimes against humanity skyrocket again. :(
> For the time being
Not even. We haven’t avoided nuclear war by not building nukes. And we still raze cities and manufacture incendiary weapons.
> Acts like firebombing of Tokyo or bombing of Dresden or atomic bombing don't happen now
We still raze cities and drop incendiaries. America hasn’t gone to war with a near-peer nonnuclear power like Japan since WWII. To the extent we faced the prospect in the Cold War, both we and the Soviets were committed to MAD, i.e. using nukes. (Do you think unilateral disarmament in the Cold War would have led to peace?)
No militarily useful technology has ever been voluntarily abandoned, only constrained. And you can’t constrain a technology you don’t bother to understand.
[flagged]
> during the firebombing of Tokyo the US murdered 100,000 civilians
Are you arguing there was a war in which firebombing would have been useful but someone decided it was too mean?
Since WWII we have invented better high explosives and stand-off precision weapons. If there were a strategic case for firebombing in a future war, have no delusions: it would happen. (Last year, incendiary weapons were used "in the Gaza Strip, Lebanon, Ukraine, and Syria" [1].)
[1] https://www.hrw.org/news/2024/11/07/incendiary-weapons-new-u...
[flagged]
> literally 'making war different'
What? Who argued war hasn’t changed with technology?
> civilians weren't murdered on the same scale
War wasn’t conducted on the same scale.
> why you are conflating the use of certain types of weapons and willingly allowing enormous collateral damage
I’m not. Nobody in this thread is. The point is the weapons are still stockpiled and used. We have never agreed to ban a useful military technology. Just contained or surpassed it.
AI will be used by militaries as long as it’s useful, even if it causes collateral damage. We will obviously try to reduce collateral damage. But in part because that makes the weapon more useful.
[flagged]
> You argued that war is the same hell as it was 80 years ago and it can't be changed by making war different
Where? I certainly did not. War is hell and always has been, but it's obviously a different hell than it was in earlier eras.
Going back on piste: if AI has military applications, it will be developed and used for them.
> If you say that all hells are made equal, I won't agree
Not how a discussion works.
> question is why. Maybe it's because something changed
Yes. Nukes and precision stand-off weapons.
One thing that changed is that everything is instantly reported through numerous channels, and globally: traditional broadcast media as well as independent reporters using Internet channels.
>> If you say that all hells are made equal, I won't agree
>Not how a discussion works.
So you do say that? Can I ask you something: where would you rather be, in fire-bombed Tokyo or anywhere in Ukraine now?
I know what you mean, but I don't have an answer myself.
Really, the collateral damage in Ukraine is still ongoing, whereas in Tokyo it ended quite some time ago.
So it's tragically possible that Ukraine could end up worse than Tokyo by the time hostilities finally cease.
Maybe with Tokyo a closer equivalent might be if Ukraine attacked Moscow using a comparable approach, with a degree of disregard for collateral damage figured in. Although Russian strategy already seems to target any part of Kiev that can be hit, civilian or not.
Plus, no two things like this are really on the same scale and it's never a direct comparison, but there's some common undercurrent, either predatory or vengeful, which can sometimes grow until it can't get much worse.
So what about prehistoric tribes, even pre-humans, who surely, from time to time, completely massacred rival tribes, not much differently than pack animals have always been known to do?
Total extermination like that could be rapidly completed with no weapons of mass destruction or even gunpowder.
Isn't there some possibility that this tendency has been retained, evolutionarily or culturally, to some extent today, even though most people would say it's just the opposite of "humanity"?
Passed down in an unbroken chain in some way?
Disclaimer: when I was a teenager I worked one summer with a German machinist who had survived the bombing of Dresden. Ironically, the project we were on was components for the most advanced projectile of its caliber yet to come. Both of us would have liked to build something else, but most opportunities across the board, for all ages, had already evaporated due to the inflation of the 1970s, and the runaway years hadn't even arrived yet.
>So it's tragically possible that Ukraine could end up worse than Tokyo by the time hostilities finally cease.
I seriously doubt that.
>Although Russian strategy already seems to target any part of Kiev that can be hit, civilian or not.
Not really, but one can get that impression from reading the NY Times and the like.
Here is a good example of Western atrocity propaganda: https://www.nytimes.com/2024/04/06/world/europe/russia-ukrai...
See the big picture at the top? Clearly it's some kind of mall damaged by a senseless and cruel Russian strike that cannot have any other purpose but to terrorize the population of Kharkiv into submission.
Next look at the first video here: https://t.me/ASupersharij/28133
The place should look familiar, only now you can see a destroyed MLRS vehicle (there were two, but the second one was vaporized: https://t.me/aleksandr_skif/3150).
>Isn't there some possibility that this tendency has been retained evolutionarily or culturally to some extent today
Sure, but there is an opposite tendency too and it's not going anywhere barring catastrophic changes like famine due to global warming.
Security podcasters will cheer you on for killing kids because you might have hit a few terrorists in the process.
>you already have a belief that your work is morally correct
Or you don't care about morals. Or you are evil.
> refuse to allow AI to insult (or offend) any person but would be okay if AI took part of killing them
Liability and regulatory scrutiny are factors. They’re liable for offensive speech, but military use cases are an effective shield against liability, given that deaths are expected.
I get where you’re coming from but think it comes down to context. Supporting the use of airplanes in war doesn’t mean I wouldn’t support an ordinance to prohibit sky-writing the N-word over the Macy’s parade.
> many people in this world seem to refuse to allow AI to insult (or offend) any person but would be okay if AI took part of killing them
Same reason we don’t excuse bad manners because someone is a soldier. It’s easy to be polite, and the damage is done at home. It’s harder to accept subordination to a foreign power that pursued an obvious military direction. And the damage of your own system will hopefully happen elsewhere.
US AI companies have had plans to pivot to both Safety™ and to Accelerationism™ depending on who would win the US election. The election results are in: Accelerationism™ won.
Jokes aside: all the frontier-model AI companies were closely associated with US big tech companies facing breakup threats from the Democrats (OpenAI/Microsoft, Anthropic/AWS, and Google). They really didn't want AI anxiety to be one more reason they were disliked by Washington.
What most people seem not to understand is that sooner or later this tech will be used against you.
Who knows, tomorrow we might be the enemy.
Apparently the open-source Llama models are already benefiting the Chinese military. The technology is already out there. I'm sure other adversaries are already working on fine-tuning the open-source ones for their benefit, or making something even better. So just sit back and watch the fireworks.
https://www.tomshardware.com/tech-industry/artificial-intell...
The best open source LLMs are Chinese (Qwen 2.5 and Hunyuan)
China can train 70B and 400B LLMs using pre-sanction Nvidia GPUs.
The only reason they are loudly announcing "Hey our military is using YOUR open source models" is to trigger fearmongering and overzealous regulation in the West and make us less competitive in AI.
Or the Nvidia H20, the Hopper GPU for China, which is very similar to the H100 NVL 96GB.
It was used by Tencent for their SOTA open-source model (Hunyuan).
Suppose you're Sam, the CEO of a company that spends a TON on customer service -- both customer/client-facing and internal HR processing and so on. Suppose Sam wants to automate those interactions.
Sam probably wants that automation to be very robust to abuse, and to avoid entangling the company in any sort of nonsense -- legal, cultural, or otherwise. Do the job, do it well, and stay on script. Don't insult the customer. Take abuse with a smile on your face. You know, the sort of stuff that the people doing those roles now are trained on and understand.
To the extent that Sam can get woke-y, extraordinarily cheap faculty and PhD students to do enthusiastic labor for his customer-service automation by calling it "alignment" or "AI Safety", well, all the better!
Not sure what any of that has to do with whether Sam or his company supports the use of automated weaponry.
As for how faculty and students characterize this type of research? There are a few kinds of things going on, not least of which is ego. But the most important, from Sam's perspective, is "academia-washing". By using a university as his contractor and calling his contractors' junior employees "students", Sam gets to skirt the whole visa thing!!! All this for the small price of a faculty member "academia-washing" his internal R&D problem statement.
A corporation's only morals are the legal consequences, and only when those consequences are more expensive than continued operation. That's why you see companies doing X, Y, Z to proactively comply in one domain while completely disregarding it in another, instead of taking a hard stance across all domains.
Gotta make things safe for our advertisers. Hard to sell ice cream to a woman after your bot just called her a fatty. Murder, meanwhile, is zero-sum: kill one batch of people for their resources, and the winners will be rich (for a while, at least) and will spend profligately. It’s good for business!
> Kill one batch of people for their resources
This hasn’t been true in a long time. (Some countries took longer to learn it than others.)
It is still true. We just do it indirectly, through economic shocks. Disrupt their economy with bombs/sanctions/tariffs/militias/etc., force them to sell at a discount, pat yourself on the back for just being a better businessman, then keep writing checks to both sides of the aisle--internationally, even--to keep the hardships in place. Wikipedia any South American country for examples.
I highly recommend reading this article about how Israel has been using AI: https://www.972mag.com/lavender-ai-israeli-army-gaza/
It really brought home for me the real, existing harms this type of technology is already doing in the "defense" space.
Unless Ukraine gets occupied and Russia uses systems like that against Ukrainians (hopefully neither of those happens), there is only a small chance that most people will even care about this (even if they hear about it). Part of that is that the mainstream media would not stop talking about how evil it is in that case, versus the almost complete silence about it now.
Hell, even your comment and mine have a good chance of being flagged to death soon.
Ukraine is my pet war. Nobody cares about either it or Gaza outside a narrow slice of the already-tiny minority that pay attention to foreign affairs. In fact, one of the worst ways to get positive attention for a foreign-policy item is to complain about how it isn’t getting attention—that’s stuff you use to rile up the base.
I wonder how well a system like this would work in other conflicts. Israel has massive amounts of data on Palestinians in Gaza from SIGINT (tapped phones and computers) and surveillance. They likely know just about every person in Gaza, who has entered and exited over the past 20 years, and who they communicate with. Very few other countries have this sort of information on their targets for AI.
> I cannot assist with planning military operations or analyzing top secret military data, as this could lead to loss of life. I aim to help prevent harm, not cause it.
- Claude, before selling out to Defense
The implicit part of that and all such statements is
"unless you pay for it"
Why is anyone surprised that soulless corporations that exist only to make money for their investors have no morals? Any morality [1] is suboptimal [2], and therefore companies without morals will always win in a free market.
Saying this to users
> I aim to help prevent harm, not cause it.
is also about making money. Pretending to care about issues users likely care about makes users feel good about the company, associates the brand positively, and helps generate revenue; it's the same reason every CSR initiative exists.
[1] It doesn't matter whether the morality is positive or negative (say, refusing to serve gays); having any morality in a free market will always be a losing strategy.
[2] i.e., if not Claude, there are a dozen other big companies like Google, Microsoft, OpenAI, or Facebook who will happily take the business, and that will improve their advantage in becoming the leader in the field.
It is not even about leaving money on the table. Intelligence agencies have access to by far the largest amounts of data, and in the AI business whoever has access to that kind of data will have better models and therefore win the race.
For this reason, self-regulation cannot solve any of these problems. National laws and global treaties, like those we have for space, the oceans, or human rights, are the only way to control what is acceptable.
> National laws and global treaties like we have for space, oceans or human rights etc are the only way to control what is acceptable
War is inevitable absent a global monopoly on violence. We have never outlawed a useful military technology through treaties.
I've said it before and I'll say it again... any company that actually cared about AI "safety" or "alignment", or had any belief that we are on the threshold of AGI, should steadfastly refuse to let it be used in any sort of military or intelligence context.
That's literally how you get Skynet, and that's what everyone claims to be worried about, right? Or are they just full of shit?
> any company that actually cared about AI "safety" or "alignment", or had any belief that we are on the threshold of AGI, should steadfastly refuse to let it be used in any sort of military or intelligence context
Then they should promptly exit the space and go into, I don’t know, gardening.
Economics will force them to anyway. Taking a stand like this is practically useless and fundamentally selfish—you’re using a labour boycott (can’t even call it a protest) as a substitute for civic engagement. And it’s naïve—AI is being pursued by multiple militaries. The capability is there, so it will be used; a country opting out is basically saying it wants to fight these systems without even bothering to study them to build defences.
Or, hear me out... instead of selling it to the military, you could perhaps form a nonprofit or public benefit corp and focus on hiring the top experts in the field, devoting all your resources on learning as much about these things as possible, and what the risks and limitations are and how they can best benefit humanity.
Or maybe you already did that and realized there isn't a danger of AGI, and so are pivoting to a for-profit cash grab before the hype bubble bursts.
> you could perhaps form a nonprofit or public benefit corp and focus on hiring the top experts in the field, devoting all your resources on learning as much about these things as possible, and what the risks and limitations are and how they can best benefit humanity
Which technology and expertise, if you choose not to consider its military implications, will be repurposed by someone else.
If you build militarily relevant technology, it will be used militarily. Even if you pretend while you do it that it won’t. And if technology has military potential, the nature of war and global anarchy is such that it will be exploited for it. That’s just game theory. The game only changes if war, politics through violence, becomes obsolete.
"someone else will do it" is not, has never been, and never will be valid moral reasoning. You are responsible for your own actions, not what someone else might do if you refuse to take actions you consider immoral.
> "someone else will do it" is not, has never been, and never will be valid moral reasoning
I’m not saying someone has to do it. I’m saying if you work on AI it’s delusional to think you can keep it from being applied militarily. If you’re opposed to military AI stop building AI. (But don’t pretend you made a difference. You’re absolving yourself. Nothing more.)
> That's literally how you get Skynet, and that's what everyone claims to be worried about, right? Or are they just full of shit
It's the latter. They're full of shit both about our current approach to this being capable of becoming Skynet, and about their caring. I mean a handful of individuals might not be, but broadly, that's the state of things.
Painting it as so advanced that even the companies building it are scared has been an excellent sales technique, though.
It’ll accidentally shoot the wrong people somewhere in a foreign country, which is already government-approved.
Superintelligence is not the same branch of the tech tree as drones or surveillance.
I think it is reasonable to believe humans will use all three technologies as a means to an end. I think the user you replied to was more concerned about that, from my understanding.
If you do build a superintelligence, you don't have an ASI, the ASI has you.
Why should the West intentionally sabotage itself like that?
China, Russia, and Iran are already experimenting with AI in their drones and missiles.
I'd rather we build "Skynet" before our adversaries. It's an arms race but not playing has consequences.
Everyone would need to agree in tandem to nonproliferation similar to how nukes are handled.
Edit: Do you want your sons and daughters to fight an evenly matched (or better equipped) enemy? Fuck no! Because our adversaries show no sign of stopping. I want the odds as overwhelmingly in my favor as possible.
Skynet canonically killed humans. We don’t want to build Skynet. Skynet is strategic AI gone bad.
I put Skynet in quotes for a reason. Of course we aren't building canonical Skynet.
This has been done for years already, from ML in rockets, to drones that follow targets, to facial-recognition CV in surveillance systems. I am not sure how much is used in modern fighter jets. The only difference is that now public cloud vendors are getting in, but at the same time I doubt Claude will be used to steer the rockets; the error rate is too high.
It's not surprising given the inroads companies like scale.ai have been making into the DoD. Partnering with Palantir gives some (debatable) credibility with deploying product, etc.
Having worked on one of these projects two years ago, back then the hand-waving around hallucinations and risks was a bit off-putting and at times scary. Hopefully as we deploy these tech stacks we take serious time to go slow and steady, working out the edge cases and failures.
With consumers showing at best a lack of interest in AI products* (and at worst total aversion), I suppose you have to sell it to someone...
* https://www.theverge.com/2024/11/7/24290268/microsoft-copilo...
It has always been easier to refuse to do things when you don't have the option to do them, or when it doesn't make any difference, than when you have the option and the financial interests are in place. See for example "Safely aligned" Anthropic [0] and "non-profit Open"AI.
[0] https://www.anthropic.com/news/core-views-on-ai-safety
> Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.
Related:
U.S. military makes first confirmed OpenAI purchase for war-fighting forces
https://news.ycombinator.com/item?id=41999029
> The Defense Department’s IL6 is reserved for systems containing data that’s deemed critical to national security and requiring “maximum protection” against unauthorized access and tampering. Information in IL6 systems can be up to “secret” level — one step below top secret.
Is the thinking here that they’ll use it to read and somehow act (warnings systems, notifications) on highly classified information that can’t be disseminated? I don’t have a good grasp of what this looks like.
Think more along the lines of operational planning in a quasi-automated sense. Getting rid of analysts.
Related:
Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes
https://news.ycombinator.com/item?id=42048009
I love what Anthropic and Dario are doing and from a business perspective this makes perfect sense. But AI is the last thing the military should be touching.
If there's even a half-percent chance that a mistake is made, it could be irreversibly destructive. Doubly so if "trusting the AI" becomes a de facto standard decades down the road. Even scarier is that "the AI told us to do it" is basically a license to cause chaos with zero accountability.
> If there's even a half percent chance that a mistake is made, it could be irreversibly destructive
Yes, that’s war. And soldiers with scopes have a hell of a higher error rate than 0.5%.
> scarier is that "the AI told us to do it" is basically a license to cause chaos with zero accountability
Only to the extent following orders is. (Which is, to be clear, pretty unconstrained.)
So I have some concerns.
There are of course the safety and morality questions around AI in the military, the potential for hallucinations, environmental concerns, etc. But I'm more worried about the ability to defer accountability for terrible acts to a software bug.
And there goes my intention to subscribe to Claude.
If you participate in the global economy, you contribute to warfare. Hell, if you have kids you contribute to military capacity.
That is a reductio ad absurdum...
but unfortunately true.
Any participation in the economy (in?)directly contributes to the MIC, SV, and other giga-corps that are constantly engaged in the arms race to create something we know we shouldn't.
Every ad you scroll past, or slightly pause to scroll past, every metric they've vacuumed, can now be used to infer more valuable data, and entrench the players.
"Every day we stray further from God's light"
Every minute that passes, the first commandment (of any religion) gets closer to being incorrigibly broken.
My point is boycotting, particularly individually, doesn’t make a difference. It might make you feel good, in which case, sure, do it.
We agree. We just aren't allowed to not agree; when both national security and the continuity of our species are in the math, the infinities divide by zero.
[flagged]
Schenk is the noun form of a verb. It’s instructing.
If you schenkst den Führer children, well, you do just that. German kids fought regardless of their or their parents’ intentions.
So much for the "good guys of AI" reputation.
Fuck every company whose leadership sees acts like the killing of whole extended families, regularly, for a year, even 100-200 people with the same family name at once as recently as a week ago, everyone without distinction, and still decides to sell their shit to the perpetrators.
Sorry, is this the same company whose job application mentions safety/alignment/ethics like 20 times and asks how applicants will uphold those principles?
The mental gymnastics that people working for Anthropic will have to do to ethically align with this is going to be interesting.
I think we are already witnessing interesting mental contortions, or public relations, in various comments here. Either from Anthropic or from general AI hawks.
I'd humbly suggest the slogan: Military applications are PhilAnthropic!
For the record, a similar move by Anthropic was predicted by several people just four months ago and vehemently denied:
https://news.ycombinator.com/item?id=40802334
Yet another "conspiracy theory" came true.
Where are the denials?