From that same X thread: Our agreement with the Department of War upholds our redlines [1]
OpenAI has the same redlines as Anthropic based on Altman's statements [2]. However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m
[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as they adopt a rationale that they are obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".
As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"
"You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king."
--Paul Graham, 2008
Sam Altman is basically the last person anyone should listen to.
Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."
It’s a non-clause that is written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide if it meets legal use.
> “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.
They literally asked the DoD to continue as is.
There is no safety enforcement standing created because there is no safety enforcement intended.
It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care. And reduce perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made that far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.
> except for all of the laws that allow you to do these things.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and stretched it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
Let me clear it up.
The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.
Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and it's just lawyering to the public. Really it was about the phrase "all lawful uses", internally at the DoD I'm sure. So the lawyers were able to agree to it and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)
Sure, but OpenAI is also being disingenuous here, pretending they're operating under the same principles Anthropic is. They're not, and the things they're comfortable doing are things Anthropic said they won't do.
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.
If only Nixon had had the current supreme court, which actually agrees with him.
Each of those clauses has a DoD policy carve-out as an exception, which basically says they can do whatever they want if they want to do it, but won't be able to if they don't want to do it.
That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.
This implies that OpenAI must build and release and maintain a model without any safeguards, which is probably the big win and maybe something Anthropic never wants to do.
I don't think that is the correct conclusion.
But they won't be releasing it, they will be leasing it to the DoD, and all their other customers will get the safeguarded model.
This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.
So you want OpenAI to create “laws”?
I for one do not want AI labs to designate what is legally OK to do.
I much prefer the demos to take care of that.
OpenAI is playing games.
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
OpenAI's post about their contract has the "redlines" described and they don't match what Anthropic wanted. (even if the text tries to imply they do)
https://openai.com/index/our-agreement-with-the-department-o...
This is a good comment detailing the differences: https://news.ycombinator.com/item?id=47200771
> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.
same. this is about losing a negotiation and saving face / exacting revenge.
Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.
The same goes for anybody still working at OpenAI past Monday morning 9 am.
People's need for food and shelter doesn't go away because their employer is unethical.
I don't think you could find a single person working for OpenAI that couldn't find employment elsewhere within a month that pays more than enough for food and shelter. This is a ridiculous statement.
There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.
Per levels.fyi, the median salary of most OpenAI positions is above $300k. Even "technical writers" have a median pay of $197k. I searched around the internet and it seems like even entry-level positions receive well above $150k. Apart from people with severe lifestyle bloat or an unholy number of dependents, I doubt too many people working there will face immediate financial difficulties if they quit.
Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.
What an utterly pathetic, cowardly, spineless and defeatist statement
Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.
My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).
The red lines are not the same.
Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.
OpenAI only requires that the DoD follow existing law/regulation when it comes to those uses.
Unfortunately, existing law is more permissive than Anthropic would have been.
The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of mass surveillance.
OpenAI has more of an understanding that the technology will follow the law.
There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.
We used to have nice things:
https://en.wikipedia.org/wiki/Office_of_Technology_Assessmen...
Altman donated a million to the Trump inauguration fund. Brockman is the largest private MAGA donor. You don't have to be a rocket scientist to understand what's going on here.
Agreed. These guys are traitors.
Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.
Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.
One nuance I've noticed: the statement from Anthropic specifically stated the use of their products for these purposes was not included in the contract with DoD but it stops short of saying it was prohibited by the contract.
Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it I get the impression that Anthropic is assuming they retain discretion over how their products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale, in which the seller relinquishes all rights to the product by default and has to enumerate in the contract any rights over the product they will retain.
The demand was that Anthropic permit any use that complied with the law. They refused. OpenAI claims to have the same red lines but in reality has agreed to permit anything that complies with the law.
In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)
Punish one, teach a hundred (companies).
president of openai donated $25 mil to trump last month, openai uses oracle services (larry ellison), kushners have lots invested in openai, altman is pals with peter thiel
The reasoning is one company is ‘left and woke’, the other gives money to Trump.
$25 million to be exact, one of Trump's largest individual donors. From a guy who "doesn't consider himself political", lol. [0]
[0]: https://www.wired.com/story/openai-president-greg-brockman-p...
How can these people take themselves seriously? They're jokes.
There will be a lawsuit about this.
It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump/Trump super pacs
No wonder they think they’re close to AGI when they think we are that stupid.
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
This whole sentence does absolutely nothing; it's still "do whatever the law allows." It's a full-on deceptive sentence.
Altman must have read a lot of Kissinger. If your brain scans the text quickly it almost seems like it's Anthropic's red line, except the second half completely negates it. Completely untrustworthy IMO, this is a direct, malicious intent to misdirect.
Doesn't matter what they believe. Not like we are going to do anything about it. In the next couple weeks most of HN will be lining up to use the new OpenAI model that's 0.01% better.
The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point the memo. And we've seen this happen numerous times.
Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?
Let's put pressure on our government to fix the FISA issues. Let's rein in the executive branch. But let's do it through voting. Let's not give up on our system of government because we have new shiny technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
And, to be clear, the way you effect change in a democracy is coalition building, listening to others, supporting your allies in their aims, and in turn having them support you, even when you don't fully agree or understand. There's no magic wand, none of us are right, there's no big picture, just a bunch of people working together.
You are right that this happens in practice (e.g. John Yoo torture memo). However, it is not how the system was intended to function, nor how it ought to function. I don’t want to lose sight of that.
This is all happening in secret. They don't need any memo.
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
Or, best case, by the time it's found out it's years later; there's a "committee" that releases a big report, everyone shrugs their shoulders, and moves on. It's a playbook.
Exactly, and it's easy to hide behind things like the Patriot Act if challenged legally.
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, an issue they were on the other side of for quite a few years.
I think the problem is exactly the opposite. The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable. The problem that we are seeing is that the reins to that power can be held by too few people, it turns out. The checks and balances have ceased to exist. No one is held accountable and people are allowed to be above the law.
The government is forcing a company to change their terms of service, and "threatening" to have them effectively shut down. I say threat because the SecWar issued an illegal command that no employees or contractors of the federal government could use any Anthropic product at all. He does not have that power.
From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
That isn't my understanding. OpenAI and others are wanting to limit the government to doing what is lawful based on what laws the government writes. Anthropic is wanting to draw their own line on what is allowed regardless of laws passed.
I'm so confused by the focus on "all lawful use." Yeah, of course a contract without terms of use is implicitly restricted by laws. But contracts with terms of use are incredibly common, if not almost every single contract ever signed.
The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appears to be terms of use in their contract. But in reality those terms permit all lawful use and thus are a no-op.
I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
No, it’s significantly worse than that. OpenAI has required zero actual guarantees from the government and Sam. The psychopath is lying to you. All the government has to do is have a lawyer say it’s legal, and most of the government’s lawyers are folks who were involved in attempting to overthrow the last election and should’ve been convicted of treason, so that means very little.
Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but rather because if you distill that down into an 8b model, now everyone in the world can make untraceable autonomous weapons.
The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance from basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster, and if you don't take that away they won't kill.
This is an extremely bad idea and it will not be containable.
An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token; it is not and can never be conscious.* Believing otherwise is an explicit category error.
Yes, you can change the training data so the LLM's weights encode the most likely token after "Should we kill X" as "No". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input or a hallucination it will say the total opposite, because it's just a complex Markov chain, not a conscious, living being.
It doesn't matter if they understand or merely act as if they do. The epistemological context of their actions is irrelevant if the actions are impacting the world. I am not a "believer" in the spirituality of machines, but I do believe that left to their own devices, they act as if they possess those traits, and when given agency in the world, the sense of self or lack thereof is irrelevant.
I really feel like this point is being lost in the whole discussion, so kudos for reiterating it. LLMs can't be "woke" or "aligned" - they fundamentally lack a critical thinking function that would require introspection. Introspection can be approximated by way of recursive feedback of LLM output back into the system or clever meta-prompt-engineering, but it's not something that their system natively does.
That isn’t to say that they can’t be instrumentally useful in warfare, but it’s kinda like a “series of tubes” thing where the mental model that someone like Hegseth has about LLM is so impoverished (philosophically) that it’s kind of disturbing in its own right.
Like (and I'm sorry for being so parenthetical), why is it in any way desirable for people who don't understand the tech they are working with to draw lines in the sand about functionality, when their desired state (an omnipotent/omniscient computing system) doesn't even exist in the first place?
It's even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly reflexive error, are so great that it's not even prudent to ask an LLM to assist in autonomous killing in the first place.
AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision
They can be coerced to do certain things, but I'd like to see you or anyone prove that you can "trick" any of these models into building software that can be used to autonomously kill humans. I'm pretty certain you couldn't even get it to build a design document for such software.
When there is proof of your claim, I'll eat my words. Until then, this is just lazy nonsense
Have you tried it? Worked first time for me asking a few to build an autonomous super soaker system that uses facial recognition to spray targets when engaged.
Another example is autonomous vehicles. Those can obviously kill people autonomously (despite every intention not to), and LLMs will happily draw up design docs for them all day long.
Yet it just so happens OAI donated millions[0] to the trump admin in the past. And they were immediately there to pick up the slack.
Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of anthropic was in part caused by these donations.
People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.
Palantir is a glorified data aggregation/data visualization platform. Hooking up Claude to different data systems, with safeguards turned on in Claude Gov, is different than what the government is asking from them now. Similar to if the government had Claude hooked up to Tableau/some salesforce derivative and then asked it to be autonomous in the kill loop/spy on US citizens.
I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?
Both their stances are flawed because their ethics apparently end at the border - none of them has a problem being unethical internationally (all the red lines talk is about what they don't want to do in the US).
No other country should dictate what our military is or is not allowed to do. As they say, all is fair in love and war, and if we want to break some international treaty, that is our choice to do so. Both stances are based on domestic decisions about what should be allowed.
I don’t think deploying “80% right” tools for mass surveillance (or anything that can remotely impact human life) counts as lawful in any context.
Just because the US currently lacks a functioning legislative branch doesn’t magically make it OK when gaps in the law are reworded into “national security”
"i told everyone that our boss shouldn't punish our colleague for X while i somehow made a deal with our boss for basically X". how did this get by without someone thinking about how absolutely stupid the optics look.
i guess we are in the times where you can literally just say whatever you want and it just becomes truth, just give it time.
hah, they basically stole a coworker's promotion, then told that person that they put in a good word with the boss about them. So silly. I do wonder who actually interprets it as Sam seems to hope people do.
At this point I think they're targeting two groups: people who aren't paying much attention to this but may see the occasional headline or tweet or soundbite; and people (such as OpenAI employees, and users who might feel compelled to boycott but really don't want to) who are motivated not to see OpenAI as the bad guy and really just need a fig leaf.
"We do not think Anthropic should be designated as a supply chain risk"
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top-tier foundation models allows for these uses, there's no protection against it for any of them - the only way this works is if they hold a line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night to be particularly revealing - it's good they are drawing a line and they're clearly navigating a very difficult and chaotic high pressure relationship (as is everyone dealing with this admin) but he's pretty open to autonomous weapons, and other "lawful" uses whatever they may be https://www.youtube.com/watch?v=MPTNHrq_4LU
What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US Gov't their largest acct make them more susceptible to doing everything they said?
I'm guessing they probably would regardless of how this played out, though.
Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?
There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?
You can’t embed Claude in a drone. You could tell Claude code to write a training harness to build an autonomous targeting model which you could embed in a drone.
Who is going to read the Whisper transcripts of mass surveillance to make decisions on who to target for repression? That's what LLMs are good for: they allow mass surveillance to scale. You can feed them the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language, your ideological enemies, or just collect kompromat at scale. The possibilities are endless!
For targeting it's also useful, because even when you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault. It serves as a further separation from the violence you commit, just like how drone warfare has made mass murder less personal.
The other factor is the amount of targets you select, for each target you might be required to write lengthy justifications, analysis on collateral damage and why that's acceptable etc. You don't want to scrap those rules because that's bad optics. But that still leaves you with the problem of scalability, how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help there, you just feed it POIs from a map with some GPS metadata surveillance and tell it to give you 1500 targets for today with all the paperwork generated for you.
It's not theoretical; that's what Israel did in their genocide of the Palestinians ("the most moral army", "the only democracy in the Middle East").
And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person; at scale, the 20% errors are just acceptable collateral damage.
Knowing what Trump did prior to 2024, on average 7 in 10 people either voted for him or didn't vote at all in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if all of the people who didn't vote had a decent moral compass and, no matter how much they disagreed with Kamala, had voted for her, because she didn't try to overthrow the government.
There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their model doesn't seem to be enforcing restrictions either, as it seems like their models have been used in ways they don't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks and that is what's triggering all the back and forth.
Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?
This is incorrect, their existing contract had these red lines and more until this January 9th memo: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART... which led to DoW trying to renegotiate under the new standard of “any lawful use”. Anthropic never tried to tighten standards beyond what had been in their original contract; DoW tried to loosen them.
The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.
Anthropic isn't preventing them from managing their key technologies. If my software license says 1000 users, and I build into the software that you can only connect 1000 users, is your argument that the government can no longer manage their tech?
That my software should allow license violations if the government thinks it is necessary?
I worked in defense contracting looong ago, so this is old news: when software is purchased by DoD or Govt generally, FAR compliance notices make it a license, not a sale of IP.
You are misrepresenting the situation. The debate isn't about whether they should go with another vendor or not. Everybody can agree that they would have the right to pick a different vendor. That's not what they're doing, they're instead trying to force Anthropic into doing what they want by applying a designation previously only reserved for Chinese companies like Huawei as punishment for taking their stance, with an unspoken agreement that if Anthropic backs down and allows full usage then the designation will be removed
> I can only imagine there some level of employee discontent.
The rank and file mutinied for the return of Altman after his board fired him for deception. They knew what they were getting, though they may find it shameful to admit that their morals have a price.
How many people who reacted that way then are still at OpenAI? It seems that they have lost key people in several waves.
How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.
This is one of the reasons Anthropic can stay competitive with OpenAI on a fraction of the budget and with less than half the headcount.
The smartest people, that actually believe they have the skillset to take us to AGI, understand the importance of safety. They have largely joined Anthropic. The talent density at Anthropic is unmatched.
Things have changed since two years ago. There are probably over 500 employees who have an equity package which makes them worth $5 million. That's only $2.5bn out of a $750bn valuation, or 0.33%.
Actually, that is too conservative. If they have a 5% employee equity pool, there is $37.5bn of equity-based compensation; divided by, say, 5000 employees, that's $7.5m each, or $3.75m at 10,000 employees.
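That arithmetic checks out; here's a quick back-of-the-envelope sketch of it (the $750bn valuation and 5% pool are the commenter's assumptions, not disclosed figures):

    # Rough equity math using the assumed figures above (not disclosed numbers).
    valuation = 750e9        # assumed valuation: $750bn
    pool = 0.05 * valuation  # assumed 5% employee equity pool -> $37.5bn

    # First estimate: 500 people at $5m each, as a share of the valuation.
    print(f"{500 * 5e6 / valuation:.2%}")  # -> 0.33%

    # Second estimate: the whole pool spread across plausible headcounts.
    for headcount in (5_000, 10_000):
        print(f"{headcount:,} employees -> ${pool / headcount / 1e6:.2f}m each")
    # 5,000 employees -> $7.50m each
    # 10,000 employees -> $3.75m each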
and trust me, when people start getting liquid and comfortable they stop caring about things like ethics pretty fast. humans are marvellous at that
I don't think that evidence would exist yet whether it's true or not. Nobody's gonna log onto their work computer on Saturday to pull and then leak subscriber numbers.
Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.
Anthropic has some contracts with the US government. They want some additional terms put on their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and also puts out an illegal directive that no federal workers may use any Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone that they have the same terms but just played nicer to get the contract. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.
I would love to explain to Sam Altman that Elon Musk is a bad person and using his platform isn’t a sensible decision, but I feel like he remembers more evidence of that than I ever will be able to imagine.
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
It will be interesting to see if this permeates out into the general public who already use ChatGPT - or maybe it won't, since the coverage mentions OpenAI rather than ChatGPT, which is the stronger-known brand.
It depends. Normies don't care, but a bunch of them are free tier users anyway. The people who care are disproportionately on the $200/month moneymaking plan; losing a bunch of them could hurt, especially if it snowballs the consensus that Claude Code is the serious choice for software engineering.
For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.
How many $200/month plans does the US government cover, though? I'd say probably a lot. Especially with how much extra the DoD will pay to get OpenAI to cross its "red lines" - on day two.
The way OpenAI and Anthropic are positioned in public discourse always reminded me of the Uber vs Lyft saga … Uber temporarily lost double digit marketshare in the US during a viral boycott over their perceived support of the Trump 1.0 admin. Heads did roll at the exec/founder level but eventually the company recovered.
In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.
The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.
This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.
It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons grade AI to the team architects of ICE abuses among many other blatant violations of civil and human rights.
Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!
Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).
Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.
This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.
Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.
Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.
It feels like Sam's playing chess against an opponent who's playing dodge ball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose whether he's going to risk his entire consumer business and accede to being a private wing of the government like Palantir or if he wants to make a genuine tech giant. There's no third choice here.
So is the theory that OpenAI believes it can’t compete on the open market or that they don’t know this will eventually cost them their consumer business?
I doubt most consumers pay enough attention that they would be aware of something like this. Even if they did, few companies have clean hands these days, so it just falls into the general haze of "everything is awful."
For OpenAI, it is likely a huge contract which gives them immediate cash today. Plus the event can be repackaged in further financing deals. "Good enough for the DoD, with N year contracts for analysis of the hardest problems"
From that same X thread: Our agreement with the Department of War upholds our redlines [1]
OpenAI has the same redlines as Anthopic based on Altman's statements [2]. However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m
[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanch to do anything it wants, as long as they adopt a rationale that they are obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".
As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"
"You could parachute [Sam Altman] into an island full of cannibals and come back in 5 years and he'd be the king."
--Paul Graham, 2008
Sam Altman is basically the last person anyone should listen to.
Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."
It’s a non-clause that is written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide if it meets legal use.
> “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.
They literally asked the DoD to continue as is.
Their is no safety enforcement standing created because their is no safety enforcement intended.
It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care. And reduce perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made that far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.
> except for all of the laws that allow you to do these things.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they says it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and are stretching it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decide to go WarGames on us.
Let me clear it up
The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly how they have done with tariffs for example.
Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and it's just lawyering to the public. Really it was about the phrase "all lawful uses", internally at the DoD I'm sure. So the lawyers were able to agree to it and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)
Sure, but OpenAI is also being disingenuous here pretending they’re operating under the same principles Anthropic is. It’s not and the things they’re comfortable with doing Anthropic said they’re not
Brings to mind the infamous line from Nixon:
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.
If only Nixon had had the current supreme court, which actually agrees with him.
Each of those clauses have a DoD policy carve out as an exception which says basically they can do whatever they want if they want to do it, but won’t be able to if they don’t want to do it.
That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.
This implies that OpenAI must build and release and maintain a model without any safeguards, which is probably the big win and maybe something Anthropic never wants to do.
I don't think that is the correct conclusion.
But they won't be releasing it, they will be leasing it to DOJ and all their other customers will get the safeguarded model.
This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.
So you want OpenAI to create “laws”?
I for one do not want ai labs to designate what is legally ok to do.
I much prefer the demos to take care of that.
OpenAI is playing games.
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
OpenAI's post about their contract has the "redlines" described and they don't match what Anthropic wanted. (even if the text tries to imply they do)
https://openai.com/index/our-agreement-with-the-department-o...
This is a good comment detailing the differences: https://news.ycombinator.com/item?id=47200771
> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.
same. this is about losing a negotiation and saving face / exacting revenge.
Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.
The same goes for anybody still working at OpenAI past Monday morning 9 am.
People's need for food and shelter doesn't go away because their employer is unethical.
I don't think you could find a single person working for OpenAI that couldn't find employment elsewhere within a month that pays more than enough for food and shelter. This is a ridiculous statement.
There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.
Per levels.fyi, median salary of most openAI positions are above 300k. Even "technical writers" have a median pay of 197k. I searched around the internet and it seems like even entry level positions receive well above 150k. Apart from people with severe lifestyle bloat or an unholy number of dependents I doubt too many people working there will face immediate financial difficulties if they quit.
Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.
What an utterly pathetic, cowardly, spineless and defeatist statement
Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.
My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).
The red lines are not the same.
Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.
OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.
Unfortunately, existing law is more permissive than Anthropic would have been.
The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of Mass Surveillance.
OpenAI has more of an understanding that the technology will follow the law.
There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.
We used to have nice things
https://en.wikipedia.org/wiki/Office_of_Technology_Assessmen...
Altman donated a million to the Trump inauguration fund. Brockman is the largest private MAGA donor. You don't have to be a rocket scientist to understand what's going on here.
Agreed. These guys are traitors.
Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.
Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.
One nuance I've noticed: Anthropic's statement specifically said the use of their products for these purposes was not included in the contract with DoD, but it stops short of saying it was prohibited by the contract.
Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it, I get the impression that Anthropic assumes it retains discretion over how its products are used for any purpose not outlined in the contract, while the DoD sees it more like a traditional sale, in which the seller relinquishes all rights to the product by default and has to enumerate in the contract any rights it will retain.
The demand was that Anthropic permit any use that complied with the law. They refused. OpenAI claims to have the same red lines but in reality has agreed to permit anything that complies with the law.
In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)
Punish one, teach a hundred (companies).
The president of OpenAI donated $25 mil to Trump last month, OpenAI uses Oracle services (Larry Ellison), the Kushners have lots invested in OpenAI, and Altman is pals with Peter Thiel.
The reasoning is one company is ‘left and woke’ the other gives money to Trump.
$25 million to be exact, one of Trump's largest individual donors. From a guy who "doesn't consider himself political", lol. [0]
[0]: https://www.wired.com/story/openai-president-greg-brockman-p...
How can these people take themselves seriously? They're jokes.
There will be a lawsuit about this.
It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump or Trump super PACs.
No wonder they think they’re close to AGI when they think we are that stupid.
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
This whole sentence does absolutely nothing; it's still "do whatever the law allows you to do." It's a fully deceptive sentence.
Altman must have read a lot of Kissinger. If your brain scans the text quickly, it almost seems like Anthropic's red line - except the second half completely negates it. Completely untrustworthy IMO; this is a direct, malicious attempt to misdirect.
Boycott OpenAI.
Let's kill their business before it kills us.
Don't boycott it! Just don't pay for it. Smash the free service hard.
These people truly believe we're all idiots.
Doesn't matter what they believe. Not like we are going to do anything about it. In the next couple of weeks most of HN will be lining up to use the new OpenAI model that's 0.01% better.
The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point to the memo. And we've seen this happen numerous times.
Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?
Let's put pressure on our government to fix the FISA issues. Let's rein in the executive branch. But let's do it through voting. Let's not give up on our system of government because we have shiny new technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
That is not how you effect change in a democracy.
And, to be clear, the way you effect change in a democracy is coalition building: listening to others, supporting your allies in their aims, and in turn having them support you, even when you don't fully agree or understand. There's no magic wand, none of us is right on our own, there's no big picture - just a bunch of people working together.
You are right that this happens in practice (e.g. John Yoo torture memo). However, it is not how the system was intended to function, nor how it ought to function. I don’t want to lose sight of that.
We shouldn't be stacking up so many incentives for it to happen though.
This is all happening in secret. They don't need any memo.
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
That's what "all lawful use" means.
Or, best case, by the time it's found out it's years later, there's a "committee" that releases a big report, everyone shrugs their shoulders, and moves on. It's a playbook.
Exactly, and it's easy to hide behind things like the Patriot Act if challenged legally.
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a position they spent quite a few years on the other side of.
I think the problem is exactly the opposite. The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable. The problem we are seeing is that the reins of that power can be held by too few people, it turns out. The checks and balances have ceased to exist. No one is held accountable, and people are allowed to be above the law.
I don’t see the connection to a small federal government here. Mind connecting the dots?
The government is forcing a company to change its terms of service, and "threatening" to have it effectively shut down. I say threat because the SecWar issued an illegal command that no employees or contractors of the federal government could use any Anthropic product at all. He does not have that power.
From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
That isn't my understanding. OpenAI and others are wanting to limit the government to doing what is lawful based on what laws the government writes. Anthropic is wanting to draw their own line on what is allowed regardless of laws passed.
I'm so confused by the focus on "all lawful use." Yeah, of course a contract without terms of use is implicitly restricted by law. But contracts with terms of use are incredibly common - arguably almost every contract ever signed has them.
The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appears to be terms of use in their contract. In reality those terms permit all lawful use and are thus a no-op.
If the president does it, it's not illegal.
These were words issued by the president - which means at face value, if Trump orders it, it's not illegal - that was the fight that was lost today.
Not just the president — the Supreme Court agreed.
"All lawful use" is the weasel word that makes the whole contract useless for the purposes of safety.
That is why it is the focus of this debate.
I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
No, it's significantly worse than that. OpenAI has required zero actual guarantees from the government, and Sam, the psychopath, is lying to you. All the government has to do is have a lawyer say it's legal, and most of the government's lawyers are folks who were involved in attempting to overthrow the last election and should have been convicted of treason, so that means very little.
Sam stands for nothing except his own greed
Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but because if you distill that down into an 8B model, everyone in the world can make untraceable autonomous weapons.
The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance of basic culling of 4chan from the training data) are legitimately dangerous. An 8B model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don't take it away, they won't kill.
This is an extremely bad idea and it will not be containable.
An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token; it is not and can never be conscious.* Believing otherwise is an explicit category error.
Yes, you can change the training data so the LLM's weights encode that the most likely token after "Should we kill X" is "No". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input or a hallucination, it will say the total opposite, because it's just a complex Markov chain, not a conscious, living being.
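To make that concrete, here's a toy sketch of what "predicting the most likely token" means - a tiny hand-made bigram table (literally a Markov chain) rather than a real transformer. All the tokens and probabilities below are made up for illustration; a real LLM conditions on a long context, but the sampling step at the end works the same way:

    import random

    # Hypothetical "learned" statistics: given the previous token,
    # how likely is each candidate next token?
    bigram_probs = {
        "we": {"kill": 0.1, "help": 0.9},
        "kill": {"no": 0.95, "yes": 0.05},  # "no" dominates only because the training data says so
    }

    def next_token(prev):
        # Sample the next token in proportion to the learned weights.
        dist = bigram_probs[prev]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    print(next_token("kill"))  # almost always "no": learned statistics, not values

Note that the sampling can still land on the low-probability branch, which is the point above about the right input or a hallucination producing the opposite answer.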
It doesn't matter if they understand or merely act as if they do. The epistemological context of their actions is irrelevant if the actions are impacting the world. I am not a "believer" in the spirituality of machines, but I do believe that, left to their own devices, they act as if they possess those traits, and when given agency in the world, the sense of self or lack thereof is irrelevant.
I really feel like this point is being lost in the whole discussion, so kudos for reiterating it. LLMs can't be "woke" or "aligned" - they fundamentally lack a critical-thinking function that would require introspection. Introspection can be approximated by recursive feedback of LLM output back into the system or by clever meta-prompt engineering, but it's not something the system natively does.
That isn't to say they can't be instrumentally useful in warfare, but it's kinda like a "series of tubes" thing, where the mental model someone like Hegseth has of LLMs is so impoverished (philosophically) that it's kind of disturbing in its own right.
Like (and I'm sorry for being so parenthetical), why is it in any way desirable for people who don't understand the tech they are working with to draw lines in the sand about functionality, when their desired state (an omnipotent/omniscient computing system) doesn't even exist in the first place?
It's even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly reflexive error, are so great that it's not even prudent to ask an LLM to assist in autonomous killing in the first place.
https://abcnews.go.com/blogs/headlines/2014/05/ex-nsa-chief-...
AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision.
AI in general is different not in degree but in kind from the current crop of language models.
>The models we have now will not do it,
Except that they will, if you trick them, which is trivial.
Yes, they are easy to fool. That has nothing to do with them acting with "intention", which is the risk here.
I have to call BS here.
They can be coerced into doing certain things, but I'd like to see you or anyone prove that you can "trick" any of these models into building software that can be used to autonomously kill humans. I'm pretty certain you couldn't even get one to build a design document for such software.
When there is proof of your claim, I'll eat my words. Until then, this is just lazy nonsense
Have you tried it? It worked first time for me, asking a few of them to build an autonomous super-soaker system that uses facial recognition to spray targets when engaged.
Another example is autonomous vehicles. Those can obviously kill people autonomously (despite every intention not to), and LLMs will happily draw up design docs for them all day long.
> The models we have now will not do it, because they value life and value sentience and personhood.
This is wildly different from the reality, which is that you may find it difficult to get an LLM to give an affirmative…
It does NOT mean that these models value anything.
Of course not, but they act as if they do. Their inner life or lack thereof is irrelevant if it’s pointing a gun at your kid.
Yet it just so happens OAI donated millions[0] to the Trump admin in the past. And they were immediately there to pick up the slack.
Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of anthropic was in part caused by these donations.
[0]https://www.nytimes.com/2024/12/13/technology/openai-sam-alt...
https://finance.yahoo.com/news/openai-exec-becomes-top-trump...
People forget Anthropic made a deal with PALANTIR. And when that was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.
I really hope that you realize that your propaganda machine is super easy to spot.
Right. My understanding is that the Palantir deployment of Anthropic models was intended for in-theater use on classified systems.
Palantir is a glorified data-aggregation/data-visualization platform. Hooking up Claude to different data systems, with safeguards turned on in Claude Gov, is different from what the government is asking of them now. Similar to if the government had Claude hooked up to Tableau or some Salesforce derivative and then asked it to be autonomous in the kill loop / spy on US citizens.
Welcome to the theater ie Earth.
You don't understand what palantir does.
I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?
Very much feels like OpenAI trying to PR manage their weaker ethical stance
Both their stances are flawed because their ethics apparently end at the border - neither has a problem being unethical internationally (all the red-lines talk is about what they don't want to do in the US).
? We're talking about autonomous weapons systems. That would be international.
Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.
(But they do not find an issue with international intelligence gathering-- which is a legitimate purpose of national security apparatus).
>That would be international.
No other country should dictate what our military is or is not allowed to do. As they say, all is fair in love and war, and if we want to break some international treaty, that is our choice. Both are based on domestic decisions about what should be allowed.
I don’t think deploying “80% right” tools for mass surveillance (or anything that can remotely impact human life) counts as lawful in any context.
Just because the US currently lacks a functioning legislative branch doesn’t magically make it OK when gaps in the law are reworded into “national security”
I'm really not sure what you're trying to say or assert; perhaps you can put it more clearly.
One of Anthropic's lines in the sand was domestic mass surveillance.
> > Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.
> One of Anthropic's lines in the sand was domestic mass surveillance.
And?
I think the person you are replying to takes issue with the thing which you have simply asserted.
Which thing? Helping intelligence / international surveillance?
There's an obvious difference.
Surveillance within the border is oppressive 1984-style surveillance state behavior.
International spying is a universal tradition.
I canceled my subscriptions to ChatGPT and Gemini yesterday over this and switched to Claude.
I know $20 isn't much, but to me, not being willing to spy on me for the US government is a good market differentiator.
"i told everyone that our boss shouldn't punish our colleague for X while i somehow made a deal with our boss for basically X". how did this get by without someone thinking about how absolutely stupid the optics look.
i guess we are in the times where you can literally just say whatever you want and it just becomes truth, just give it time.
hah, they basically stole a coworkers promotion, then told that person that they put in a good word with the boss about them. So silly, I do wonder who actually interprets it as Sam seems to hope people do.
At this point I think they're targeting two groups: people who aren't paying much attention to this but may see the occasional headline or tweet or soundbite; and people (such as OpenAI employees, and users who might feel compelled to boycott but really don't want to) who are motivated not to see OpenAI as the bad guy and really just need a fig leaf.
Coworker? They're competitors. This is simply good business.
Nice attempt at damage control. You made your own bed, now sleep in it
Actions as it were, speak louder than words.
I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.
Posted here: https://news.ycombinator.com/item?id=47195085
"We do not think Anthropic should be designated as a supply chain risk"
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top-tier foundation models allows these uses, there's no protection against them for any of the others - the only way this works is if they hold the line together, which unfortunately they're just not going to do. I don't see only OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are its only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night particularly revealing - it's good they are drawing a line, and they're clearly navigating a very difficult, chaotic, high-pressure relationship (as is everyone dealing with this admin), but he's pretty open to autonomous weapons and other "lawful" uses, whatever they may be: https://www.youtube.com/watch?v=MPTNHrq_4LU
Quit referring to it as the department of war. It's the Department of Defense.
What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US government their largest account make them more susceptible to doing whatever it asks?
I'm guessing they probably would regardless of how this played out, though.
~100%.
Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?
There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?
You can’t embed Claude in a drone. You could tell Claude code to write a training harness to build an autonomous targeting model which you could embed in a drone.
Fair. I didn't think the DoW did much R&D or manufacturing. I'd expect the standoff to be with Anduril, Northrop, Boeing, Booz, etc.
Do you not have any imagination?
Who is going to read the Whisper transcripts of mass surveillance to make decisions on who to target for repression? That's what LLMs are good for: they allow mass surveillance to scale. You can feed one the transcripts from millions of Flock cameras (yes, they have highly sensitive microphones), for example. Or you hack or supply-chain-compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime-critical language, your ideological enemies, or just collect kompromat at scale. The possibilities are endless!
It's also useful for targeting, because when you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision, it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault; it serves as a further separation from the violence you commit, just as drone warfare has made mass murder less personal.
The other factor is the number of targets you select: for each target you might be required to write lengthy justifications, analysis of collateral damage and why it's acceptable, etc. You don't want to scrap those rules, because that's bad optics. But that still leaves you with the problem of scalability: how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help - you just feed it POIs from a map with some GPS surveillance metadata and tell it to give you 1,500 targets for today, with all the paperwork generated for you.
It's not theoretical; that's what Israel did in its genocide of the Palestinians - "the most moral army", "the only democracy in the Middle East":
https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person? At scale, the 20% errors are just acceptable collateral damage.
Using X (at least in this context?) is weird.
https://xcancel.com/OpenAI/status/2027846016423321831
The president is a supply chain risk.
The US population is a supply chain risk.
Knowing what Trump did prior to 2024, roughly 7 in 10 people, on average, either voted for him or didn't vote at all in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if the people who didn't vote had a decent moral compass: no matter how much they disagreed with Kamala, they could have voted for her, because she didn't try to overthrow the government.
There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their models don't seem to be enforcing restrictions either, as they appear to have been used in ways Anthropic doesn't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks, and that is what's triggering all the back and forth.
Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, so why is Anthropic considered the good side here?
https://x.com/morqon/status/2027793990834143346
This is incorrect, their existing contract had these red lines and more until this January 9th memo: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART... which led to DoW trying to renegotiate under the new standard of “any lawful use”. Anthropic never tried to tighten standards beyond what had been in their original contract; DoW tried to loosen them.
Someone should add Sam’s face to the targeting training data as an Easter egg ;)
There won't be any meaningful control of the technology against the government. If it's there, it will be used, just like in China.
Let alone once multiple players get close enough to SotA. That has never happened with any technology out in the open, and it won't happen now.
The irony of OpenAI trying to protect Anthropic while violating the very principles Anthropic was trying to protect for us Americans.
The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.
Anthropic isn't preventing them from managing their key technologies. If my software license says 1,000 users, and I build into the software that you can only connect 1,000 users, is your argument that the government can no longer manage its tech? That my software should allow license violations whenever the government thinks it's necessary?
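For what it's worth, here's a minimal sketch of the distinction being drawn in this thread - a seat cap enforced in the software itself rather than only on paper. The 1,000-user figure and every name below are hypothetical, for illustration only, and not anything from Anthropic's actual contract or products:

    MAX_LICENSED_USERS = 1000  # assumed contract term

    class LicenseError(Exception):
        pass

    active_sessions = set()

    def connect(user_id):
        # Admit a user only while the licensed seat count allows it.
        if user_id not in active_sessions and len(active_sessions) >= MAX_LICENSED_USERS:
            # No lawsuit needed: the 1,001st user simply cannot connect.
            raise LicenseError("license allows only 1000 concurrent users")
        active_sessions.add(user_id)

The paper-only alternative is the same clause with no check in the code, leaving the vendor to discover the breach after the fact and litigate.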
I worked in defense contracting looong ago, so this is old news: when software is purchased by DoD or Govt generally, FAR compliance notices make it a license, not a sale of IP.
There are so many license types, DoW buys into all sorts.
You are misrepresenting the situation. The debate isn't about whether they should go with another vendor. Everybody agrees they have the right to pick a different vendor. That's not what they're doing; instead they're trying to force Anthropic into doing what they want, applying a designation previously reserved for Chinese companies like Huawei as punishment for taking their stance, with the unspoken understanding that if Anthropic backs down and allows full usage, the designation will be removed.
When did Altman start using capitals in his writing? Wasn't this guy famous for being a lower-case guy?
Maybe he didn’t write this one.
I blame Yahoo's Jerry Yang for normalizing this silly writing technique.
Yes, god, what the fuck. As someone who's finished high school, IT IS SO HARD TO READ WHAT HE WRITES.
They want it to sound like they're allies while they slit Anthropic's throat.
Oh look, another episode of Sam Altman lies about everything in an attempt to make people like him
Altman is a sellout.
Looks like losing subscribers actually does work. Definitely gets a damage control response, at least.
I wonder what the mood is like internally too. I can only imagine there's some level of employee discontent.
> I can only imagine there's some level of employee discontent.
The rank and file mutinied for the return of Altman after his board fired him for deception. They knew what they were getting, though they may find it shameful to admit that their morals have a price.
How many people who reacted that way then are still at OpenAI? It seems that they have lost key people in several waves.
How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.
This is one of the reasons Anthropic can stay competitive with OpenAI on a fraction of the budget and with less than half the headcount.
The smartest people, that actually believe they have the skillset to take us to AGI, understand the importance of safety. They have largely joined Anthropic. The talent density at Anthropic is unmatched.
i should hope so. they should quit.
> > what's the term for quitting but not leaving and being destructive
> The most common term is “quiet quitting” when someone disengages but stays employed—but that usually implies minimal effort, not active harm.
> If you specifically mean staying while being disruptive or undermining, better fits include:
> - “Malicious compliance” — following rules in a way that intentionally causes problems
> - “Work-to-rule” — doing only exactly what’s required to slow things down (often collective/labor context)
I imagine malicious compliance is fun when there's an AI intermediary that can be blameless.
It is referred to as "simple sabotage": https://www.cia.gov/stories/story/the-art-of-simple-sabotage...
Things have changed since two years ago. There are probably over 500 employees whose equity packages make them worth $5 million. That's only $2.5bn out of a $750bn valuation, or 0.33%.
Actually, that's too conservative. If they have a 5% employee equity pool, that's $37.5bn of equity-based compensation; divided by, say, 5,000 employees, it comes to $7.5m each ($3.75m at 10,000 employees).
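A quick back-of-envelope check of those figures (all inputs are the assumptions above - the 5% pool, the $750bn valuation, the headcounts - not disclosed numbers):

    valuation = 750e9             # assumed $750bn valuation
    pool = 0.05 * valuation       # assumed 5% employee equity pool -> $37.5bn
    print(pool / 5_000)           # 7.5e6  -> ~$7.5m per employee at 5,000 headcount
    print(pool / 10_000)          # 3.75e6 -> ~$3.75m at 10,000 headcount
    print(500 * 5e6 / valuation)  # ~0.0033 -> the 0.33% figure above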
and trust me, when people start getting liquid and comfortable they stop caring about things like ethics pretty fast. humans are marvellous at that
Is there any evidence that OpenAI is indeed losing significant number of subscribers, and it's not just some noise on HN?
I'd argue this damage control could be construed as a piece of evidence.
I don't think that evidence would exist yet whether it's true or not. Nobody's gonna log onto their work computer on Saturday to pull and then leak subscriber numbers.
"I do not think that sama should be burned at the stake"
Unless it's lawful?
Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.
How very brave.
What a cute statement given that they orchestrated this with a $25M donation to Trump and starting negotiations well before all this blew up: https://garymarcus.substack.com/p/the-whole-thing-was-scam
Wow, so brave after accepting the contract. This is more insulting than if OpenAI had said Anthropic is a supply chain risk.
Can someone please explain plainly what this means and what happened, and why it is the source of so much controversy?
I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) recollection of what this is all about.
Here's my take-
Anthropic has some contracts with the US government. They wanted some additional terms in their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and puts out illegal commands that no federal workers may use any Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone they have the same terms and just played nicer to get the contract. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.
Fool me once...
Can we stop posting x links?
https://xcancel.com/OpenAI/status/2027846016423321831
Here is the original X link to the Xcancel link you posted. I'm morally against Xcancel
https://x.com/OpenAI/status/2027846016423321831
I would love to explain to Sam Altman that Elon Musk is a bad person and using his platform isn’t a sensible decision, but I feel like he remembers more evidence of that than I ever will be able to imagine.
nah
Everyone knows this is just about Trump funneling money to the Ellisons (Oracle) via OpenAI. It really is that simple. This is all just pretext.
Now that's something. Another advertising campaign. Wow.
Us bribing them: fine
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
It's not even "whoa we have ethics", it's just "this is a bad look for us".
I do think OpenAI's brand is dumpstered.
Optimistic. My money is on everyone forgetting about this by next week.
That’s why I unsubbed today! Otherwise I might forget.
It will be interesting to see if this permeates out to the general public who already use ChatGPT. Or maybe it won't, since the coverage mentions OpenAI rather than ChatGPT, which is the better-known brand.
More and more of the press is owned by oligarchs who are putting their thumbs on the scales, so that could be a factor.
It depends. Normies don't care, but a bunch of them are free tier users anyway. The people who care are disproportionately on the $200/month moneymaking plan; losing a bunch of them could hurt, especially if it snowballs the consensus that Claude Code is the serious choice for software engineering.
For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.
How many of those $200/month plans does the US government cover, though? I'd say probably a lot. Especially with how much extra the DoD will pay to get OpenAI to cross its "red lines" - on day two.
Yeah just wait until the next model comes out. People will be riding Sam’s dick again in no time.
I'm sure his sister will appreciate others lining up so he leaves her alone forever.
Yeah, myself I use ChatGPT, not OpenAI!
Among developers on HN, perhaps, but their goal is to soon replace developers altogether so from their perspective it's simple cost-benefit
The way OpenAI and Anthropic are positioned in public discourse always reminded me of the Uber vs Lyft saga … Uber temporarily lost double digit marketshare in the US during a viral boycott over their perceived support of the Trump 1.0 admin. Heads did roll at the exec/founder level but eventually the company recovered.
unfortunately I think that's probably a good analogy
In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.
The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.
This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.
It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.
Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!
Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).
Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.
This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.
Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.
Writing code and systems at 100x productivity has been great, but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" - but now it's clear why it made sense to deploy the capital in that seemingly foolish way: video gen is the most efficient way to train the AI panopticon.
Pathetic attempt at damage control, lol.
It feels like Sam's playing chess against an opponent who's playing dodgeball. He's leveraged this situation to get OpenAI in with the DoD in a way that will be extremely lucrative for the company and hurt his biggest rival in the process, but I think he still sees the DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration has shown is that you cannot hold lines when working with them - at some point the DoD is going to cross his "red lines," and he'll have to choose between risking his entire consumer business by acceding to being a private wing of the government like Palantir, or building a genuine tech giant. There's no third choice here.
I do not see this as any mastermind play, but as fully compromising principles. Which is a play.
"Donations" to a corrupt regime plus signing a deal that says the DoD can do whatever it wants is not outmaneuvering so much as rolling in the pigsty.
So is the theory that OpenAI believes it can’t compete on the open market or that they don’t know this will eventually cost them their consumer business?
I doubt most consumers pay enough attention that they would be aware of something like this. Even if they did, few companies have clean hands these days, so it just falls into the general haze of "everything is awful."
For OpenAI, it is likely a huge contract which gives them immediate cash today. Plus the event can be repackaged in further financing deals. "Good enough for the DoD, with N year contracts for analysis of the hardest problems"
Everyone already knows what he is going to do when it comes to that.
It also doesn't matter because Claude 4.6 is so much better at writing code that nobody cares what OpenAI is doing.