4 comments

  • lgl 40 minutes ago

    > the next generation of Ai companies will be easily valued at 10T

    I'm not sure where this conclusion is coming from. We're very likely already in an AI bubble, so I expect open/free models will eventually dilute these companies' ridiculous valuations. The natural increase in consumer hardware power will also eventually let many people just run local models instead, for both privacy and cost reasons.

    And seeing as most models are essentially just improved versions of the previous ones, with larger context and more training data, then unless some new "Attention Is All You Need"-style paper comes out and gives us a big step into AGI territory, I'm really not seeing a new company reach a $10T valuation by releasing marginally better models every couple of months, imho.

  • JacobArthurs 18 hours ago

    It's worth separating "refusing a contract" from "resisting oversight" though. Anthropic declining DoD terms is still just a procurement decision, not a power grab, even if the blowback (getting labeled a supply chain risk) makes it feel weightier. The scary version of your concern is whether regulatory frameworks can keep pace with $10T companies, and on that I think you're right that the window is closing faster than governments realize.

  • vessenes 21 hours ago

    Good thoughts. Welcome to the discourse.

    A couple of things to put out there. First, the US has a fairly strong rule of law that the government cannot compel speech: while speech can be blocked or stopped, it's a hard rule of the republic that we cannot force certain speech. This is the legal theory behind canary statements, by the way: you publish the statement "I have not been forced to remove any user from this system by a secret court," and when it's no longer true, you remove the statement.

    This speech concept extends to, say, software: a company can refuse to create software or tooling or what have you, if it chooses. What if a company has something deemed to be in the national security interest but does not wish to use it on behalf of the country? Traditionally, both soft and hard power get applied. Soft: conversations, hearts and minds, perhaps threats, all aimed at getting the company on board with the national goal.

    Hard: nationalization. The US has typically reserved nationalization for bailouts or for reworking pernicious economic incentives, but there have been some wartime nationalizations in the past (Google tells me Western Telegraph and Smith and Wesson), and Truman nationalized basically whatever he wanted before and during the Korean War.

    Nationalizing a valuable, research-dominated company like Anthropic is risky. You can't force research scientists to work, though you could almost certainly find people to keep the inference running. So you might get something today, and trigger a legendary set of Supreme Court cases, but you have no guarantee the goose will keep laying its golden eggs once Sec. Hegseth is in charge. I would guess this will be a very, very last resort even for the most aggressive of governments when there are credible alternatives in the economy. Under those terms, economics and market forces can do a lot of the work.

    Upshot: I predict this is Sturm und Drang, and we'll see Anthropic figure out how to keep its government contracts while oAI simultaneously works its way into more government work.

  • watwut 19 hours ago

    > it opens up a precedent for Ai companies in the future to resist governmental oversight

    A company being allowed to NOT do business with the government somehow makes oversight impossible? Make it make sense.

    The USA is already basically controlled by oligarchs. The road there did not go through companies refusing business.