What a pointed title. That aside, I am rather surprised that a committee's investigation report is this light on what in my opinion are fundamental details, including the make-up of the committee, the members' respective duties and the course of the investigative process. Notwithstanding the potentially political raison d'être of the report, is that customary for Congressional committees?
Here's the full membership of the committee:
https://selectcommitteeontheccp.house.gov/members
The gripe I have with this is that it is 1) an impermanent external resource that shows 2) the current membership, not the membership at the time of the report, of a committee that is 3) subject to change at any time, and thus not a lasting appendix to the report. I guess I had expected more academic rigour from a congressional committee.
You have too much faith in US leadership.
Dang, you really can make anything sound scary if you use the right language!
1. ChatGPT funnels your data to American Intelligence Agencies through backend infrastructure subject to U.S. Government National Security Letters (NSLs) that allow for secret collection of customer data by the US Department of Defense.
2. ChatGPT covertly manipulates the results it presents to align with US propaganda, as a result of the widely disseminated Propaganda Model and close ties between OpenAI's leadership and the US Government.
3. It is highly likely that OpenAI used unlawful model training techniques to create its model, stealing from leading international news sources, academic institutions, and publishing houses.
4. OpenAI’s AI model appears to be powered by advanced chips manufactured by Taiwanese semiconductor giant TSMC and reportedly utilizes tens of thousands of chips that are manufactured by a Trade War adversary of America and subject to a 32% import duty.
Yeah, the Chinese govt has far less incentive to mess with me personally than the US govt does. It's hard to convince people of this point of view, I have found.
As a non-American, none of these seem to be any worse than US-based models?
In journalism it's called a hit piece, and this one is particularly low-quality. Embarrassing.
The Sinophobia going through America is a form of insanity causing America to do enormous harm to itself.
From banning open source software to destroying the business of its largest and most profitable companies.
It's amusing to see the hypocrisy on display, though. The authors of the report seem to be seriously accusing DeepSeek of IP theft from OpenAI, which was built on... IP theft. LOL.
As someone in Europe, I sometimes wonder what’s worse: letting US companies use my data to target ads, or handing it to Chinese companies where I have no clue what’s being done with it. With one I at least get an open source model. The other is a big black box.
Both are bad. If Europe does not develop local alternatives to ChatGPT or DeepSeek, it will (slowly) lose its sovereignty.
Europe is developing local alternative models such as Mistral
> With one I at least get an open source model. The other is a big black box.
It doesn't matter much, as in both cases the provider has access to your ins and outs. The only question is whether you trust the company operating the model. (Yes, you can run a local model, but it's not that capable.)
They're not open source. It's nice of Meta and Deepseek to offer up their models for download, but that doesn't make them open source.
Hard to be fully open source if you train on copyrighted material.
Anyway, DeepSeek is the most open of the SOTA models.
Did they open their datasets already? It would be nice to have the 'thinking' part.
Isn't this a bit of semantic lawyering? Open model weights are not the same as open source in a literal sense, but I'd go so far as to suggest that open model weights fulfill much of the intent / "soul" of the open source movement. Would you disagree with that notion?
> open model weights fulfill much of the intent / "soul" of the open source movement
Absolutely not. The intent of the open source movement is sharing methods, not just artifacts, and that would require training code and methodology.
A binary (and that's arguably what weights are) you can semi-freely download and distribute is just shareware – that's several steps away from actual open source.
There's nothing wrong with shareware, but calling it open source, or even just "source available" (i.e. open source with licensing/usage restrictions), when it isn't, is disingenuous.
> The intent of the open source movement is sharing methods, not just artifacts, and that would require training code and methodology.
That's not enough. The key point was trust: an executable can be verified by independent review and rebuild. If it cannot be rebuilt, it can carry a virus, trojan, backdoor, etc. For LLMs there is no way to reproduce the training, and thus no way to verify them. So they cannot be trusted on their own; we have to trust the producers. That's not so important when models are just talking, but with tool use they can do real damage.
Hm, I wouldn't say that that's the key point of open software. There are many open source projects that don't have reproducible builds (some don't even offer any binary builds), and conversely there is "source available" software with deterministic builds that's not freely licensed.
On top of that, I don't think it works quite that way for ML models. Even their creators, with access to all training data and training steps, have a very hard time reasoning about what these things will do exactly for a given input without trying it out.
"Reproducible training runs" could at least show that there's been no active adversarial RLHF, but they seem prohibitively expensive in terms of resources.
Well, 'open source' is interpreted in different ways. I think the core idea is that it can be trusted. You can get a Linux distribution and recompile every component except the proprietary drivers. With that being done by independent groups, you can trust it enough to run a bank's systems. The other option is something like Windows, where you have to trust Microsoft and their supply chain.
There are different variations, of course. Mostly related to the rights and permissions.
As for big models: even their owners, with all the hardware, training data and code, cannot reproduce them. A model may have some undocumented functionality, pretrained or added in a post-process, that is almost impossible to detect without knowing the key phrase. It could be a harmless watermark or something else.
I saw the argument that source code is the preferred base for making changes and modifications to software, but in the case of these large models, the weights themselves are the preferred form.
It's much easier and cheaper to make a finetune or LoRA than to train from scratch to adapt a model to your use case. So it's not quite like source vs. binary in software.
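A toy sketch of why LoRA adaptation is so much cheaper than full fine-tuning (the dimensions here are assumed ballpark values, not DeepSeek's actual ones): instead of updating a full d x d weight matrix, you train two low-rank factors B (d x r) and A (r x d) with r much smaller than d, and the effective weight is W + B @ A.

```python
import numpy as np

d, r = 4096, 8  # hidden size and LoRA rank (illustrative ballpark values)

full_params = d * d          # parameters touched by full fine-tuning
lora_params = d * r + r * d  # parameters in the low-rank update B @ A

print(full_params, lora_params, full_params // lora_params)  # 256x fewer

# At inference time the adapted weight is the frozen base plus the update:
W = np.zeros((d, d), dtype=np.float32)  # frozen pretrained weight (stand-in)
B = np.zeros((d, r), dtype=np.float32)  # trained low-rank factor
A = np.zeros((r, d), dtype=np.float32)  # trained low-rank factor
W_eff = W + B @ A
assert W_eff.shape == (d, d)
```

At rank 8 the trainable-parameter count drops by a factor of 256 per adapted matrix, which is the economic gap the comment above is pointing at.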
Meta's models do not; they have use restrictions. At least DeepSeek's do not.
It does not, and I totally disagree with that. Unless we can see the code that goes into the model to stop it from telling me how to make cocaine, it's not the same sort of soul.
The US is a capitalistic liberal democracy and China is a one-party capitalistic dictatorship. Make your choice.
The US tends towards dictatorship: due process is an afterthought, people are disappearing off the streets, citizens are getting arrested at the border for nothing, tourists are getting deported over minute issues such as an iffy hotel booking. And that's just off the top of my head from the last two days.
You make it seem so binary. If you do enough research on the US you might change your mind. YES, I would still choose the US.
Sinophobic junk. You got shown up by a free and open model and wasted a gazillion dollars, good job. So yes let’s ban the competition and force Americans to use the junky ad riddled cheap clones.
Looks like DeepSeek is having its TikTok moment!
It's important to distinguish the DeepSeek app from the open-weight models, which are released under very liberal licenses and give you full control over where data fed to the model goes, e.g. it can stay in the USA.
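When self-hosting the weights, one practical step is at least verifying the downloaded files against the checksums the vendor publishes. A minimal sketch (the file and expected digest here are stand-ins; in practice the path would be a downloaded weight shard and the digest would come from the model card):

```python
import hashlib
import os
import tempfile

def sha256_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so multi-gigabyte weight shards
    # don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a stand-in file; `expected` would normally be the published checksum.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
digest = sha256_file(path)
os.unlink(path)
print(digest)
```

This only proves you got the same bytes as everyone else, not anything about how the model was trained, which is exactly the limitation the reproducibility subthread above is arguing about.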
Everybody is spying on everybody; it's a free-for-all. If you want to be out of reach, either stop using software for sensitive information and communication or start using fully encrypted products. Cryptography is the key.
As long as I can run it on my own cheap hardware, I'll be using it. Our contracts with some of our customers stipulate that their data never leaves our servers.
A minimum production setup for V3/R1 is 16x H100s… I guess it's up to you whether that qualifies as cheap.
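A back-of-envelope sketch of where a figure like 16x H100 comes from, under assumed round numbers (~671B total parameters, FP8 weights at one byte per parameter, 80 GB of memory per H100):

```python
params_b = 671          # DeepSeek-V3/R1 total parameters, in billions (assumed)
bytes_per_param = 1     # FP8 weights: one byte per parameter
h100_mem_gb = 80        # memory per H100 (80 GB variant)

weights_gb = params_b * bytes_per_param   # ~671 GB for the weights alone
min_gpus = -(-weights_gb // h100_mem_gb)  # ceiling division -> 9

print(weights_gb, min_gpus)
```

Nine cards just to hold the weights; KV cache, activations, and sharding overhead are what push a practical deployment toward 16.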
It's shocking how much American soft power has diminished in such a short period. White House documents used to mean something, carried a certain weight, whereas now some of them are simply ridiculous. This one isn't even particularly bad. But we know who inspired it, and given that DeepSeek made their models available and OpenAI didn't, whatever is written here should be taken with more than one grain of salt.
What's interesting is that most of this is applicable to proprietary US models when used by non-US users, too. "Stores data in the US"? Yes. "Complies with approved narratives"? Check. "Cooperates with intelligence services and the military"? Check. The only real solution here are open weights, and Deepseek is the strongest open-weights model to this day. Don't like it? Compete.
It's interesting that they call out NVIDIA specifically as an enabler. MAGA going to war against NVIDIA now?
Woohoo. Better token throughput for Europeans...
Get your models before they're gone: https://huggingface.co/collections/deepseek-ai/deepseek-r1-6...
Summary: https://selectcommitteeontheccp.house.gov/media/reports/deep...
".. siphons data back to the People’s Republic of China (PRC)"
How does that work when I run the model myself?
Cry me a river, you tried to build a massive moat to force the rest of the world to suck you off for access and now you got caught with your pants down by a model that has been given out for free.
I wouldn't want to know how the US would use the discovery of cold fusion or a cure for all to make a profit for its elite instead of giving it out for the greater good.
America and Americans, by and large, do not mix well with the "greater good". It's a very individualist society, aka a "care about myself" society.
You can say that about any country. Pull the crank hard enough and you could say the same about Liechtenstein.