* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
Making or distributing a photo of a non-consenting bikini-wearer is no more illegal when it originates from a computer in a bedroom than from a camera on a public beach.
The part of X's reaction to their own publishing that I'm most looking forward to seeing in slow motion in the courts and press is their attempt at agency laundering by having their LLM generate an apology in the first person.
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If a raid on an office in France turns up a bunch of US military secrets, it would suggest the company is not fit to hold those kinds of contracts.
That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLMs. And maybe accidentally leak confidential info.
I think the Grok incident(s) were distasteful, but I can't honestly think of a reason to ban Grok and not any other AI product, or even Photoshop.
I barely use it these days and think adding it to Twitter is pretty meh, but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than Grok. E.g. the prune-juice-drinking sandal wearers in Britain (many of whom are now government backbenchers) have absolutely despised Twitter and wanted to ban it ever since their team lost control. Similar vibe across the rest of Europe.
They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost certainly plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment, but it is genuinely reliant on Twitter. The centre-left is basically dead, so it doesn't get the same value from Bluesky / its parts of Twitter.
Interesting. This is basically the second enforcement on speech / images that France has done - the first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do).
Libel must be an assertion that is not true. Photoshopping or AI-ing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this were true?", which is perfectly legal.
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
Marginal note: Mode of expression
(2) A defamatory libel may be expressed directly or by insinuation or irony
(a) in words legibly marked on any substance; or
(b) by any object signifying a defamatory libel otherwise than by words.”
It doesn't have to be an assertion, or even a written statement.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
Durov was held on suspicion that Telegram was willingly failing to moderate its platform and allowing drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to violations of data-safety law or to association with separate criminal activities; neither is about speech.
CSAM was also the lead in the 2024 news headlines about the French prosecution of Telegram. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
"I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
An assassination market, at least the one we discussed, works like this: one or more people put up a bounty to be paid out on the death of someone. Anyone can submit a (sealed) description of the death. On death, the descriptions are opened — the one closest to the actual circumstances is paid the bounty.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact, when I got in touch, that their view was that the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
>but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
Telegram isn't end-to-end encrypted. For all the marketing about security, it has none apart from TLS and an optional "secret chat" feature that you have to explicitly select, that only works with 2 participants, and that doesn't work very well.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
OK, thank you! I did not know that, I'm ashamed to admit! Sort of like studying physics at university and, a decade later, forgetting V=IR when I actually needed it for a solar install. I took a "technical hiatus" of about 5 years and am only recently coming back.
Anyway, to cut to the chase, I just checked out Matthew Green's post on the subject; he is on my list of default "trust what he says about cryptography" people, along with some others like djb, Nadia Heninger, etc.
Embarrassed to say I did not realise it; I should have known! 10+ years ago I used to lurk in the IRC dev channels of every relevant cypherpunk project, including TextSecure and otr-chat. I watched Signal being made, and before that witnessed chats with the devs and Ian Goldberg and others. I just assumed Telegram was multiparty OTR,
OOPS!
Long-winded post because that is embarrassing (as someone who studied cryptography as a mathematics undergrad in 2009, did a postgrad wargames and computer-security course in 2010 and, worse, whose word on these matters was taken around 2012-2013 by activists, journalists and researchers with pretty gnarly threat models - for instance, some Guardian stories and a former researcher into torture). I'm also the person who wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear that the threat model was that anyone could be there. Oops, oops, oops.
Yes, thanks for letting me know. I hang my head in shame for missing that one, or for somehow believing it without much investigation; thankfully it was just my own personal use, to contact a friend in the States who isn't already on Signal, etc.
Anyway, as they say, "use it or lose it": my assumptions here are no longer valid, and I shouldn't be considered to have an educated opinion if I got something that basic wrong.
To people claiming a physical raid is pointless from the point of view of gathering data:
- you are thinking about a company doing good things the right way. You are thinking about a company abiding by the law, storing data on its own server, having good practices, etc.
The moment a company starts to do dubious stuff, good practices go out the window. People write emails with cryptic analogies, people start deleting emails... Then, as the circumventions become more numerous and complex, there still needs to be a trail for things to remain understandable. That trail will be in written form somehow, and it must be hidden. It might be paper, it might be shadow IT, but the point is that unless you are merely forgetting to keep track of the coffee pods in the social corner, you will leave traces.
So yes, raids do make sense BECAUSE it's about recurring complex activities that are just too hard to keep in the mind of one single individual over long periods of time.
This vindicates the pro-AI censorship crowd I guess.
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" here instead of simply "distributed".
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).
Holding corporations accountable for their profit streams is "censorship"? I wish they'd stop passing off models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chatbots at this particular point in history.
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.
This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
> and no crime was prevented by harassing local workers.
Seizing records is usually a major step in an investigation. It's how you get evidence.
Sure, it could just be harassment, but this is also what normal police work looks like. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
I'm of two minds about this.
On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.
On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.
The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.
It is very different. It is YOUR 3D printer; no one else is involved. If you print a knife and kill somebody with it, you go to jail; no third party is involved.
If you use a service like Grok, then you are using somebody else's computer and infrastructure. X owns the computer that produced the CP, so of course X is at least partly liable for producing it.
How does that mesh with all the safe harbour provisions we've depended on to make the modern internet, though?
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into pictures of them wearing bikinis, which is far less bad.
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that's what you're asking for.
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
boot taste good
adobe must be shaking in their pants
Internet routers, network cards, computers, operating systems and various application software have no guardrails and are used for all sorts of nefarious things. Why aren't those companies raided?
This is like comparing the danger of a machine gun to that of a block of lead.
Carriage services have long been exempt from liability for the services they carry, as long as they follow other laws like lawful intercept, so that criminals can be detected.
Sorry but I feel this needs to be said: DUHHHHHHHH!!!!!!!!!
Also, I need you to understand that the person who creates the child porn is the ultimate villain; transferring it across a carriage service or an unrelated OS is only a crime for the carrier if they are able to detect and prevent it and fail to do so. In this case, Grok is being used as an automated, turnkey child porn creation system. The OS, following your logic, would only be at fault if Grok were so thoroughly bad it could not be removed through other means and OS-level functions were required to block it. Ditto, it's very possible that Grok might find its way onto an internet filter, if the outcome of this investigation leads to its blacklisting but the US government continues to permit it to seed the planet with indecent images of young people. In which case a router might be taken as evidence against an ISP that failed to implement the ban.
Sorry again, but this is just so blindingly obvious: DERRRRRRRRRRRRRR!!!!!!!!!
I am doing my best to act in keeping with the requirements of this website; unfortunately you have just made some statements so patently ridiculous that it's a moral imperative that they be immediately and thoroughly ridiculed. Ridicule is the only possible response, because there's no argument or supposition behind these statements, only a complete, leaden lack of thought, foresight or understanding.
If you want to come up with something better than the world's worst combination non sequitur/whataboutism, I will do my best to take it seriously. Until then, you should reflect on why you made such an overwhelmingly dense statement. Duh.
Don't forget Polaroid in that.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with it as my example). People still jailbreak ChatGPT, and how much money have they poured into that?
French prosecutors use police raids far more than those in other Western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e. very little), we do at least investigate them.
Lmao, they literally made a broadly accessible CSAM maker.
>Car manufacturers literally made a broadly accessible baby killer
It would be an interesting idea to require people to get a "driver's license" before they are allowed to use an AI.
Car manufacturers are required to add features to make it less likely that cars kill babies.
What would happen if Volvo made a special baby-killing model with extra spikes?
Tesla did; that's the main reason there are no Cybertrucks in Europe. They are not allowed because they are too dangerous.
typical newgen carti fan take
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
It was known that Grok was generating these images long before any action was taken. I imagine they'll be looking for internal communications on what they were doing, or deciding not to do, during that time.
Maybe emails between the French office and the head office warning they may violate laws, and the response by head office?
There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok's safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and "sticky" for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
> External analysts said Grok was generating a CSAM image every minute!!
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?
Do you think that data is stored at the office? Where do you think the data is stored? The janitor's closet?
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
Wouldn't surprise me, but they would have to be very incompetent to say that outside of an attorney-client privileged conversation.
OTOH, it is Musk.
They could shut it off out of a sense of decency and respect, wtf kind of defense is this?
You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop it). We're talking about what sort of incriminating written statements the raid might hope to discover.
out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, only to be blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy or other things, just to keep users addicted.
This sort of thing will be great for the SpaceX IPO :/
Especially if contracts with SpaceX start being torn up because the various ongoing investigations and prosecutions of xAI are now ongoing investigations and prosecutions of SpaceX. And then come new lawsuits over the conflict of interest created by the merger.
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility for non-logged-in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
> official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
>I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms
I think we are getting very close to the EU's own great firewall.
There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?
- fine harvesting mechanism? Keep as-is.
- true user protection? Blacklist.
Or the companies could obey the law
In an ideal world they'd just have an RSS feed on their site and people, journalists, would subscribe to it. Voilà!
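(And consuming such a feed is trivial. Below is a minimal sketch, using only the Python standard library and a made-up feed URL, of how a newsroom script might poll an official RSS feed and list the newest items; it's illustrative, not any institution's actual feed.)

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical feed URL; a stand-in for whatever an institution might publish.
    FEED_URL = "https://example.gouv.fr/presse/feed.xml"

    def latest_items(url, limit=5):
        """Fetch an RSS feed and return (title, link, pubDate) for the newest items."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        items = []
        for item in root.iter("item"):  # RSS items are typically listed newest-first
            items.append((
                item.findtext("title", ""),
                item.findtext("link", ""),
                item.findtext("pubDate", ""),
            ))
        return items[:limit]

    if __name__ == "__main__":
        for title, link, published in latest_items(FEED_URL):
            print(f"{published}  {title}")
            print(f"  {link}")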
This. What a joke. I'm still waiting on my tax refund from NYC for plastering "Twitter" stickers on every publicly funded vehicle.
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
Who decides what communication is in the interest of the public at large? The Trump administration?
You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be to use commercial social platforms as secondary distribution instead of as the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
Public institutions can use any system they want and make the public responsible for reading it.
Another discussion: https://news.ycombinator.com/item?id=46872894
I remember encountering questionable hentai material (by accident) back in the Twitter days. But back then Twitter was a leftist darling.
I think there's a difference between "user-uploaded material isn't properly moderated" and "the site's own chatbot generates porn on request based on images of women who didn't agree to it", no?
But it doesn't. Grok has always had aggressive filters on sexual content, just like every other generative AI tool.
People have found exploits, just like with every other generative AI tool.
Define "leftist" for back in the Twitter days? I used Twitter early after release. I don't recall it being a faction-specific platform.
Did you report it or just let it continue doing harm?
Good and honestly it’s high time. There used to be a time when we could give corps the benefit of the doubt but that time is clearly over. Beyond the CSAM, X is a cesspool of misinformation and generally the worst examples of humanity.
Finally, someone is taking action against the CSAM machine operating seemingly without penalty.
It's also a massive problem on Meta. Hopefully this action isn't just a one-off.
I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/
CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.
No abuse of a real minor is needed.
As good as Australia's little boobie laws.
https://www.theregister.com/2010/01/28/australian_censors/
> CSAM does not have a universal definition.
Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.
> In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.
No corroboration found on web. Quite the contrary, in fact:
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
https://rm.coe.int/factsheet-sweden-the-protection-of-childr...
> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
" Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
> Are you from Sweden?
No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.
> Why do you think the definition was clear across the world and not changed "before AI"?
I didn't say it was clear. I said there was no disagreement.
And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.
"No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."
So you can't speak Swedish, yet you think you grasped the Swedish law's definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judicial definitions of CSAM in different countries, each with different edge cases and ways of handling them. I very much doubt there is no disagreement.
But my guess, based on your post, is that an American has to learn once again that there is a world outside of the US, with different rules and different languages.
> So you can't speak Swedish, yet you think you grasped the Swedish law's definition?
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
" I too doubt there's material disagreement between judicial definitions. "
Sources? Sorry, your gut feeling does not matter, especially if you are not a lawyer.
I have no gut feeling here. I've seen no disagreeing judicial definitions of CSAM.
Feel free to share any you've seen.
> Even the Google "AI" knows better than that. CSAM "is [...]"
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
Thanks. For a moment I slipped and fell for the "AI" con trick :)
> - in any current law.
It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).
The holder was not convicted, but that is beside the point about the material.
> It has been since at least 2012 here in Sweden. That case went to our highest court
This one?
"Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.
> and they decided a manga drawing was CSAM
No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.
In Swedish:
https://www.regeringen.se/contentassets/5f881006d4d346b199ca...
> Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.
Which translates roughly as: "Even an image in which a child, e.g. through special camera arrangements, is depicted in a way intended to appeal to the sexual drive, without the depicted child being considered to have taken part in sexual behaviour when the image was made, can be covered by the provision." In other words, the child does not have to take part in a sexual act, and undressing a child using AI could indeed be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produced by Grok are CSAM by Swedish standards.
Where do these people come from???
The lady doth protest too much, methinks.
That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
Because that is up to the courts to interpret. You cant use your common law experience to interpret the law in other countries.
> You can't use your common law experience to interpret the law in other countries.
That interpretation wasn't mine. It came from the Court of Europe doc I linked to. Feel free to let them know it's wrong.
So aggressive and rude, and over... CSAM? Weird.
Are you implying that it's not abuse to "undress" a child using AI?
You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
> Are you implying that it's not abuse to "undress" a child using AI?
Not at all. I am just saying it is not CSAM.
> You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools.
It's terrible. And when "AI"s are found spreading deepfakes around schools, do let us know.
Why do you want to keep insisting that undressing children is not CSAM? It's a weird hill to die on..
CSAM: Child Sexual Abuse Material.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
> When you undress a child with AI,
I guess you mean pasting a naked body on a photo of a child.
> especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated.
In which country is that?
Here in the UK, I've never heard of anyone jailed for doing that, whereas many are jailed for making actual child sexual abuse material.
It doesn't mention grok?
Sure does. Twice. E.g.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
CTRL-F "grok": 0/0 found
You're using an "AI" browser? :)
I found 8 mentions.
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France are the only ones doing this.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
> If anything it should be an embarrassment that France are the only ones doing this.
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
Full marks to France for addressing its higher than average rate of unemployment.
/i
> when notified, doing nothing about it
When notified, he immediately:
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
You and I must have different definitions of the word "immediately". The article you posted is from January 15th. Here is a story from January 2nd:
https://www.bbc.com/news/articles/c98p1r4e6m8o
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
> Which is an entirely different legal liability.
In UK, it is entirely the same. Near zero.
Making or distributing a photo of a non-consenting bikini-wearer is no more illegal when generated by a computer in a bedroom than when taken by a camera on a public beach.
I thought this was about France
It was... until it diverted. https://news.ycombinator.com/item?id=46870196
The part of X's reaction to their own publishing that I'm most looking forward to seeing in slow motion in the courts and press is their attempt at agency laundering by having their LLM generate an apology in the first person.
“Sorry I broke the law. Oops for reals tho.”
Kiddie porn but only for the paying accounts!
The other LLMs probably don't have the training data in the first place.
Er...
"Study uncovers presence of CSAM in popular AI training dataset"
https://www.theregister.com/2023/12/20/csam_laion_dataset/
I suppose those are SpaceX's offices now that they merged.
So France is raiding offices of US military contractor?
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
I know it's hard for you to grasp, but in France, French law and jurisdiction apply, not those of the United States.
Even if it is, being affiliated with the US military doesn't make you immune to local laws.
https://www.the-independent.com/news/world/americas/crime/us...
That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLM. And maybe accidentally leak confidential info.
I think the grok incident/s were distasteful but I can't honestly think of a reason to ban grok and not any other AI product or even photoshop.
I barely use it these days and think adding it to Twitter is pretty meh, but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than Grok. E.g. the prune-juice-drinking sandal-wearers in Britain (many of whom are now government backbenchers) have absolutely despised Twitter and wanted to ban it ever since their team lost control. Similar vibe across the rest of Europe.
They have (astutely, at least if they realise it) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost certainly plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment, but it is genuinely reliant on Twitter. The centre-left is basically dead, so it doesn't get the same value from Bluesky / its parts of Twitter.
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.
In what world is generating CSAM a speech issue? It's really doing a disservice to actual free speech issues to frame it as such.
if pictures are speech, then either CSAM is speech, or you have to justify an exception to the general rule.
CSAM is banned speech.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
If libeling real people is a harm to those people, then altering photos of real children is certainly also a harm to those children.
I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do).
Libel must be an assertion that is not true. Photoshopping or AI-editing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this were true?", which is perfectly legal.
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
It doesn't have to be an assertion, or even a written statement.
You're quoting Canadian law.
In the US it varies by state but generally requires:
A false statement of fact (not opinion, hyperbole, or pure insinuation without a provably false factual core).
Publication to a third party.
Fault
Harm to reputation
----
In the US it is required that it is written (or in a fixed form). If it's not written (fixed), it's slander, not libel.
The relevant jurisdiction isn't the US either.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
Really? By what US definition of CSAM?
https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
That's not what we are discussing here. Even less so when a lot of the material here consists of edits of real pictures.
Very different charges however.
Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to violations of data protection law or to association with separate criminal activities; neither is about speech.
I like your username, by the way.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
In some shady corners of the internet I still see advertisements for child porn through Telegram, so they must be doing a shit job at it.
"I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
An assassination market, at least the one we discussed, works like this - One or more people put up a bounty paid out on the death of someone. Anyone can submit a (sealed) description of the death. On death, the descriptions are opened — the one closest to the actual circumstances is paid the bounty.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact when I got in touch that their view was the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
The issue is still not really speech.
Durov wasn't arrested because of things he said or things that were said on his platform; he was arrested because he refused to cooperate in criminal investigations into activities he allegedly knew were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
>but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
They were downvoted for completely misunderstanding the comment they replied to.
I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".
This is precisely the point of the comment you are replying to: a balance has to be found and enforced.
I wouldn't equate the two.
There's someone who was being held responsible for what was in encrypted chats.
Then there's someone who published depictions of sexual abuse of minors.
Worlds apart.
Telegram isn't end-to-end encrypted. For all the marketing about security, it has essentially none, apart from TLS and an optional "secret chat" feature that you have to explicitly select, that only works with 2 participants, and that doesn't work very well.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
OK, thank you! I did not know that, I'm ashamed to admit! Sort of like studying physics at university and, a decade later, forgetting V=IR when I actually needed it for a solar install. I took a "technical hiatus" of about 5 years and am only recently coming back.
Anyway, to cut to the chase, I just checked out Matthew Green's post on the subject; he is on my list of default "trust what he says about cryptography" people, along with some others like djb, Nadia Heninger, etc.
Embarrassed to say I did not realise; I should have known! 10+ years ago I used to lurk in the IRC dev channels of every relevant cypherpunk project, including TextSecure and OTR chat. I saw Signal being made and, before that, witnessed chats with the devs and Ian Goldberg and so on. I just assumed Telegram was multiparty OTR.
OOPS!
Long-winded post because that is embarrassing: I studied cryptography as a mathematics undergrad in 2009, did a postgrad wargames and computer security course in 2010, and, worse, around 2012-2013 my word on these matters was taken by activists, journalists and researchers with pretty gnarly threat models (for instance some Guardian stories and a former researcher into torture). I'm also the person who wrote the bits of "how to hold a crypto party" that made it a protocol without an organisation and made clear the threat model was that anyone could be there. Oops, oops, oops.
Yes, thanks for letting me know. I hang my head in shame for missing that one, or somehow believing it without much investigation. Thankfully it was just for my own personal use, to contact friends in the States who aren't already on Signal etc.
EVERYONE: DON'T TRUST TELEGRAM AS END TO END ENCRYPTED CHAT https://blog.cryptographyengineering.com/2024/08/25/telegram...
Anyway, as they say, "use it or lose it": my assumptions here are no longer valid, and I shouldn't be considered to have an educated opinion if I got something that basic wrong.
>but I do really like a heterogenous cultural situation
Why isn't that a major red flag exactly?
Hi there - author here. Care to add some specifics? I can imagine lots of complaints about this statement, but I don't know which (if any) you have.