I think the author is right that harms caused by incorrect content aren't—and shouldn't be—the fault of section 230, and are instead the fault of the original producers of the content.
I think the author is wrong in claiming that modern attention-optimizing recommendation algorithms are better than older, more primitive ones. Appearing to be more engaging/addictive does not imply more value. It's a measurement problem.
> Appearing to be more engaging/addictive does not imply more value.
For modern-day businesses it sadly does. But that misalignment over how to define "quality" is part of why we're in this ongoing divide over whether social media is good or bad to begin with.
I'd like to propose something akin to the Ship of Theseus Paradox: let's call it the Ransom Letter Paradox.
At what point do newspaper clippings arranged together become the work of the arranger and not of the individual newspapers? If I take one paragraph from the NYT and one paragraph from the WSJ, am I the author, or are the NYT and WSJ the authors? If I take 16 words in a row from each and alternate, am I the author? If I alternate sentences, am I the author?
At some point, there is a higher-order "creation" of context between individually associated videos played together in a sequence. If I arrange one-minute clips into an hour-long video, I can say something the original authors never intended. If I, algorithmically, start following up videos with rebuttals, but only rebuttals that support my viewpoint, I am ADDING context by making suggestions. Sure, people can click next, but in my ransom note example above, people can speed-read and skip words as well. Current suggestion algorithms may not be purposely "trying to say something," but they effectively BECOME speakers almost accidentally.
Ignoring that a well-crafted sequence of videos can create new meaning leaves us with a disingenuous interpretation of what suggestion algorithms either are doing or can do. I'm not saying that Google is purposely radicalizing children into, let's say, white nationalists, but there may be something akin to negligence going on if they can always point to a black-box algorithm, one with a mind of its own, as the culprit. Winter v. GP Putnam giving them some kind of amnesty from their own "suggestions" rubs me the wrong way. Designing systems to give people "more of what they want" rubs me the wrong way because it narrows horizons rather than broadening them. That lets me segue into again linking to my favorite internet article ever (which the BBC has somehow broken the link to, so here is the real link and an archive: https://www.bbc.co.uk/blogs/adamcurtis/entries/78691781-c9b7... https://archive.ph/RoBjr ). I'm not sure I have an answer, but current recommendation engines are the opposite of it.
If one treats the order of content as a message unto itself, then wouldn't an attempt to regulate or in some way restrict recommendation algorithms infringe upon freedom of speech? If I decide to tweak my site's recommendation algorithm to slightly more often show content in favor of a particular political party, isn't that my right?
Section 230 is about who's liable if speech either breaks the law or causes damages in some way unprotected by the First Amendment. That's the table-stakes of the discussion. It's a little silly to bring up the First Amendment given that context.
230 is about who is NOT liable. Platforms are NOT liable for what they don't moderate, just because they moderated other things. It protects them from imperfect moderation being used to claim endorsement.
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
"No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
230 says you can moderate however you like, and what you choose to leave up doesn't become your own speech through endorsement osmosis.
I agree with 230 to a point, but at some extreme it can be used to misrepresent speech as "someone else's." Similar to how the authors of newspapers wouldn't be the speakers of a ransom note just because they contributed one letter or word, and it would be absurd to claim otherwise.
If the arranger is the speaker, restrictions on free speech apply to their newly created context. Accountability applies.
The way the problem is phrased makes it reducible to the sorites problem.
There are better ways of formulating the question that avoid this paradox, such as "what are the necessary and sufficient conditions for editorial intervention to dominate an artifact?"
And for better or worse spreading white nationalist propaganda isn't illegal. It's not good, but we have the first amendment in this country because we don't want the government to decide who can speak.
I'm sure the difficulty the New York Times editors have summarizing laws related to online publishing shouldn't make you wonder what glaring mistakes are in their other reports on topics the newspaper wouldn't be expected to know as deeply.
Related to that, there's the "Gell-Mann Amnesia" effect[1], where an expert sees numerous mistakes in news reporting on their own area of expertise, but somehow takes the rest of the reporting as accurate.
I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.
~~As an example, there are subreddits like /r/therewasanattempt or /r/interestingasfuck that ban users that post in /r/judaism or /r/israel (there used to be a subreddit /r/bannedforbeingjewish that tracked this but that was banned by reddit admins). This isn't the First Amendment, since it's just a ban based on identity instead of posting content.
According to the US legal system, discrimination based on religion is wrong. I should be able to fix this by complaining to reddit and creating actual knowledge of discrimination. In practice, because there is no contact mechanism for reddit, it's impossible for me to create actual knowledge.~~
edit: I still believe the above behaviour is morally wrong, but it isn't an accurate example of the actual knowledge standard as others have pointed out. I'm leaving it here for context on the follow-up comments.
The TechDirt article doesn't engage with this. It asserts that:
>> Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.
> None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable.
However, if you try to use Facebook's defamation form[1] and list the United States as an option:
> Facebook is not in a position to adjudicate the truth or falsity of statements made by third parties, and consistent with Section 230(c) of the Communications Decency Act, is not responsible for those statements. As a result, we are not liable to act on the content you want to report. If you believe content on Facebook violates our Community Standards (e.g., bullying, harassment, hate speech), please visit the Help Center to learn more about how to report it to us.
> This isn't the First Amendment, since it's just a ban based on identity instead of posting content.
This is not why the First Amendment does not apply. The First Amendment does not apply to private entities. It restricts the government…
> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
At least from a First Amendment standpoint, Reddit can do what it feels like to establish a religion, ban all religious discussion, tell reporters to go to hell, start a news feed, etc. There are other laws they do need to deal with of course.
They most certainly are allowed to discriminate based on religion. The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion:
There's no law that says you can't, say, run a website that only allows atheists to participate, or non-Jews, or non-Christians, or non-Muslims, or whatever religion or religious classification you want.
Discriminating based on race or ethnicity is a different topic entirely. You can choose your religion (it's like, just your opinion, man) but you can't choose your race/ethnicity. There are much more complicated laws in that area.
> The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion.
You've overlooked prohibitions on religious discrimination in public accommodations [1].
Reddit is not a place of public accommodation. It's a private web company. And furthermore, Reddit isn't the one doing the banning. It's Reddit users that are blocking members from their subreddit. The rest of Reddit is free to be browsed by said users.
If I create a Google chat group, and I only invite my church members to said group is Google violating anti-discrimination laws? No.
~~You're~~ the commenter 3 layers above is trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself.
> You're trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself
You are failing to take into account context. My comment had nothing whatsoever to do with anything about Reddit.
My comment is responding to a comment that asserted that the only laws regarding religious discrimination are for employers potentially discriminating against employees.
I provided an example of a law prohibiting religious discrimination in something other than employment.
There absolutely is such a law, the Civil Rights Act of 1964, specifically Title II. If you provide public accommodations you may not discriminate based on religion. You can't stick a "No Muslims allowed" sign on your restaurant because it's open to the public.
Reddit is publicly available even if they require registration, and neither Reddit nor subreddit mods may legally discriminate based on anything covered under the CRA.
Websites are public accommodations; this has been litigated over and over and over again. That's why you can sue the owner of a website over ADA violations if they offer services or products to the public.
I don't know why you keep doubling down when you're verifiably, provably wrong. I understand you may not want this to be the case, but it is.
My understanding is that businesses cannot deny service based on protected class. E.g. Reddit couldn't put "Catholics are barred from using Reddit" in their TOS.
But subreddit bans are done by users of Reddit, not by Reddit itself. If someone on Xbox Live mutes the chat of Catholics and kicks them from the lobbies they're hosting, you can't go complain to Microsoft because these are the actions of a user not the company.
But Reddit isn't discriminating against certain races, ethnicities, or religions. Individual subreddit moderators are discriminating on the basis of identity. This is no different than creating a Discord server or IRC chat channel where you only let in your church friends. Reddit isn't refusing service on the basis of protected class. Reddit users are doing so.
The issue is that individual subreddit moderators each control hundreds of subreddits with millions of users. If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.
If one friend group is racist and you can't eat dinner at their house, that's qualitatively different than systemic discrimination by the restaurant industry.
In this case, Reddit's platform has enough systemic discrimination that you have to choose between full participation in front-page posts or participation in Jewish communities.
If you're talking about your opinion of what is morally right, or what a healthy social media ecosystem looks like, I'm not really disagreeing with you - I don't think it's good for the subreddit mods to do this. But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws. This is incorrect.
> If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.
The impact is not what matters. What matters is that the banning is done by users, not by the company. Non-discrimination laws prohibit businesses from denying service to customers on the basis of protected class. They don't dictate what users of internet platforms do with their block button.
> that's qualitatively different than systemic discrimination by the restaurant industry.
Right, but a restaurant refusing a customer is a business denying a customer. If Discord or Reddit put "We don't do business with X race" in their ToS that's direct discrimination by Reddit. If subreddit moderators ban people because they do or don't belong to a protected class, that's an action taken by users. You're free to create your own /r/interestingasfuckforall that doesn't discriminate.
A bar can't turn away a customer for being Catholic. But if a Catholic sits down at the bar, and the people next to him say "I don't want to sit next to a Catholic" and change seats to move away from him, that's their prerogative. Subreddit bans are analogous to the latter.
> But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws.
I edited my original comment because others have pointed out it doesn't run afoul of antidiscrimination laws. You acknowledge that this behaviour is morally wrong, but you don't say whether or not a platform should have a responsibility to prevent it. I believe they should.
While the mechanism by which systemic discrimination occurs is different because it's individual users instead of the business, the impact is the same as businesses/public spaces discriminating against individuals.
This is because common social spaces are barred off to people of certain ethnicities and that means they can't fully engage in civic life.
Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their services. Attempts to sue the stores fail, because the store no longer exists by the time you gather the information necessary to sue it. While the mall itself does not promote racial discrimination, it is impossible for visible minorities to shop at the mall.
Should the mall have an obligation to prevent discrimination by its tenants?
I would say "yes". In your bar example, it is still possible for a Catholic to get a drink at the bar. In the mall example, it is impossible for a minority to shop.
On Reddit, if it is effectively impossible for someone who is Jewish or Israeli to participate because they are banned on sight from most subreddits, that should be illegal.
> Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their service
Again, this is already illegal because the stores are businesses and they can't deny service on the basis of protected class.
The issue at hand is that the government cannot compel individual users to interact with other users. How would this work? You try to mute someone on Xbox Live and you get a popup, "Sorry, you've muted too many Catholics in the last month, you can't mute this player." Likewise, would Reddit force the moderators to allow posts and comments from previously banned users? And what would prevent their content from just getting downvoted to oblivion and being automatically hidden anyways?
It's a similar situation because the laws aren't enforceable against the pop-up stores, in the same way you can't sue an anonymous subreddit moderator for being discriminatory.
> Likewise, would Reddit force the moderators to allow posts and comments from previously banned users?
In Reddit's case, moderators are using tools that automatically ban users that have activity in specific subreddits. It's not like it's hidden bias, the bans are obviously because of a person's religion.
Correct, and what's to keep users of the subreddit from tagging posters who've posted in the Israel subreddit and down voting them until they're hidden? There's no effective way to force users to interact with other users they don't want to interact with.
Reddit can (and sometimes does) control who can be, and who is a moderator.
So your distinction doesn't exist: this is effectively the business, Reddit, engaging in discrimination.
Plus, when I read Reddit I'm not interacting with a moderator, I'm interacting with a business.
An actual analogous example would be if individual people use a tool to block anyone Jewish from seeing their comments and replying to them. It would be pretty racist of course, but not illegal. A subreddit though, is not the personal playing area of a moderator.
Reddit bans subreddits whose moderators do not remove content that breaks the ToS. They do not require that communities refrain from banning certain people or content. Basically, you can only get sanctioned by reddit as a moderator for not banning and removing content from your subreddit.
> A subreddit though, is not the personal playing area of a moderator.
Oh, yes. Yes it is.
Many of the better communities have well-organized moderation teams. But plenty do not. And the worst offenders display the blunt reality that a subreddit is indeed the plaything of the top mod.
None of this has to do with the First Amendment including the legal review you linked to.
The Unruh Civil Rights Act discussed there does not extend the First Amendment: the First Amendment does not restrict the actions of businesses, and the Unruh Act does not restrict the actions of Congress or other legislatures.
Freedom of Speech in the Amendment also has specific meaning and does not fully extend to businesses.
People need to understand that the only entity that can violate the constitution is the government. Citizens and companies are not restricted in their actions by the constitution, only the law.
False. See generally the state actor doctrine. Courts have ruled extensively in the context of criminal investigations and FedEx; railroads and drug testing; NCMEC and CSAM hashes; and informant hackers and criminal prosecution.
> I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.
This is misleading. It seems like you're predicating your entire argument on the idea that there is a version of Section 230 that would require platforms to act on user reports of discrimination. But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
Section 230 immunity doesn't depend on "actual knowledge." The law specifically provides immunity regardless of whether a platform has knowledge of illegal content. Providers can't be treated as publishers of third-party content, period.
It's not that "'actual knowledge' no longer matters," it's that it never mattered. Anti-discrimination law is usually for things like public accommodations, not online forums.
My point is that platforms should have more of a responsibility when they currently have none.
> But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
I understand that this is the purpose of the law, and I disagree with it. Section 230 has led to large platforms outsourcing most of their content to users because it shields the platform from legal liability. A user can post illegal content, engage in discrimination, harassment, etc.
> Anti-discrimination law is usually for things like public accommodations, not online forums.
Anti-discrimination law should be applicable to online forums. The average adult spends more than 2 hours a day on social media. Social media is now one of our main public accommodations.
If one of the most-used websites in the USA has an unofficial policy of discriminating against Jewish people that isn't covered by the current laws as that policy is enforced solely by users, that means the law isn't achieving its objectives of preventing discrimination.
I don't disagree with you. But you must distinguish between what the law does, and what it should do, in your view. Otherwise you are misleading people.
I'm not sure where you learned that the US legal system is against religious discrimination in private organizations, but it's not strictly true.
Many religious organizations in the US openly discriminate against people who are not of their religion, from Christian charities and businesses requiring staff to sign contracts stating that they agree with/are members of the religion, to Catholic hospitals openly discriminating against non-Catholics based on their own "religious freedom to deny care". One way they can do this is an exemption to discrimination rules called a bona fide occupational qualification, which suggests that only certain people can do the job.
In a more broad sense, any private organization with limited membership (signing up vs allowing everyone) can discriminate. For example some country clubs discriminate based on race to this day. One reason for this is that the constitution guarantees "Freedom of Association" which includes the ability to have selective membership.
It's on a state-by-state basis.[1] In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate:[2]
> The California Supreme Court held that entering into an agreement with an online business is not necessary to establish standing under the Unruh Act. Writing for a unanimous court, Justice Liu emphasized that “a person suffers discrimination under the Act when the person presents himself or herself to a business with an intent to use its services but encounters an exclusionary policy or practice that prevents him or her from using those services,” and that “visiting a website with intent to use its services is, for purposes of standing, equivalent to presenting oneself for services at a brick-and-mortar store.”
I'm not sure which Catholic hospitals refuse non-Catholics care. My understanding is they refuse to provide medical treatments, such as abortion, that go against Catholic moral teachings, and this refusal is applied to everyone.
> In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate.
Yes.
But there is a legal distinction between a business website that offers goods/services to the public (like an online store), and a social media platform's moderation decisions or user-created communities.
Prager University v. Google LLC (2022)[1] - the court specifically held that YouTube's content moderation decisions didn't violate the Unruh Act. There's a clear distinction between access to services (where public accommodation laws may apply), and content moderation/curation decisions (protected by Section 230).
That's a good thing. We don't want Meta to be adjudicating defamation. Just look at the mess DMCA takedown notices are. When you tell companies to adjudicate something like copyright or defamation, they are just going to go with an "everybody accused is guilty" standard. (The only exception is large and well known accounts that bring in enough ad revenue to justify human involvement.) This will just turn into another mechanism to force censorship by false reporting.
I create recommender systems for a living. They are powerful and also potentially dangerous. But many people fall into the trap of thinking that just because a computer recommends something it’s objectively good.
It is math but it’s not “just math”. Pharmaceuticals is chemistry but it’s not “just chemistry”. And that is the framework I think we should be thinking about these with. Instagram doesn’t have a God-given right to flood teen girls’ feeds with anorexia-inducing media. The right is granted by people, and can be revoked.
> Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.
Let’s flag for a moment that this is a value judgement. The author is using “can’t” when they really mean “should not”. I also think it is a strawman to suggest anyone is requiring absolute certainty.
When dealing with baby food manufacturers, if their manufacturing process creates poisoned food, we hold the manufacturer liable. Someone might say it’s unfair to require that a food manufacturer guarantee none of their food will be poisoned, and yet we still have a functioning food industry.
> The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”
Sure. But “relevant” is fuzzy and not quantifiable. Computers like things to be quantifiable. So instead we might use a proxy like clicks. Clicks will lead to boosting clickbait content. So maybe you include text match. Now you boost websites that are keyword stuffing.
If you continue down the path of maximal engagement, somewhere down the line you end up with some kind of cesspool of clickbait and ragebait. But choosing to maximize engagement was itself a choice; it's not objectively more relevant.
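To make that concrete, here's a toy sketch (hypothetical field names and made-up weights, not any real platform's ranking code) of how "relevance" ends up being whichever proxy you chose to optimize:

    # Toy illustration: "relevance" is whatever proxy the designer picks.
    # All field names and weights here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_click_rate: float   # proxy: will the user click?
        keyword_match: float          # proxy: does the text match the query?
        predicted_watch_time: float   # proxy: will the user stay engaged?

    def score(item: Item, w_click: float, w_match: float, w_engage: float) -> float:
        # The weights are an editorial choice expressed as arithmetic.
        return (w_click * item.predicted_click_rate
                + w_match * item.keyword_match
                + w_engage * item.predicted_watch_time)

    items = [
        Item("You won't BELIEVE this", 0.9, 0.1, 0.8),
        Item("Plain answer to your query", 0.3, 0.9, 0.3),
    ]

    # Optimize for clicks and the clickbait "wins"; weight text match and it doesn't.
    print(max(items, key=lambda i: score(i, 1.0, 0.0, 0.0)).title)
    print(max(items, key=lambda i: score(i, 0.0, 1.0, 0.0)).title)

Same items, different "most relevant" result, purely because of which proxy got the weight.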
I can't speak to the legality or meaning of section 230, but I can share my somewhat controversial opinions about how I think the internet should operate.
The author points out that the publisher of a book about mushrooms cannot be held responsible for recommending people eat poisonous mushrooms. This is OK I guess because the author _can_ be held responsible.
If the author had been anonymous and the publisher could not accurately identify who should be responsible, then I would like to live in a society where the publisher _was_ held responsible. I don't think that's unreasonable.
Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility. I think big tech needs to be able to accurately identify the author of the harmful material, or take responsibility themselves.
Yeah, it's illegal to shout "fire" in a crowded theater, but if you hook up the fire-alarm to a web-api, the responsibility for the ensuing chaos disappears.
It is not illegal to shout "fire" in a crowded theater. That was from an argument about why people should be jailed for passing out fliers opposing the US draft during WWI.
It essentially is; you’ll get a disorderly conduct charge (the Wikipedia article confirms this). You’ll also be held liable for any damages caused by the ensuing panic.
You can contrive situations where it falls within the bounds of the law (for instance, if you do it as part of a play in a way that everyone understands it’s not real) but if you interpret it the way it’s meant to be interpreted it’s breaking a law pretty much anywhere.
>The author points out that the publisher of a book of mushrooms cannot be held responsible for recommending people eat poisonous mushrooms.
I don't think this is true at all. If a publisher publishes a book that includes information that is not only incorrect, but actually harmful if followed, and represents it as true/safe, then they would be liable too.
>I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.
"We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs."
How odd, that's not the case for defamation, where publishers have a duty to investigate the truth of the statements they publish. What's going on in the ninth circuit?
>> Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility.
I sometimes suggest that the internet should start from strongly verifiable identity. You can strip identity away in cases where it makes sense, but trying to establish identity after the fact is very hard. When people can be identified, it makes it possible to track them down and hold them accountable if they violate laws. People generally behave better when they are not anonymous.
That take has been around since Facebook's real-name policy, which seemed like a good idea at the time. It failed to make people behave any better. Yet absolute idiots keep on thinking that if we just make the internet less free it will solve all of our problems and deliver us into a land of rainbows, unicorns, and gumdrops where it rains chocolate. For god's sake stop doing the work for a dystopia for free!
It isn’t surprising that they get details wrong. It’s the same NY Times that called the constitution “dangerous”(https://www.nytimes.com/2024/08/31/books/review/constitution...), fanning the flames of a kind of uncivil line of thinking that has unfortunately been more and more popular.
But this article itself makes mistakes - it does not seem to understand that the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says. The author makes an illogical claim that there is a category of speech that we want to delegitimize and shield platforms from. This is fundamentally opposed to the principles of free speech. Yes, there is the tricky case of spam. But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.
Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
> the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says.
The First Amendment definitely is not about "free speech principles." It's the first of a short list of absolute restraints on the previous text, which is a description of US government, insisted upon by interests suspicious of federalization under that government. Free speech writ large is good and something to fight for, but the First Amendment is not an ideology, it is law.
The reason (imo) to talk about the First Amendment in terms of these giant social media platforms is simply because of their size, which was encouraged by friendly government acts such as Section 230 in the first place, without which they couldn't scale. Government encouragement and protection of these platforms gives the government some responsibility for them.
> But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.
> Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
This is all well and good, but maybe the place to rehash a debate about whether vaccines work is not in fact, the center town square. I would say that a person who has no idea about any of the underlying science and evidence, but is spreading doubt about it anyway (especially while benefiting financially), is not in fact, 'sharing their ideas', because they don't meet the minimum standard to actually have an opinion on the topic.
Just because they don't "meet the minimum standard" doesn't mean their view or opinion is irrelevant.
There are people spouting crazy ideas in actual public town squares all the time, and they have done so forever. You don't have to go there and you don't have to listen.
> they don't meet the minimum standard to actually have an opinion on the topic
Who should judge that, and why? I think that's what makes free speech a basic right in functional democracies - there is no pre-judging it. Challenging authority and science is important if we want to seek truth.
In the case of vaccines, for example, people were getting censored for discussing side effects. Myocarditis is now officially acknowledged as a (rare) side effect of the mRNA-based COVID vaccines. But not long ago it was labeled a "conspiracy theory" and you would get banned on Twitter or Reddit for mentioning it.
> Myocarditis is now officially acknowledged as a (rare) side effect of the mRNA-based COVID vaccines.
Sure. But the mRNA vaccines were the first ones to be available, and the myocarditis risk from getting COVID-19 while unvaccinated is still multiple times higher than the risk from the mRNA vaccine. So all that telling people about the risk of myocarditis does is dissuade some portion of people from getting the mRNA vaccine... which leads to more cases of myocarditis/deaths.
The main point the author is making is that algorithms represent the opinion of the corporation/website/app maker and opinions are free speech. That is, deciding what to prioritize/hide in your feed is but a mere manifestation of the business's opinion. Algorithms == Opinions.
This is a fine argument. The part where I think they get it wrong is the assumption/argument that a person or corporation can't be held accountable for their opinions. They most certainly can!
In Omnicare, Inc. v. Laborers District Council Construction Industry Pension Fund the Supreme Court found that a company cannot be held liable for its opinion as long as that opinion was "honestly believed". Though:
> the Court also held, however, that liability may result if the company omitted material facts about the company's inquiry into, or knowledge concerning, the statement of opinion, and those facts conflict with what a reasonable investor would understand as the basis of the statement when reading it.
That is, a company can be held liable if it intentionally misled its client (presumably also a customer or user). For that standard to be met, the claimant would have to prove that the company was aware of the facts that proved their opinion wrong and decided to mislead the client anyway.
In the case of a site like Facebook--if Meta was aware that certain information was dangerous/misleading/illegal--it very well could be held liable for what its algorithm recommends. It may seem like a high bar but probably isn't, because Meta is made aware of all sorts of dangerous/misleading information every day but only ever removes/de-prioritizes individual posts, and doesn't bother (as far as I'm aware) with applying the same standard to re-posts of the same information. It must be manually reported and reviewed again, every time (though maybe not? Someone with more inside info might know more).
I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents. Since all software is 100% algorithms (aside from comments, I guess) that would mean all software is simply speech and speech isn't patentable subject matter (though the SCOTUS should've long since come to that conclusion :anger:)
> That is, a company can be held liable if it intentionally misled its client
But only in a case where it has an obligation to tell the truth. The case you cited was about communication to investors, which is one of the very few times that legal obligation exists.
Furthermore, you would be hard pressed to show that an algorithm is intentionally misleading unless you can show that it has been explicitly designed to show a specific piece of information. And recommendation algorithms don't do that. They are designed to show the user what he wants. And if what he wants happens to be misinformation, that's what he will get.
Yeah, that's the motte-and-bailey argument about 230 that makes me more skeptical of tech companies by the day.
motte: "we curate content based on user preferences, and are hands off. We can't be responsible for every piece of (legal) content that is posted on our platform."
bailey: "our algorithm is ad-friendly, and we curate content or punish it based on how happy or mad it makes our advertisers, the real customers for our service. So if advertisers don't like hearing the word "suicide" we'll make creators who want to be paid self-censor."
If you want to be that hands-on about what content is allowed, at that granular a level, I don't see why 230 should protect you.
>I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents.
I'm sure they'd word it very carefully to prevent that, or limit it only to software defined as "social media".
This is the actual reason for s230 existing; without 230, applying editorial discretion could potentially make you liable (e.g. if a periodical uncritically published a libelous claim in its "letters to the editor"), so the idea was to allow some amount of curation/editorial discretion without also making them liable, lest all online forums become cesspools. Aiding monetization through advertising was definitely one reason for doing this.
We can certainly decide that we drew the line in the wrong place (it would be rather surprising if we got it perfectly right that early on), but the line was not drawn blindly.
I'd say the line was drawn badly. I'm not surprised it was drawn in a way to basically make companies the sorts of lazy moderators that are commonly complained about, all while profiting billions from it.
Loopholes would exist, but the spirit of 230 seemed to be that a platform couldn't possibly moderate every uploaded piece of content, so what it left up wasn't bound to represent the platform. Enforcing private rules that represent the platform's will seems to go against that point.
Remember your history for one. Most boards at the time were small volunteer operations or side-jobs for an existing business. They weren't even revenue neutral, let alone positive. Getting another moderator depended upon a friend with free time on their hands who hung around there anyway. You have been downright spoiled by multimillion dollar AI backed moderation systems combined with large casts of minimum wage moderators. And you still think it is never good enough.
Your blatant ignorance of history shines further. Lazy moderation was the starting point. The courts fucked things up, as they are wont to do, by making lazy moderation the only way to protect yourself from liability. There was no goddamned way that they could keep up instantly with all of the posts. Section 230 was basically the only constitutional section of a censorship bill, and it was designed specifically to 'allow moderation' as opposed to 'lazy moderation'. Not having Section 230 means lazy moderation only.
God, people's opinions on Section 230 have been so poisoned by propaganda from absolute morons. The level of knowledge of how moderation works has gone backwards!
You say "spoiled", I say "ruined". Volunteer moderation of the commons is much different from a platform claiming to deny liability for 99% of content but choosing to more or less take the roles of moderation themselves. Especially with the talks of AI
My issue isn't with moderation quality so much as claiming to be a commons but in reality managing it as if you're a feudal lord. My point it that they WANT to try to moderate it all now, removing the point of why 230 shielded them.
And insults are unnecessary. My thoughts are my own from some 15 years of observing the landscape of social media dynamics change. Feel free to disagree but not sneer.
> It being just a suggestion or a recommendation is also important from a legal standpoint: because recommendation algorithms are simply opinions. They are opinions of what content that algorithm thinks is most relevant to you at the time based on what information it has at that time.
Is that really how opinions work in US law? Isn't an opinion something a human has? If Google builds a machine that does something, is that protected as an opinion, even if no human at Google ever looks at it? "Opinion" sounds to me like it's something a human believes, not the approximation a computer generates.
Not a lawyer, but that’s how I understand it. In the Citizens United case the Supreme Court has given corporations free speech rights. Included in that right is the right to state an opinion. And when you develop an algorithm that promotes content you’re implicitly saying is “better” or “more relevant” or however the algorithm ranks it, that’s you providing an opinion via that algorithm.
Not a lawyer either, but I believe regulations on speech, as here, are scrutinized ("strictly") such that regulations are not allowed to discriminate based on content. Moderation is usually based on content. Whether something is fact or not, opinion or not, is content-based.
Citizens United held that criminalizing political films is unconstitutional. Allowing Fahrenheit 9/11 to be advertised and shown while criminalizing Hillary: The Movie is exactly why Citizens United was correct and the illiberal leftists are wrong.
Section 230 discussions seem to always devolve into a bunch of CS people talking way outside their depth about “free speech”, and making universal claims about the First Amendment as if the literal reading of the amendment is all that matters. I wish some actual lawyers would weigh in here
Wrong-- section 230 is whatever some 80-year-old judge wants to interpret it as. Didn't you guys watch the LiveJournal court case? Those guys had to run away to Russia while Twitter, Reddit, Discord, etc. all are completely flooded with pirated content and it's cool.
The question raised by Section 230, and the Communications Decency Act in general, is the same one that plagued the court cases leading up to it: to what degree does the voluntary removal of some content imply an endorsement of the other content?
Some material you can be required to remove by law, or required to suspend pending review under various DMCA safe harbor provisions. But when you remove content in excess of that, where does the liability end?
If you have a cat forum and you remove dog posts, are you also required to remove defamatory posts in general? If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?
I generally dislike section 230, as I feel like blanket immunity is too strong -- I'd prefer that judges and juries make this decision on a case-by-case basis. But the cost of litigating these cases could be prohibitive, especially for small or growing companies. It seems like this would lead to an equilibrium where there was no content moderation at all, or one where you could only act on user reports. Maybe this wouldn't even be so bad.
That is the entire point of 230. You can remove whatever you want for whatever reason, and what you leave up doesn't make you the speaker or endorser of that content.
Taken to the extreme it obviously leaves a window for a crazy abuse where you let people upload individual letters, then you remove letters of your choice to create new sentences and claim the contributors of the letters are the speakers, not the editor.
However, as far as I know, nobody is yet quite accused of that level of moderation to editorialize. Subreddits however ARE similar to that idea. Communities with strict points of view are allowed to purge anything not aligned with their community values. Taking away their protection basically eliminates the community from being able to exist.
> If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?
Twitter was sued for this, because they attached a note to a user's post. But note that this was not a user-generated community note. It was authored directly by Twitter.
Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking on liability for all user-generated content. I think even acting on user reports would still result in liability. The two court cases that established this are here:
> Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content
Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
> It is argued that ... the power to censor, triggered the duty to censor. That is a leap which the Court is not prepared to join in.
And
> For the record, the fear that this Court's finding of publishers status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure.
It is a tough needle to thread, but it leaves the door open to refining the factors and the specific conditions under which a service provider is liable for posted content -- it is neither a shield of immunity nor an absolute assumed liability.
Prodigy specifically advertised its boards to be reliable sources as a way of getting adoption, and put in place policies and procedures to try to achieve that, and, in doing so, put itself in the position of effectively being the publisher of the underlying content.
I personally don't agree with the decision based on the facts of the case, but to me it is not black and white, and I would have preferred to stick with the judicial regime until it became clearer what the parameters of moderation could be without incurring liability.
> Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Because CompuServe, the defendant in Cubby, did zero moderation.
> Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
What gives you the impression that this was because the moderation was "heavy handed"? The description in the Wikipedia page reads:
> The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting content guidelines for users; 2) by enforcing those guidelines with "Board Leaders"; and 3) by utilizing screening software designed to remove offensive language.
Posting civility rules and filtering profanity seems like pretty straightforward content moderation. This isn't "heavy handed moderation" this is extremely basic moderation.
These cases directly motivated Section 230:
> Some federal legislators noticed the contradiction in the two rulings,[4] while Internet enthusiasts found that expecting website operators to accept liability for the speech of third-party users was both untenable and likely to stifle the development of the Internet.[5] Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) co-authored legislation that would resolve the contradictory precedents on liability while enabling websites and platforms to host speech and exercise editorial control to moderate objectionable content without incurring unlimited liability by doing so.
What are you reading in that decision that suggests Prodigy was doing moderation beyond what we'd expect a typical internet forum to do?
This is the relevant section of your link:
> Plaintiffs further rely upon the following additional evidence in support of their claim that PRODIGY is a publisher:
> (A) promulgation of "content guidelines" (the "Guidelines" found at Plaintiffs' Exhibit F) in which, inter alia, users are requested to refrain from posting notes that are "insulting" and are advised that "notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community, will be removed when brought to PRODIGY's attention"; the Guidelines all expressly state that although "Prodigy is committed to open debate and discussion on the bulletin boards,
> (B) use of a software screening program which automatically prescreens all bulletin board postings for offensive language;
> (C) the use of Board Leaders such as Epstien whose duties include enforcement of the Guidelines, according to Jennifer Ambrozek, the Manager of Prodigy's bulletin boards and the person at PRODIGY responsible for supervising the Board Leaders (see Plaintiffs' Exhibit R, Ambrozek deposition transcript, at p. 191); and
> (D) testimony by Epstien as to a tool for Board Leaders known as an "emergency delete function" pursuant to which a Board Leader could remove a note and send a previously prepared message of explanation "ranging from solicitation, bad advice, insulting, wrong topic, off topic, bad taste, etcetera." (Epstien deposition Transcript, p. 52).
So they published content guidelines prohibiting harassment, they filtered out offensive language (presumably slurs, maybe profanity), and the moderation team deleted offending content. This is... bog standard internet forum moderation.
"additional evidence" your quote says. Just before that, we have:
> In one article PRODIGY stated:
> "We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate."
The judge goes on to note that while Prodigy had since ceased its initial policy of direct editorial review of all content, they did not make an official announcement of this, so were still benefitting from the marketing perception that the content was vetted by Prodigy.
I don't know if I would have ruled the same way in that situation, and honestly, it was the NY Supreme Court, which is not even an appellate jurisdiction in NY, and was settled before any appeals could be heard, so it's not even clear that this would have stood.
A situation where each individual case was decided on its merits until a reasonable de facto standard could evolve would, I think, have been more responsible and flexible than a blanket immunity standard, which has led to all sorts of unfortunate dynamics that significantly damage the ability to have an online public square for discourse.
> Note that the issue of Section 230 does not come up even once in this history lesson.
To be perfectly pedantic, the history lesson ends in 1991 and the CDA was passed in 1996.
Also, I am not sure that this author really understands where things stand anymore. He calls the 3rd circuit TikTok ruling "batshit insane" and says it "deliberately ignores precedent". Well, it is entirely possible (likely even?) that it will get overruled. But that ruling is based on a Supreme Court ruling earlier this year (Moody vs. NetChoice). Throw your precedent out the window, the Supreme Court just changed things (or maybe we will find out that they didn't really mean it like that).
The First Amendment protects you from the consequences of your own speech (with something like 17 categories of exceptions).
Section 230 protects you from the consequences of publishing someone else's speech.
Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech. And the 3rd circuit TikTok ruling spends paragraphs discussing this. You can read it for yourself and decide if it makes sense.
I strongly suggest reading the actual 3rd circuit TikTok ruling as well as the actual Moody vs NetChoice Supreme Court ruling, both from the year 2024.
Things have changed. Hold on to your previous belief at your own peril.
Bingo. If you threaten, or promote harm/hate speech, you're not suddenly immune from the consequences of that. It's that the platform is (generally) immune from those same consequences.
It’s not just that. If I say something that people find derogatory, and they do not want to associate with me because of that, that’s THEIR first amendment right.
There can always be social consequences for speech that is legal under the first amendment.
Yes, this is really what I meant. The First Amendment does protect you from government retaliation for critical speech. It does not prevent you from becoming a pariah in your community or more widely for what you say. Just look at any celebrity who lost multi-million dollar sponsorship contracts over a tweet.
It's important to note that this article has its own biases; it's disclosed at the end that the author is on the board of Bluesky. But, largely, it raises very good points.
> This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).
The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.
Realistically, the entire web ecosystem and thus a significant part of our economy rely on Section 230's protections for companies. IMO, regulation that provides users of large social networks with greater transparency and control into what their algorithms are showing to them personally would be a far more fruitful discussion.
Should every human have the right to understand that an algorithm has classified them in a certain way? Should we, as a society, have the right to understand to what extent any social media company is classifying certain people as receptive to content regarding, say, specific phobias, and showing them content that is classified to amplify those phobias? Should we have the right to understand, at least, exactly how a dial turned in a tech office impacts how children learn to see the world?
We can and should iterate on ways to answer these complex questions without throwing the ability for companies to moderate content out the window.
> The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.
It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.
Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up just wouldn't do any moderation. Because even the smallest bit of moderation results in liability for any and all user generated content.
People advocating for removal of section 230 are imagining some alternate world where "bad" curation and moderation results in liability, but "good" moderation and curation does not. Except nobody can articulate a clear distinction of what these are. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation. Just sorting by upvotes minus downvotes is an algorithm too.
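To put that in code (a toy sketch in Python with made-up posts, not any platform's actual ranking): a "no algorithm" chronological feed and an upvotes-minus-downvotes feed are both just sort functions with different keys.

    from datetime import datetime, timezone

    # Hypothetical posts; field names are illustrative, not any real platform's schema.
    posts = [
        {"id": 1, "created": datetime(2024, 7, 1, tzinfo=timezone.utc), "ups": 10, "downs": 2},
        {"id": 2, "created": datetime(2024, 7, 3, tzinfo=timezone.utc), "ups": 3,  "downs": 0},
        {"id": 3, "created": datetime(2024, 7, 2, tzinfo=timezone.utc), "ups": 50, "downs": 40},
    ]

    # "Just show newest first" is still an algorithm: sort by timestamp, descending.
    chronological = sorted(posts, key=lambda p: p["created"], reverse=True)

    # "Sort by upvotes minus downvotes" is another algorithm with a different ranking key.
    by_score = sorted(posts, key=lambda p: p["ups"] - p["downs"], reverse=True)

    print([p["id"] for p in chronological])  # [2, 3, 1]
    print([p["id"] for p in by_score])       # [3, 1, 2]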
I guess most people who think Section 230 is excessive are not advocating for its complete removal, but more for adding some requirements that platforms have to adhere to in order to claim such immunity.
Sure, but I find that few people are able to articulate in any detail what those requirements are and explain how it will lead to a better ecosystem.
A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to give you a particular recommendation. What problem does that solve?
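For illustration, here's roughly what such an export could amount to (a hypothetical sketch; the field names and format are invented, not anything Google or Facebook actually ships):

    import csv, io, random

    # Hypothetical "explainability" export: one row per model feature with its learned
    # weight and the value it took for this particular recommendation.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["feature_id", "weight", "value_for_this_recommendation"])
    for i in range(20_000):
        writer.writerow([f"f_{i:05d}", round(random.uniform(-1, 1), 4), round(random.random(), 4)])

    rows = buf.getvalue().splitlines()
    print(len(rows))  # 20001 lines of opaque numbers -- "transparency" that explains nothing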
Conservatives often want to amend section 230 to limit companies' ability to down-weight and remove conservative content. This directly runs afoul of the First Amendment; the government can't use the threat of liability to coerce companies into hosting speech they don't want to. Not to mention, the companies could just attribute the removal or down-ranking to other factors like inflammatory speech or negative user engagement.
IIUC, most large ad providers allow you to see and tailor what they use in their algorithms (ex. [1]).
I think the big problem with "Should every human have the right to understand that an algorithm has classified them in a certain way" is just that they flat out can't. You cannot design a trash can that every human can understand but a bear can't. There is a level of complexity that your average person won't be able to follow.
Yes but it _appears_ that there are very different algorithms/classifications used for which ads to recommend vs what content to recommend. Opening up this insight/control for content recommendations (instead of just ads) would be a good start.
Yeah there’s some controls, but they are much less granular than what ad-tech exposes. I’ve just never really been sure why Google/meta/etc. choose to expose this information differently for ads vs content.
In fact, the causality is reversed; he's on the board due to his influence on us. Masnick wrote the Protocols not Platforms essay which inspired Dorsey to start the Bluesky project. Then Bluesky became the PBC, we launched, became independent, etc etc, and Masnick wasn't involved until the past year when we invited him to join our board.
I hope you view his writing and POV as independent from his work with us. On matters like 230 you can find archives of very consistent writing from well before joining.
TIL. I'll admit, I'm not an avid reader of TechDirt, a follower of Mike Masnick, or someone who cares that much about Bluesky, since I don't interact a ton with social media.
However, my initial feelings are correct. The NYT article is bemoaning Section 230, and Mike seems to ignore why those feelings are coming up while burying the fact that there might be a conflict of interest here, since I guess BlueSky has algorithms it runs to help users? Again, admitting I know nothing about BlueSky. In any case, I don't think a consistent PoV should bypass disclosure of that.
His arguments about why Section 230 should be left intact are solid and I agree with some of them. I also think he misses the point that letting algorithms go insane with 100% Section 230 protection may not be the best idea. Whether Section 230 can be reformed without destroying the internet, or whether the First Amendment gets involved here, I personally don't know.
BlueSky’s big selling point is no algorithm by default. Its default timeline is users you follow in descending chronological order. You can use an algorithm (called a feed) if you like. They provide a few, but they are also open about their protocol and allow anyone who wants to write an algorithm to do so, and any user who wants to opt in to using it can.
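As a rough conceptual sketch (not the actual AT Protocol schema or Bluesky's code; the field names here are invented), a feed in this model is just a function from candidate posts to an ordered list of post references, which is why third parties can supply their own:

    from datetime import datetime
    from typing import TypedDict

    # Illustrative only: invented field names, not the real AT Protocol data model.
    class Post(TypedDict):
        uri: str
        created: datetime
        author_followed: bool

    def following_chronological(posts: list[Post]) -> list[str]:
        # Default-style timeline: only accounts you follow, newest first, no ranking model.
        followed = [p for p in posts if p["author_followed"]]
        return [p["uri"] for p in sorted(followed, key=lambda p: p["created"], reverse=True)]

    def my_custom_feed(posts: list[Post]) -> list[str]:
        # An opt-in third-party feed can apply whatever filter or ordering it likes,
        # e.g. oldest first across everyone.
        return [p["uri"] for p in sorted(posts, key=lambda p: p["created"])]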
Sure, but with other forms of media, the publisher is liable as well for bonkers content, with certain exceptions.
This is what most of the Section 230 fight is about. Some people, myself included, would say "No, Facebook is selecting content in a way that doesn't involve user choice; they are drifting into publisher territory and thus should not be 100% immune to liability."
EDIT: I forgot, Section 230 has also been used by Online Ad Publishers to hide their lack of moderation with scam ads.
Reading the law, it sure seems to be aimed at protecting services like ISPs, web hosts, CDNs/caches, email hosts, et c, not organizations promoting and amplifying specific content they’ve allowed users to post. It’s never seemed to me that applying 230 to, say, the Facebook feed or maybe even to Google ads is definitely required by or in the spirit of the law, but more like something we just accidentally ended up doing.
I have had reasonable success with YouTube's built-in algorithm-massaging features. It almost always respects "Do Not Recommend Channel", and understands "Not Interested" after a couple of repetitions.
I seem to remember part of TikTok’s allure being that it learned your tastes well from implicit feedback, no active feedback required. We around here probably tend to enjoy the idea of training our own recommenders, but it’s not clear to me that the bulk of users even want to be bothered with a simple thumbs-up/thumbs-down.
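To make that distinction concrete (a hypothetical sketch, not TikTok's or anyone's actual model): implicit feedback means the recommender infers preference from behavior it can already observe, so the user never has to press anything.

    # Hypothetical implicit-feedback signal: infer how much a viewer "liked" a clip from
    # watch behavior alone. Field names and weights are made up for illustration.
    def implicit_preference(watch_seconds: float, clip_seconds: float,
                            rewatched: bool, shared: bool) -> float:
        completion = min(watch_seconds / clip_seconds, 1.0)  # fraction of the clip watched
        score = completion
        if rewatched:
            score += 0.5   # looping a clip is a strong signal
        if shared:
            score += 1.0   # sharing is an even stronger one
        return score

    # Explicit feedback, by contrast, requires the user to act:
    def explicit_preference(thumbs_up: bool | None) -> float:
        return {None: 0.0, True: 1.0, False: -1.0}[thumbs_up]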
If they want to make money, don’t they need you to stick around? As a side effect of making money, then, aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?
That doesn’t work as well for stuff that you wish other people would find noxious but they don’t: neo-Nazis probably would rather see more neo-Nazi things rather than fewer, even if the broader social consensus is (at least on the pages of the New York Times) that those things fall under the uselessly vague header of “toxic.”
Even if the recommenders only filter out some of what you want less of, ditching them entirely means you’ll see more of the portion of the slop that they’re deprioritizing the way you want them to.
> aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?
No, they're incentivized to increase the amount of content advertisers find acceptable.
Plus, YouTube isn't strictly about watching: many people make a living from the platform, and these algorithmic controls are not available to them in any way at all.
> ditching them entirely
Which is why I implied that user oriented control is the factor to care about. Nowhere did I suggest you had to do this, just remove _corporate_ control of that list.
Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1995 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act and was put forth by then Reps. Cox and Wyden as an alternative to the CDA. Then, Congress, in its infinite stupidity, took both bills and merged them.
Surely they're one and the same. By explicitly stating that someone is not liable, you're indirectly defining more precisely the set of people who are liable.
How could you break the law by merely reordering the content a user will see?
That's why I proposed the paradox. At what tipping point does the arranger become the speaker?
Read 230.
https://www.law.cornell.edu/uscode/text/47/230
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
"No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
230 says you can moderate however you like, and what you choose to leave up doesn't become your own speech through endorsement osmosis.
I agree with 230 up to a point, but at some extreme it can be used to misrepresent speech as "someone else's." It's like how the authors of the newspapers wouldn't be the speakers of a ransom note just because they contributed a letter or a word each, and it would be absurd to claim otherwise.
If the arranger is the speaker, restrictions on free speech apply to their newly created context. Accountability applies.
The way the problem is phrased makes it reducible to the sorites problem.
There are better ways of formulating the question that avoid this paradox, such as "what are the necessary and sufficient conditions for editorial intervention to dominate an artifact?"
And for better or worse spreading white nationalist propaganda isn't illegal. It's not good, but we have the first amendment in this country because we don't want the government to decide who can speak.
That first BBC link loaded the article for me and then threw a 404... what is that about?
they broke something. very odd.
i'd conjecture they had some kind of page move and archiving process that broke it.
I’m sure that the difficulty the New York Times editors have in summarizing laws related to online publishing shouldn’t make you wonder what glaring mistakes are in their other reports about topics the newspaper wouldn’t be expected to know as deeply.
Related to that, there's the "Gell-Mann Amnesia" effect[1], where an expert can see numerous mistakes in his area of expertise being reported in the news, but somehow takes the rest as being accurate.
[1]: https://www.epsilontheory.com/gell-mann-amnesia/
Murray Gell-Mann - "The quality of information" lecture on youtube from 1997 is also worth a listen.
Murray basically points out how much communication in general is really just exchanging errors for no reason.
I suspect this is where the idea of "Gell-Mann Amnesia" came from.
I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.
~~As an example, there are subreddits like /r/therewasanattempt or /r/interestingasfuck that ban users that post in /r/judaism or /r/israel (there used to be a subreddit /r/bannedforbeingjewish that tracked this but that was banned by reddit admins). This isn't the First Amendment, since it's just a ban based on identity instead of posting content.
According to the US legal system, discrimination based on religion is wrong. I should be able to fix this by complaining to reddit and creating actual knowledge of discrimination. In practice, because there is no contact mechanism for reddit, it's impossible for me to create actual knowledge.~~
edit: I still believe the above behaviour is morally wrong, but it isn't an accurate example of the actual knowledge standard as others have pointed out. I'm leaving it here for context on the follow-up comments.
The TechDirt article doesn't engage with this. It asserts that:
>> Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.
> None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable.
However, if you try to use Facebook's defamation form[1] and list the United States as an option:
> Facebook is not in a position to adjudicate the truth or falsity of statements made by third parties, and consistent with Section 230(c) of the Communications Decency Act, is not responsible for those statements. As a result, we are not liable to act on the content you want to report. If you believe content on Facebook violates our Community Standards (e.g., bullying, harassment, hate speech), please visit the Help Center to learn more about how to report it to us.
[1]https://www.facebook.com/help/contact/430253071144967
> This isn't the First Amendment, since it's just a ban based on identity instead of posting content.
This is not why the First Amendment does not apply. The First Amendment does not apply to private entities. It restricts the government…
> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
At least from a First Amendment standpoint, Reddit can do what it feels like to establish a religion, ban all religious discussion, tell reporters to go to hell, start a news feed, etc. There are other laws they do need to deal with of course.
More commentary on this here:
https://www.cnn.com/2021/01/12/politics/first-amendment-expl...
I agree with you. If Reddit wanted to ban certain types of posts, they'd be entitled to under the First Amendment.
They're not entitled to discriminate against certain races, ethnicities, or religions though.
https://harvardlawreview.org/print/vol-133/white-v-square/
They most certainly are allowed to discriminate based on religion. The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion:
https://www.eeoc.gov/religious-discrimination
There's no law that says you can't, say, run a website that only allows atheists to participate, or non-Jews, or non-Christians, or non-Muslims, or whatever religion or religious classification you want.
Discriminating based on race or ethnicity is a different topic entirely. You can choose your religion (it's like, just your opinion, man) but you can't choose your race/ethnicity. There are much more complicated laws in that area.
> The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion.
You've overlooked prohibitions on religious discrimination in public accommodations [1].
[1] https://www.law.cornell.edu/uscode/text/42/2000a
Reddit is not a place of public accommodation. It's a private web company. And furthermore, Reddit isn't the one doing the banning. It's Reddit users that are blocking members from their subreddit. The rest of Reddit is free to be browsed by said users.
If I create a Google chat group, and I only invite my church members to said group is Google violating anti-discrimination laws? No.
~~You're~~ the commenter 3 layers above is trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself.
> You're trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself
You are failing to take into account context. My comment had nothing whatsoever to do with anything about Reddit.
My comment is responding to a comment that asserted that the only laws regarding religious discrimination are for employers potentially discriminating against employees.
I provided an example of a law prohibiting religious discrimination in something other than employment.
There absolutely is such a law, the Civil Rights Act of 1964, specifically Title II. If you provide public accommodations you may not discriminate based on religion. You can't stick a "No Muslims allowed" sign on your restaurant because it's open to the public.
Reddit is publicly available even if they require registration, and neither Reddit nor subreddit mods may legally discriminate based on anything covered under the CRA.
https://www.justice.gov/crt/title-ii-civil-rights-act-public...
The CRA only covers physical spaces (places of "public accommodation"). Not services (like Reddit).
Websites are public accommodations; this has been litigated over and over and over again. That's why you can sue the owner of a website over ADA violations if they offer services or products to the public.
I don't know why you keep doubling down when you're verifiably, provably wrong. I understand you may not want this to be the case, but it is.
My understanding is that businesses cannot deny service based on protected class. E.g. Reddit couldn't put "Catholics are barred from using Reddit" in their TOS.
But subreddit bans are done by users of Reddit, not by Reddit itself. If someone on Xbox Live mutes the chat of Catholics and kicks them from the lobbies they're hosting, you can't go complain to Microsoft because these are the actions of a user not the company.
But Reddit isn't discriminating against certain races, ethnicities, or religions. Individual subreddit admins are discriminating on the basis of identity. This is no different than creating a Discord server or IRC chat channel where you only let in your church friends. Reddit isn't refusing service on the basis of protected class. Reddit users are doing so.
The issue is that individual subreddit moderators each control hundreds of subreddits with millions of users. If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.
If one friend group is racist and you can't eat dinner at their house, that's qualitatively different than systemic discrimination by the restaurant industry.
In this case, Reddit's platform has enough systemic discrimination that you have to choose between full participation in front-page posts or participation in Jewish communities.
If you're talking about what your opinion of is morally right, or a healthy social media ecosystem I'm not really disagreeing with you - I don't think it's good for the subreddit mods to do this. But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws. This is incorrect.
> If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.
The impact is not what matters. What matters is that the banning is done by users, not by the company. Non-discrimination laws prohibit businesses from denying business to customers on the basis of protected class. It doesn't dictate what users of internet platforms do with their block button.
> that's qualitatively different than systemic discrimination by the restaurant industry.
Right, but a restaurant refusing a customer is a business denying a customer. If Discord or Reddit put "We don't do business with X race" in their ToS that's direct discrimination by Reddit. If subreddit moderators ban people because they do or don't belong to a protected class, that's an action taken by users. You're free to create your own /r/interestingasfuckforall that doesn't discriminate.
A bar can't turn away a customer for being Catholic. If a Catholic sits down at the bar, and the people next to him say "I don't want to sit next to a Catholic" and change seats to move away from the Catholic patron, that's their prerogative. Subreddit bans are analogous to the latter.
> But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws.
I edited my original comment because others have pointed out it doesn't run afoul of antidiscrimination laws. You acknowledge that this behaviour is morally wrong, but don't say whether or not a platform should have a responsibility to prevent this behaviour. I believe they should
While the mechanism by which systemic discrimination occurs is different because it's individual users instead of the business, the impact is the same as businesses/public spaces discriminating against individuals.
This is because common social spaces are barred off to people of certain ethnicities and that means they can't fully engage in civic life.
Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their services. Attempts to sue the stores fail, because the store no longer exists by the time you gather the information necessary to sue it. While the mall itself does not promote racial discrimination, it is impossible for visible minorities to shop at the mall.
Should the mall have an obligation to prevent discrimination by its tenants?
I would say "yes". In your bar example, it is still possible for a Catholic to get a drink at the bar. In the mall example, it is impossible for a minority to shop.
On Reddit, if it is effectively impossible for someone to be openly Jewish or Israeli because they are banned on sight from most subreddits, that should be illegal.
> Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their service
Again, this is already illegal because the stores are businesses and they can't deny service on the basis of protected class.
The issue at hand is that the government cannot compel individual users to interact with other users. How would this work? You try to mute someone on Xbox Live and you get a popup, "Sorry, you've muted too many Catholics in the last month, you can't mute this player." Likewise, would Reddit force the moderators to allow posts and comments from previously banned users? And what would prevent their content from just getting downvoted to oblivion and being automatically hidden anyways?
It's a similar situation because the laws aren't enforceable against the pop-up stores, in the same way you can't sue an anonymous subreddit moderator for being discriminatory.
> Likewise, would Reddit force the moderators to allow posts and comments from previously banned users?
In Reddit's case, moderators are using tools that automatically ban users that have activity in specific subreddits. It's not like it's hidden bias, the bans are obviously because of a person's religion.
Correct, and what's to keep users of the subreddit from tagging posters who've posted in the Israel subreddit and down voting them until they're hidden? There's no effective way to force users to interact with other users they don't want to interact with.
Reddit can (and sometimes does) control who can be, and who is a moderator.
So your distinction doesn't exist: this is effectively the business, Reddit, engaging in discrimination.
Plus when I read Reddit I'm not interacting with a moderator, I'm interacting with a business.
An actual analogous example would be if individual people use a tool to block anyone Jewish from seeing their comments and replying to them. It would be pretty racist of course, but not illegal. A subreddit though, is not the personal playing area of a moderator.
Reddit bans subreddits whose moderators do not remove content that breaks the ToS. They do not require that communities refrain from banning certain people or content. Basically, you can only get sanctioned by reddit as a moderator for not banning and removing content from your subreddit.
> A subreddit though, is not the personal playing area of a moderator.
Oh, yes. Yes it is.
Many of the better communities have well organized moderation teams. But plenty do not. And the worst offenders display the blunt reality that a subreddit is indeed the play thing of the top mod.
None of this has to do with the First Amendment, including the legal review you linked to.
The Unruh Civil Rights Act discussed there does not extend the First Amendment: the First Amendment does not restrict the actions of businesses, and the Unruh Act does not restrict the actions of Congress or other legislatures.
Freedom of Speech in the Amendment also has specific meaning and does not fully extend to businesses.
https://constitution.findlaw.com/amendment1/freedom-of-speec...
People need to understand that the only entity that can violate the constitution is the government. Citizens and companies are not restricted in their actions by the constitution, only the law.
False. See generally the state actor doctrine. Courts have ruled extensively in the context of criminal investigations and FedEx; railroads and drug testing; NCMEC and CSAM hashes; and informant hackers and criminal prosecution.
Which is just the government acting through proxies
> I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.
This is misleading. It seems like you're predicating your entire argument on the idea that there is a version of Section 230 that would require platforms to act on user reports of discrimination. But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
Section 230 immunity doesn't depend on "actual knowledge." The law specifically provides immunity regardless of whether a platform has knowledge of illegal content. Providers can't be treated as publishers of third-party content, period.
It's not that "'actual knowledge' no longer matters," it's that it never mattered. Anti-discrimination law is usually for things like public accommodations, not online forums.
My point is that platforms should have more of a responsibility when they currently have none.
> But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
I understand that this is the purpose of the law, and I disagree with it. Section 230 has led to large platforms outsourcing most of their content to users because it shields the platform from legal liability. A user can post illegal content, engage in discrimination, harassment, etc.
> Anti-discrimination law is usually for things like public accommodations, not online forums.
Anti-discrimination law should be applicable to online forums. The average adult spends more than 2 hours a day on social media. Social media is now one of our main public accommodations.
If one of the most-used websites in the USA has an unofficial policy of discriminating against Jewish people that isn't covered by the current laws as that policy is enforced solely by users, that means the law isn't achieving its objectives of preventing discrimination.
> platforms should
> Anti-discrimination law should
I don't disagree with you. But you must distinguish between what the law does, and what it should do, in your view. Otherwise you are misleading people.
You're correct (as you pointed out elsewhere), so I edited my original comment.
I'm not sure where you learned that the US legal system is against religious discrimination in private organizations, but it's not strictly true.
Many religious organizations in the US openly discriminate against people who are not their religion, from christian charities and businesses requiring staff to sign contracts stating that they agree with/are members of the religion, to catholic hospitals openly discriminating against non-catholics based on their own "religious freedom to deny care". One way they can do this is through an exemption to discrimination law called a bona fide occupational qualification, which suggests that only certain people can do the job.
In a more broad sense, any private organization with limited membership (signing up vs allowing everyone) can discriminate. For example some country clubs discriminate based on race to this day. One reason for this is that the constitution guarantees "Freedom of Association" which includes the ability to have selective membership.
It's on a state-by-state basis.[1] In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate:[2]
> The California Supreme Court held that entering into an agreement with an online business is not necessary to establish standing under the Unruh Act. Writing for a unanimous court, Justice Liu emphasized that “a person suffers discrimination under the Act when the person presents himself or herself to a business with an intent to use its services but encounters an exclusionary policy or practice that prevents him or her from using those services,” and that “visiting a website with intent to use its services is, for purposes of standing, equivalent to presenting oneself for services at a brick-and-mortar store.”
I'm not sure what Catholic hospitals refuse non-Catholics care. My understanding is they refuse to provide medical treatments such as abortion that go against Catholic moral teachings, and this refusal is applied to everyone.
[1] https://lawyerscommittee.org/wp-content/uploads/2019/12/Onli...
[2] https://harvardlawreview.org/print/vol-133/white-v-square/
> In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate.
Yes.
But there is a legal distinction between a business website that offers goods/services to the public (like an online store), and a social media platform's moderation decisions or user-created communities.
Prager University v. Google LLC (2022)[1] - the court specifically held that YouTube's content moderation decisions didn't violate the Unruh Act. There's a clear distinction between access to services (where public accommodation laws may apply), and content moderation/curation decisions (protected by Section 230).
[1] https://law.justia.com/cases/california/court-of-appeal/2022...
You are right, thank you for the citation.
edit: there's another comment chain you might be interested in about whether the federal civil rights act is applicable.
Thanks :-)
That's a good thing. We don't want Meta to be adjudicating defamation. Just look at the mess DMCA takedown notices are. When you tell companies to adjudicate something like copyright or defamation, they are just going to go with an "everybody accused is guilty" standard. (The only exception is large and well known accounts that bring in enough ad revenue to justify human involvement.) This will just turn into another mechanism to force censorship by false reporting.
Mike Masnick is a treasure. He posts on Bluesky here: https://bsky.app/profile/mmasnick.bsky.social
I just found out that he replaced Dorsey on the board of Bluesky as well.
https://bsky.app/profile/mmasnick.bsky.social/post/3l7lnkz6f...
I create recommender systems for a living. They are powerful and also potentially dangerous. But many people fall into the trap of thinking that just because a computer recommends something it’s objectively good.
It is math but it’s not “just math”. Pharmaceuticals is chemistry but it’s not “just chemistry”. And that is the framework I think we should be thinking about these with. Instagram doesn’t have a God-given right to flood teen girls’ feeds with anorexia-inducing media. The right is granted by people, and can be revoked.
> Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.
Let’s flag for a moment that this is a value judgement. The author is using “can’t” when they really mean “should not”. I also think it is a strawman to suggest anyone is requiring absolute certainty.
When dealing with baby food manufacturers, if their manufacturing process creates poisoned food, we hold the manufacturer liable. Someone might say it’s unfair to require that a food manufacturer guarantee none of their food will be poisoned, and yet we still have a functioning food industry.
> The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”
Sure. But “relevant” is fuzzy and not quantifiable. Computers like things to be quantifiable. So instead we might use a proxy like clicks. Clicks will lead to boosting clickbait content. So maybe you include text match. Now you boost websites that are keyword stuffing.
If you continue down the path of maximal engagement somewhere down the line you end up with some kind of cesspool of clickbait and ragebait. But choosing to maximize engagement was itself a choice, it’s not objectively more relevant.
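A toy sketch of that choice (every name, weight, and number below is invented): once "relevance" is operationalized as a weighted mix of proxy signals, the weights themselves are the editorial decision.

    # Minimal sketch: "relevance" becomes whatever proxies we choose to combine, and the
    # weights are a value judgment, not an objective fact. All values are hypothetical.
    def ranking_score(item, query_terms, w_click=0.7, w_text=0.3):
        # Proxy 1: predicted probability of a click (rewards clickbait headlines).
        p_click = item["predicted_click_rate"]
        # Proxy 2: crude keyword overlap (rewards keyword stuffing).
        words = item["text"].lower().split()
        text_match = sum(words.count(t) for t in query_terms) / max(len(words), 1)
        return w_click * p_click + w_text * text_match

    candidates = [
        {"title": "calm explainer", "predicted_click_rate": 0.02,
         "text": "a measured look at section 230 and moderation"},
        {"title": "rage bait",      "predicted_click_rate": 0.30,
         "text": "section 230 section 230 section 230 you won't BELIEVE this"},
    ]
    ranked = sorted(candidates, key=lambda c: ranking_score(c, ["section", "230"]), reverse=True)
    print([c["title"] for c in ranked])  # the engagement-weighted objective puts the bait first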
I can't speak to the legality or meaning of section 230, but I can share my somewhat controversial opinions about how I think the internet should operate.
The author points out that the publisher of a book of mushrooms cannot be held responsible for recommending people eat poisonous mushrooms. This is OK I guess because the author _can_ be held responsible.
If the author had been anonymous and the publisher could not accurately identify who should be responsible, then I would like to live in a society where the publisher _was_ held responsible. I don't think that's unreasonable.
Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility. I think big tech needs to be able to accurately identify the author of the harmful material, or take responsibility themselves.
Yeah, it's illegal to shout "fire" in a crowded theater, but if you hook up the fire-alarm to a web-api, the responsibility for the ensuing chaos disappears.
It is not illegal to shout "fire" in a crowded theater. That was from an argument about why people should be jailed for passing out fliers opposing the US draft during WWI.
https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...
It essentially is, you’ll get a disorderly conduct charge (the Wikipedia article confirms this). You’ll also be held liable for any damages caused by the ensuing panic.
You can contrive situations where it falls within the bounds of the law (for instance, if you do it as part of a play in a way that everyone understands it’s not real) but if you interpret it the way it’s meant to be interpreted it’s breaking a law pretty much anywhere.
>The author points out that the publisher of a book of mushrooms cannot be held responsible for recommending people eat poisonous mushrooms.
I don't think this is true at all. If a publisher publishes a book that includes information that is not only incorrect, but actually harmful if followed, and represents it as true/safe, then they would be liable too.
From the article.
>I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.
"We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs."
How odd, that's not the case for defamation, where publishers have a duty to investigate the truth of the statements they publish. What's going on in the ninth circuit?
>> Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility.
I sometimes suggest that the internet should start from strongly verifiable identity. You can strip identity in cases where it makes sense, but trying to establish identity after the fact is very hard. When people can be identified, it makes it possible to track them down and hold them accountable if they violate laws. People will generally behave better when they are not anonymous.
“We should dox everyone on the internet” is definitely not a take I expected.
That take has been around since Facebook's real name policy, which seemed like a good idea at the time. It failed to make people behave any better. Yet absolute idiots keep on thinking that if we just make the internet less free it will solve all of our problems and deliver us into a land of rainbows, unicorns, and gumdrops where it rains chocolate. For god's sake, stop doing the work for a dystopia for free!
The original title of the article is:
> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment
Four chars too long, though.
And it's "Section 230", if you were not parsing the integer value in the title.
They know, they are the one who submitted the article to HN. I believe they were posting the comment just to denote that they had to change it.
The following preserves all info under the title limit: NYT Gets 230 Wrong Again; Misrepresenting History, Law, And The 1st Amendment
> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment
I get that. It was more directed at the dork that downvoted the comment, to let them know why OP might have elaborated.
It isn’t surprising that they get details wrong. It’s the same NY Times that called the constitution “dangerous”(https://www.nytimes.com/2024/08/31/books/review/constitution...), fanning the flames of a kind of uncivil line of thinking that has unfortunately been more and more popular.
But this article itself makes mistakes - it does not seem to understand that the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says. The author makes an illogical claim that there is a category of speech that we want to illegitimize and shield platforms from. This is fundamentally opposed to the principles of free speech. Yes there is the tricky case of spam. But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.
Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
> the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says.
The First Amendment definitely is not about "free speech principles." It's the first of a short list of absolute restraints on the previous text, which is a description of US government, insisted upon by interests suspicious of federalization under that government. Free speech writ large is good and something to fight for, but the First Amendment is not an ideology, it is law.
The reason (imo) to talk about the First Amendment in terms of these giant social media platforms is simply because of their size, which was encouraged by friendly government acts such as Section 230 in the first place, without which they couldn't scale. Government encouragement and protection of these platforms gives the government some responsibility for them.
> But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.
> Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
This is all well and good, but maybe the place to rehash a debate about whether vaccines work is not in fact, the center town square. I would say that a person who has no idea about any of the underlying science and evidence, but is spreading doubt about it anyway (especially while benefiting financially), is not in fact, 'sharing their ideas', because they don't meet the minimum standard to actually have an opinion on the topic.
Everyone is equal.
Just because they don't "meet the minimum standard" doesn't mean their view or opinion is irrelevant.
There are people spouting crazy ideas in actual public town squares all the time, and they have done so forever. You don't have to go there and you don't have to listen.
> they don't meet the minimum standard to actually have an opinion on the topic
Who should judge that and why? I think that’s what makes free speech a basic right in functional democracies - there is no pre judging it. Challenging authority and science is important if we want to seek truth.
In the case of vaccines, for example, people were getting censored for discussing side effects. Myocarditis is now officially acknowledged as a (rare) side effect of the mRNA-based COVID vaccines. But not long ago it was labeled as a “conspiracy theory” and you would get banned on Twitter or Reddit for mentioning it.
> Myocarditis is now officially acknowledged as a (rare) side effect of the mRNA-based COVID vaccines.
Sure. But the mRNA vaccines were the first ones to be available, and the myocarditis risk from getting COVID-19 while unvaccinated is still multiple times higher than the risk from the mRNA vaccine. So, all that telling people about the risk of myocarditis does is dissuade some portion of people from getting the mRNA vaccine... which leads to more cases of myocarditis/deaths.
The main point the author is making is that algorithms represent the opinion of the corporation/website/app maker and opinions are free speech. That is, deciding what to prioritize/hide in your feed is but a mere manifestation of the business's opinion. Algorithms == Opinions.
This is a fine argument. The part where I think they get it wrong is the assumption/argument that a person or corporation can't be held accountable for their opinions. They most certainly can!
In Omnicare, Inc. v. Laborers District Council Construction Industry Pension Fund, the Supreme Court found that a company cannot be held liable for its opinion as long as that opinion was "honestly believed" (see: https://www.jonesday.com/en/insights/2015/03/supreme-court-c...).
That said, a company can be held liable if it intentionally misled its client (presumably also a customer or user). For that standard to be met, the claimant would have to prove that the company was aware of the facts that proved its opinion wrong and decided to mislead the client anyway.
In the case of a site like Facebook--if Meta was aware that certain information was dangerous/misleading/illegal--it very well could be held liable for what its algorithm recommends. It may seem like a high bar, but it probably isn't, because Meta is made aware of all sorts of dangerous/misleading information every day but only ever removes/de-prioritizes individual posts and doesn't bother (as far as I'm aware) with applying the same standard to re-posts of the same information. It must be manually reported and reviewed again, every time (though maybe not? Someone with more inside info might know more).
I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents. Since all software is 100% algorithms (aside from comments, I guess) that would mean all software is simply speech and speech isn't patentable subject matter (though the SCOTUS should've long since come to that conclusion :anger:)
> That is, a company can be held liable if it intentionally mislead its client
But only in a case where it has an obligation to tell the truth. The case you cited was about communication to investors, which is one of the very few times that legal obligation exists.
Furthermore, you would be hard pressed to show that an algorithm is intentionally misleading unless you can show that it has been explicitly designed to show a specific piece of information. And recommendation algorithms don't do that. They are designed to show the user what he wants. And if what he wants happens to be misinformation, that's what he will get.
Yeah, that's the motte-and-bailey argument about 230 that makes me more skeptical of tech companies by the day.
motte: "we curate content based on user preferences, and are hands off. We can't be responsible for every piece of (legal) content that is posted on our platform."
bailey: "our algorithm is ad-friendly, and we curate content or punish it based on how happy or mad it makes our advertisers, the real customers for our service. So if advertisers don't like hearing the word 'suicide', we'll make creators who want to be paid self-censor."
If you want to take a hands-on approach to what content is allowed at that granular a level, I don't see why 230 should protect you.
>I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents.
I'm sure they'd word it very carefully to prevent that, or limit it only to software defined as "social media".
This is the actual reason for s230 existing; without 230, applying editorial discretion could potentially make you liable (e.g. if a periodical uncritically published a libelous claim in its "letters to the editor"), so the idea was to allow some amount of curation/editorial discretion without also making them liable, lest all online forums become cesspools. Aiding monetization through advertising was definitely one reason for doing this.
We can certainly decide that we drew the line in the wrong place (it would be rather surprising if we got it perfectly right that early on), but the line was not drawn blindly.
I'd say the line was drawn badly. I'm not surprised it was drawn in a way to basically make companies the sorts of lazy moderators that are commonly complained about, all while profiting billions from it.
Loopholes would exist, but the spirit of 230 seemed to be that moderation of every uploaded piece of content was bound not to represent the platform. Enforcing private rules that represent the platform's will seems to go against that point.
Remember your history for one. Most boards at the time were small volunteer operations or side-jobs for an existing business. They weren't even revenue neutral, let alone positive. Getting another moderator depended upon a friend with free time on their hands who hung around there anyway. You have been downright spoiled by multimillion dollar AI backed moderation systems combined with large casts of minimum wage moderators. And you still think it is never good enough.
Your blatant ignorance of history shines further. Lazy moderation was the starting point. The courts fucked things up, as they are wont to do, by making lazy moderation the only way to protect yourself from liability. There was no goddamned way that they could keep up instantly with all of the posts. Section 230 was basically the only constitutional section of a censorship bill, and it was designed specifically to 'allow moderation' as opposed to forcing the 'lazy moderator' posture. Not having Section 230 means lazy moderation only.
God, people's opinions on Section 230 have been so poisoned by propaganda from absolute morons. The level of knowledge of how moderation works has gone backwards!
You say "spoiled", I say "ruined". Volunteer moderation of the commons is much different from a platform claiming to deny liability for 99% of content while choosing to more or less take on the role of moderation themselves. Especially with the talk of AI.
My issue isn't with moderation quality so much as with claiming to be a commons but in reality managing it as if you're a feudal lord. My point is that they WANT to try to moderate it all now, removing the point of why 230 shielded them.
And insults are unnecessary. My thoughts are my own from some 15 years of observing the landscape of social media dynamics change. Feel free to disagree but not sneer.
> It being just a suggestion or a recommendation is also important from a legal standpoint: because recommendation algorithms are simply opinions. They are opinions of what content that algorithm thinks is most relevant to you at the time based on what information it has at that time.
Is that really how opinions work in US law? Isn’t an opinion something a human has? If google builds a machine that does something, is that protected as an opinion, even if no human at google ever looks at it? „Opinion“ sounds to me like it’s something a human believes, not the approximation a computer generates.
Not a lawyer, but that’s how I understand it. In the Citizens United case the Supreme Court has given corporations free speech rights. Included in that right is the right to state an opinion. And when you develop an algorithm that promotes content you’re implicitly saying is “better” or “more relevant” or however the algorithm ranks it, that’s you providing an opinion via that algorithm.
Not a lawyer either, but I believe regulations on speech, as here, are scrutinized ("strictly") such that regulations are not allowed to discriminate based on content. Moderation is usually based on content. Whether something is fact or not, opinion or not, is content-based.
Citizens United held that criminalizing political films is illegal. Allowing Fahrenheit 9/11 to be advertised and performed but criminalizing Hillary: The Movie is exactly why Citizens United was correct and the illiberal leftists are wrong.
Section 230 discussions seem to always devolve into a bunch of CS people talking way outside their depth about “free speech”, and making universal claims about the First Amendment as if the literal reading of the amendment is all that matters. I wish some actual lawyers would weigh in here
Wrong-- section 230 is whatever some 80-year-old judge wants to interpret it as. Didn't you guys watch the LiveJournal court case? Those guys had to run away to Russia while Twitter, Reddit, Discord, etc. all are completely flooded with pirated content and it's cool.
Maybe we can add " Section " before 230 in the title? I honestly thought this was a numerical thing.
The question that Section 230, and the Communications Decency Act in general, addresses, and the same one that plagued the court cases leading up to it, is to what degree the voluntary removal of some content implies an endorsement of other content.
Some material you can be required to remove by law, or required to suspend pending review under various DMCA safe harbor provisions. But when you remove content in excess of that, where does the liability end?
If you have a cat forum and you remove dog posts, are you also required to remove defamatory posts in general? If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?
I generally dislike section 230, as I feel like blanket immunity is too strong -- I'd prefer that judges and juries make this decision on a case-by-case basis. But the cost of litigating these cases could be prohibitive, especially for small or growing companies. It seems like this would lead to an equilibrium where there was no content moderation at all, or one where you could only act on user reports. Maybe this wouldn't even be so bad.
That is the entire point of 230. You can remove whatever you want for whatever reason, and what you leave up doesn't make you the speaker or endorser of that content.
Taken to the extreme it obviously leaves a window for a crazy abuse where you let people upload individual letters, then you remove letters of your choice to create new sentences and claim the contributors of the letters are the speakers, not the editor.
However, as far as I know, nobody has yet been accused of that level of moderation-as-editorializing. Subreddits, however, ARE similar to that idea. Communities with strict points of view are allowed to purge anything not aligned with their community values. Taking away their protection basically eliminates those communities' ability to exist.
> If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?
Twitter was sued for this, because they attached a note to a user's post. But note that this was not a user-generated community note. It was authored directly by Twitter.
Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content. I think even acting on user reports would still result in liability. The two court cases that established this are here:
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
> Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content
Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
> It is argued that ... the power to censor, triggered the duty to censor. That is a leap which the Court is not prepared to join in.
And
> For the record, the fear that this Court's finding of publishers status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure
It is a tough needle to thread, but it leaves the door open to refining the factors -- the specific conditions under which a service provider is liable for posted content -- so it is neither a shield of immunity nor an absolute assumed liability.
Prodigy specifically advertised its boards to be reliable sources as a way of getting adoption, and put in place policies and procedures to try to achieve that, and, in doing so, put itself in the position of effectively being the publisher of the underlying content.
I personally don't agree with the decision based on the facts of the case, but to me it is not black and white, and I would have preferred to stick with the judicial regime until it became clearer how much moderation could be done without incurring liability.
> Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Because Cubby did zero moderation.
> Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
What gives you the impression that this was because the moderation was "heavy handed"? The description in the Wikipedia page reads:
> The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting content guidelines for users; 2) by enforcing those guidelines with "Board Leaders"; and 3) by utilizing screening software designed to remove offensive language.
Posting civility rules and filtering profanity seems like pretty straightforward content moderation. This isn't "heavy handed moderation" this is extremely basic moderation.
These cases directly motivated Section 230:
> Some federal legislators noticed the contradiction in the two rulings,[4] while Internet enthusiasts found that expecting website operators to accept liability for the speech of third-party users was both untenable and likely to stifle the development of the Internet.[5] Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) co-authored legislation that would resolve the contradictory precedents on liability while enabling websites and platforms to host speech and exercise editorial control to moderate objectionable content without incurring unlimited liability by doing so.
Wikipedia's characterization is too broad. You can read the decision here [1] and decide for yourself.
[1] https://www.dmlp.org/sites/citmedialaw.org/files/1995-05-24-...
What are you reading in that decision that suggests Prodigy was doing moderation beyond what we'd expect a typical internet forum to do?
This is the relevant section of your link:
> Plaintiffs further rely upon the following additional evidence in support of their claim that PRODIGY is a publisher:
> (A)promulgation of "content guidelines" (the "Guidelines" found at Plaintiffs' Exhibit F) in which, inter alia, users are requested to refrain from posting notes that are "insulting" and are advised that "notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community, will be removed when brought to PRODIGY's attention"; the Guidelines all expressly state that although "Prodigy is committed to open debate and discussion on the bulletin boards,
> (B) use of a software screening program which automatically prescreens all bulletin board postings for offensive language;
> (C) the use of Board Leaders such as Epstien whose duties include enforcement of the Guidelines, according to Jennifer Ambrozek, the Manager of Prodigy's bulletin boards and the person at PRODIGY responsible for supervising the Board Leaders (see Plaintiffs' Exhibit R, Ambrozek deposition transcript, at p. 191); and
> (D) testimony by Epstien as to a tool for Board Leaders known as an "emergency delete function" pursuant to which a Board Leader could remove a note and send a previously prepared message of explanation "ranging from solicitation, bad advice, insulting, wrong topic, off topic, bad taste, etcetera." (Epstien deposition Transcript, p. 52).
So they published content guidelines prohibiting harassment, they filtered out offensive language (presumably slurs, maybe profanity), and the moderation team deleted offending content. This is... bog-standard internet forum moderation.
"additional evidence" your quote says. Just before that, we have:
> In one article PRODIGY stated:
> "We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate."
The judge goes on to note that while Prodigy had since ceased its initial policy of direct editorial review of all content, they did not make an official announcement of this, so were still benefitting from the marketing perception that the content was vetted by Prodigy.
I don't know if I would have ruled the same way in that situation, and honestly, it was the NY Supreme Court, which is not even an appellate jurisdiction in NY, and was settled before any appeals could be heard, so it's not even clear that this would have stood.
A situation where each individual case was decided on its merits, until a reasonable de facto standard could evolve, would I think have been more responsible and flexible than a blanket immunity standard, which has led to all sorts of unfortunate dynamics that significantly damage the ability to have an online public square for discourse.
> Note that the issue of Section 230 does not come up even once in this history lesson.
To be perfectly pedantic, the history lesson ends in 1991 and the CDA was passed in 1996.
Also, I am not sure that this author really understands where things stand anymore. He calls the 3rd Circuit TikTok ruling "batshit insane" and says it "deliberately ignores precedent". Well, it is entirely possible (likely, even?) that it will get overruled. But that ruling is based on a Supreme Court decision from earlier this year (Moody v. NetChoice). Throw your precedent out the window; the Supreme Court just changed things (or maybe we will find out that they didn't really mean it like that).
The First Amendment protects you from the consequences of your own speech (with something like 17 categories of exceptions).
Section 230 protects you from the consequences of publishing someone else's speech.
Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech. And the 3rd circuit TikTok ruling spends paragraphs discussing this. You can read it for yourself and decide if it makes sense.
"Section 230 protects you from the consequences of publishing someone else's speech."
Right.
"Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech."
????????????????????????????? Read what you wrote. It's the user who created it.
I strongly suggest reading the actual 3rd circuit TikTok ruling as well as the actual Moody vs NetChoice Supreme Court ruling, both from the year 2024.
Things have changed. Hold on to your previous belief at your own peril.
> The First Amendment protects you from the consequences of your own speech
It does no such thing. It protects your right to speak, but does not protect you from the consequences of what you say.
Under that standard North Korea has free speech.
Bingo. If you threaten, or promote harm/hate speech, you're not suddenly immune from the consequences of that. It's that the platform is (generally) immune from those same consequences.
It’s not just that. If I say something that people find derogatory, and they do not want to associate with me because of that, that’s THEIR first amendment right.
There can always be social consequences for speech that is legal under the first amendment.
Yes, this is really what I meant. The First Amendment does protect you from government retaliation for critical speech. It does not prevent you from becoming a pariah in your community or more widely for what you say. Just look at any celebrity who lost multi-million dollar sponsorship contracts over a tweet.
Free Speech https://xkcd.com/1357/
It's important to note that this article has its own biases; it's disclosed at the end that the author is on the board of Bluesky. But, largely, it raises very good points.
> This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).
The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.
Realistically, the entire web ecosystem and thus a significant part of our economy rely on Section 230's protections for companies. IMO, regulation that provides users of large social networks with greater transparency and control into what their algorithms are showing to them personally would be a far more fruitful discussion.
Should every human have the right to understand that an algorithm has classified them in a certain way? Should we, as a society, have the right to understand to what extent any social media company is classifying certain people as receptive to content regarding, say, specific phobias, and showing them content that is classified to amplify those phobias? Should we have the right to understand, at least, exactly how a dial turned in a tech office impacts how children learn to see the world?
We can and should iterate on ways to answer these complex questions without throwing the ability for companies to moderate content out the window.
> The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.
It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.
Cubby, Inc. v. CompuServe established that a non-moderated platform evaded liability for user-generated content. https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up just wouldn't do any moderation. Because even the smallest bit of moderation results in liability for any and all user generated content.
People advocating for removal of section 230 are imagining some alternate world where "bad" curation and moderation results in liability, but "good" moderation and curation does not. Except nobody can articulate a clear distinction of what these are. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation. Just sorting by upvotes minus downvotes is an algorithm too.
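To make that concrete, here is a minimal sketch (hypothetical Python, not any platform's actual code) of two feed orderings. Both are ranking algorithms; a rule against "algorithmic curation" would need some principled way to permit one and forbid the other:

    # Hypothetical illustration: every feed ordering is an algorithm.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        created_at: float  # unix timestamp
        upvotes: int
        downvotes: int

    def chronological(posts):
        # The "no algorithm" feed is still a ranking rule, just a simple one.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def top_voted(posts):
        # Score-based ranking: one step more opinionated, but still just a sort.
        return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

Neither function is obviously "curation" in a way the other isn't, which is exactly the line-drawing problem.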
I guess most people who think Section 230 is excessive are not advocating for its complete removal, but rather for adding some requirements that platforms have to adhere to in order to claim such immunity.
Sure, but I find that few people are able to articulate in any detail what those requirements are and explain how it will lead to a better ecosystem.
A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to give you a particular recommendation. What problem does that solve?
Conservatives often want to amend Section 230 to limit companies' ability to down-weight and remove conservative content. This directly runs afoul of the First Amendment; the government can't use the threat of liability to coerce companies into hosting speech they don't want to. Not to mention, the companies could just attribute the removal or down-ranking to other factors, like inflammatory speech or negative user engagement.
IIUC, most large ad providers allow you to see and tailor what they use in their algorithms (ex. [1]).
I think the big problem with "Should every human have the right to understand that an algorithm has classified them in a certain way" is just that they flat out can't. You cannot design a trash can that every human can understand but a bear can't. There is a level of complexity that your average person won't be able to follow.
[1]: https://myadcenter.google.com/controls
Yes, but it _appears_ that very different algorithms/classifications are used for deciding which ads to show vs. which content to recommend. Opening up this insight/control for content recommendations (instead of just ads) would be a good start.
Have at it.
https://support.google.com/youtube/answer/6342839?hl=en&co=G...
Yeah, there are some controls, but they are much less granular than what ad tech exposes. I've just never really been sure why Google/Meta/etc. choose to expose this information differently for ads vs. content.
[flagged]
[flagged]
Techdirt's coverage of Section 230 has been pretty consistent since before Bluesky existed.
In fact, the causality is reversed; he's on the board due to his influence on us. Masnick wrote the Protocols not Platforms essay which inspired Dorsey to start the Bluesky project. Then Bluesky became the PBC, we launched, became independent, etc etc, and Masnick wasn't involved until the past year when we invited him to join our board.
I hope you view his writing and POV as independent from his work with us. On matters like 230 you can find archives of very consistent writing from well before joining.
TIL. I'll admit, I'm not an avid reader of Techdirt or a follower of Mike Masnick, and I don't care that much about Bluesky since I don't interact a ton with social media.
However, my initial feelings stand. The NYT article is bemoaning Section 230, and Mike seems to ignore why those feelings are coming up while burying the fact that there might be a conflict of interest, since I gather Bluesky runs algorithms of its own to help users? Again, admitting I know nothing about Bluesky. In any case, I don't think a consistent POV should bypass disclosure of that.
His arguments about why Section 230 should be left intact are solid, and I agree with some of them. I also think he misses the point that letting algorithms go insane with 100% Section 230 protection may not be the best idea. Whether Section 230 can be reformed without destroying the internet, or whether the First Amendment gets involved here, I personally don't know.
Bluesky's big selling point is no algorithm by default. Its default timeline is the users you follow, in descending chronological order. You can use an algorithm (called a feed) if you like. They provide a few, but they are also open about their protocol and allow anyone who wants to write an algorithm to do so, and any user who wants to opt in to using it can.
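For what it's worth, a custom feed in that model is conceptually just a function that takes candidate posts and returns an ordered list of post references for the client to fetch and render. A rough sketch in Python -- names and data shapes here are made up for illustration, not Bluesky's actual feed-generator API:

    # Hypothetical sketch: a user-selectable feed as a plain ranking function.
    def cats_only_feed(candidate_posts):
        # A third-party feed author's logic: filter to cat posts, newest first.
        cat_posts = [p for p in candidate_posts if "cat" in p["text"].lower()]
        ranked = sorted(cat_posts, key=lambda p: p["indexed_at"], reverse=True)
        return [p["uri"] for p in ranked]  # client hydrates these references

    posts = [
        {"uri": "at://example/post/1", "text": "My cat did a thing", "indexed_at": 1700000100},
        {"uri": "at://example/post/2", "text": "Section 230 hot take", "indexed_at": 1700000200},
    ]
    print(cats_only_feed(posts))  # -> ['at://example/post/1']

The point is that the user picks which such function to subscribe to, rather than the platform picking one for everyone.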
The user who made the bonkers content is liable.
Sure, but with other forms of media the publisher is, with certain exceptions, liable for bonkers content as well.
This is what most of the Section 230 fight is about. Some people, myself included, would say "No, Facebook is selecting content in a way that doesn't involve user choice; they are drifting into publisher territory and thus should not be 100% immune from liability."
EDIT: I forgot, Section 230 has also been used by online ad publishers to cover for their lack of moderation of scam ads.
Reading the law, it sure seems to be aimed at protecting services like ISPs, web hosts, CDNs/caches, email hosts, etc., not organizations promoting and amplifying specific content they've allowed users to post. It's never seemed to me that applying 230 to, say, the Facebook feed, or maybe even to Google ads, is definitely required by or in the spirit of the law; it feels more like something we just accidentally ended up doing.
I thought safe harbor was the relevant statute here (Section 512 of the DMCA)?
That’s narrowly concerned with copyright infringement, no?
Yeah. It's been a while. This is interesting
[flagged]
[flagged]
[flagged]
>zero control
I have had reasonable success with YouTube's built-in algorithm-massaging features. It almost always respects "Do Not Recommend Channel" and understands "Not Interested" after a couple of repetitions.
I seem to remember part of TikTok’s allure being that it learned your tastes well from implicit feedback, no active feedback required. We around here probably tend to enjoy the idea of training our own recommenders, but it’s not clear to me that the bulk of users even want to be bothered with a simple thumbs-up/thumbs-down.
If they want to make money, don’t they need you to stick around? As a side effect of making money, then, aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?
That doesn’t work as well for stuff that you wish other people would find noxious but they don’t: neo-Nazis probably would rather see more neo-Nazi things rather than fewer, even if the broader social consensus is (at least on the pages of the New York Times) that those things fall under the uselessly vague header of “toxic.”
Even if the recommenders only filter out some of what you want less of, ditching them entirely means you'll see more of the slop they were already deprioritizing the way you wanted.
> aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?
No, they're incentivized to increase the amount of content advertisers find acceptable.
Plus, YouTube isn't strictly about watching; many people make a living from the platform, and these algorithmic controls are not available to them in any way at all.
> ditching them entirely
Which is why I implied that user oriented control is the factor to care about. Nowhere did I suggest you had to do this, just remove _corporate_ control of that list.
We need to stop calling it “Section 230,” “Section 230 of the CDA,” or worst “Section 230 of the Telecommunications Act.”
Call it “Section 230 of the Communications Decency Act.”
As the name implies, the CDA was an attempt at censorship, specifically an act to outlaw internet pornography.
An unconstitutional (per SCOTUS) act to outlaw unpopular speech (pornography).
Far from being some libertarian Christmas gift, the whole point is to facilitate internet censorship.
Without 230 of the CDA, third parties would be held liable for attempts at moderation (a synonym for private censorship).
It’s so bizarre that the popular understanding is so far removed from what was actually done.
TFA:
Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1995 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act and was put forth by then Reps. Cox and Wyden as an alternative to the CDA. Then, Congress, in its infinite stupidity, took both bills and merged them.
And Nero played the fiddle while Rome burned.