The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult or user-generated or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they already do. RTA headers are not perfect, nothing is nor ever will be, but there is absolutely no tracking or leaking of data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine non-participating sites into oblivion.
I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical, and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header; it is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back to semi-private Tinc open-source VPN meshes for my friends and me.
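For concreteness, here is a minimal sketch of what setting the header could look like, using Python's standard `http.server`. The `Rating` header name and label string are the ones RTA documents; everything else (handler name, served content) is illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The fixed RTA label string published at rtalabel.org.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

class RTAHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # One static header marks the URL as potentially containing
        # adult or user-generated content. No client data is collected.
        self.send_header("Rating", RTA_LABEL)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>content</body></html>")

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To serve: HTTPServer(("", 8080), RTAHandler).serve_forever()
```

The same effect is usually achieved with one line of web-server config (e.g. an `add_header` directive in nginx); the point is that it is trivial for any operator.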
This is exactly the way it should be done. Device with parental controls enabled disables content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.
Well why haven't all the big tech companies done it then?
They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"
And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
The reason is that this whole push for age verification has nothing to do with actually stopping kids from seeing the content. If it did, this kind of solution is what would be legislated. It's just about making everyone identifiable.
Your not understanding why age verification is happening does not make it a conspiracy for some other purpose. There is an anti-regulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be accountable and that we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay journalists to tell you that it's all about violating privacy or something. (The same folks will tell you opening up Android to third-party AI tools would be a privacy and security risk, and not ask you to notice it would just cost Google a lot of money.)
We've been running essentially a social experiment on our kids for the past two decades and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and most "privacy services" like disposable email and VPNs are the primary source. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc. which are all the direct result of our failure to regulate social media companies. Section 230 being the primary problem.
OS-level age verification is likely the best route, as private information can remain on a device in your control, and a browser then just needs to attest to websites whether or not the user should be allowed access, without conveying more detail. Obviously anyone with a Linux box will have ways around it, anything based in your own device will be exploitable in some way, but generally effective for the average child.
[Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.
Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/
"Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"
> Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
This is entirely false scare-tactic nonsense, and you really need to look at where you sourced that idea and stop using them as a reference point. There isn't even a proposed mechanism that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if that gives you some idea of where the decision-making is supposed to sit.
Not only are parental controls woefully bad, but in the name of privacy, modern platforms make them exceptionally hard to implement. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach must exist, and that Internet companies must support it.
(Steam is, FWIW, probably one of the best actors in this regard already, Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam but this is not a place they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney).
>If it was then this kind of solution would be being legislated for.
What's more likely: a global conspiracy to get age verification passed, allowing these unnamed groups to identify everyone for some unknown purpose, or politicians just not understanding tech?
The way people try to pretend that there can't be any organic desire for these proposals is so bizarre and is a major cause for all these proposed solutions being so technically dubious. Refusal to recognize the problem means you won't be part of solving the problem.
Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonymization, spam reduction, etc.
The tech companies are the ones lobbying for age verification.
The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal, it is a pretext. Just like every other "think of the children" argument.
An outstanding idea. Those lobbying for age verification hate it though, because they want to be the arbiters of age, and all that juicy PII that they can analyze and resell.
Think about how they validate how old you are. Meta and Google, who are lobbying in support of this legislation, will force you to sign up with your real ID and be the arbiter for questions like "are you old enough for this website". For every request that you make through some third-party website that needs to know your age, Meta and Google will know where you tried to log in, and for which content. They will then resell this data to the highest bidder. Additionally, through all their ad networks and tracking, they will follow your session and have verified ID to match your entire browsing history. This is the end of anonymity and privacy on the Internet.
I'm not so sure. I think the push is actually from the government. But companies are not exactly opposed to it. Quite the contrary. Big corporations see compliance as a moat. Tobacco companies supported stricter regulations on tobacco advertisements because they had the deep pockets required to follow the changing laws. Mr. Altman is all-in on AI regulation because it will mire down competitors, while OpenAI has already "slipped past the wire" and done all their training pre-crackdown. When given a choice between regulating their industry (platforms and operating systems) vs. regulating someone else's (porn sites and the like), they'll always helpfully "volunteer" to be the first to be regulated. It's just good business.
"The government" is the same as those lobbying the government. The people in the government get paid to push it, so they push it, and get paid more when it goes through, by the people who want that PII to analyze.
That's a good idea. There could be two headers, the existing RTA header that adult sites use today [1] and another static header that explicitly states there shall be no adult content.
What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who would give their kids cigarettes. Most parents I know will disagree with their kids doing at least one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.
That's between parents and their local governments. Yes when I was a kid my mom let me watch whatever and go wherever. The parent in my example ultimately decides what a kid may or may not do which is in alignment with existing laws. If the parent is endangering their kid that is up to them and their government to sort out.
Point being, put the controls entirely into the hands of the device owner. Options can be to default to:
- Block everything by default unless header states otherwise.
- Block only sites that state they are adult.
- Do nothing. Obey the operator. (Controls disabled on child accounts or make them an adult or otherwise unrestricted account on the device).
I think the options are just limited to our imagination.
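As a sketch of how small the client-side logic could be: the defaults above reduce to one policy check per response. The mode names and the `X-No-Adult-Content` header here are invented for illustration (only the `Rating` value is the real RTA label):

```python
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

# Hypothetical policy modes a device owner could pick between.
BLOCK_UNLESS_MARKED_SAFE = "block_unless_marked_safe"
BLOCK_ADULT_ONLY = "block_adult_only"
UNRESTRICTED = "unrestricted"

def should_block(headers: dict, mode: str) -> bool:
    """Decide whether parental controls should block a response."""
    is_adult = headers.get("Rating") == RTA_LABEL
    # Hypothetical second header explicitly declaring no adult content.
    marked_safe = headers.get("X-No-Adult-Content") == "true"
    if mode == BLOCK_UNLESS_MARKED_SAFE:
        return not marked_safe
    if mode == BLOCK_ADULT_ONLY:
        return is_adult
    return False  # unrestricted: controls disabled, obey the operator
```

Nothing here leaves the device; the server never learns which mode the client is in.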
This is the problem. What is an "adult" web site? Websites that show porn? Websites that show gore? Websites that show violence? Websites that show non-porn naked people? Websites that have curse words? Websites that promote cults and alternate religions?
Why is it the site's responsibility to "state" that they are adult, given whatever parameters they dream up? Why is it the government's responsibility to say "This is adult content, but that isn't adult content?" Shouldn't the parent get to decide which categories of content count as "adult"?
That was our struggle with implementing "blocking" tech at a school I worked at. Is a kid looking up how to do a breast self-exam looking at porn? What about a testicular self-exam? What about actual sex-ed kinds of sites?
> I know parents who have no problem with their kids seeing porn.
Surely you mean at least teenagers, and not literally children, right? Consider the prevalence of violence, racial stereotyping, and escalation of fetishism into degeneracy that clearly exists within this medium; what's the line that these parents draw? Are they making sure it's only something vanilla? Or is there no line whatsoever?
Then those parents can turn off their browser/client’s age protections. I think that’s actually a decent argument for the solution posed by this thread.
The US. If they want to serve users in other countries, or if certain states make their own rules, it's business as usual whether to serve different content there or serve a different header or take the legal risk.
It's the exact same problem that age verification faces. There are different laws in different jurisdictions and operators have to figure out how to comply with the ones that matter to them.
Think of the (current) header as meaning "we would have blocked you if we saw you were under 18" or whatever equivalent and it should make sense.
> I know parents who have no problem with their kids seeing porn.
I don't agree with showing actual children porn, but I also totally expect teenagers to find some way to get access to it in the age of the Internet.
Part of the challenge with this is cultural. Different places in the world think about sex, sexuality, and even the concept of what is a child differently. In the US, showing a woman's bare breasts to a person under 18 is generally considered wrong, and in many cases is illegal. In most of Europe it wouldn't even raise an eyebrow, because bare breasts are on television, sometimes in commercials even.
Set aside for a moment the question of age verification and age limits, we cannot even agree in any sort of universal sense what even qualifies as porn or adult content, and at what age someone should be able to see it. There's a difference between a 7 year old and a 17 year old seeing the same type of content, and there's also a difference between a photographic nude and a video of people engaged in coitus.
The story is basically the same for everything else you listed.
These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do. What the GP suggested using RTA headers though puts the control into the parent's hands, which is as it should be.
We don't need to care what France or China thinks when we make our laws that are about our own citizens. They do the same over there.
> These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do.
Yes there's a chance our rules spill over there naturally, and I don't consider that wrong either.
I considered many of the same points you mentioned.
Though, one area I am still struggling to grasp is the harm that governments are trying to mitigate. If a child were to see inappropriate material, then what harm can truly arise? Also, why do governments need to enact such laws when the onus of protecting children should be on their parents?
I am not trying to start any kind of flame war, but I really cannot see any other basis for all this prohibition that is not somehow traceable back to Western religious beliefs and the societies born and molded from such beliefs.
I can make arguments as to the potential merits of kids having a beer/cigarette, listening to swear words, or witnessing casual violence. I can't make an argument for letting kids see hardcore pornography in any capacity.
Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable, standardized signals to filter on, which has led to the current nonfunctional mess.
So ideally you want a standardized header that can be used to self-classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated, with specific categories required to be marked as either present or absent.
So for example HN might be "user generated T, social media T, porn F" or similar with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).
While this would be required by law, I imagine browser vendors might also drop support to load sites that don't send the header in order to coerce global compliance.
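To make the proposal concrete, here is a sketch of parsing such a self-classification header. The `Content-Categories` header name and the `name=T/F` syntax are entirely invented for illustration; nothing like this is standardized today:

```python
def parse_categories(header_value: str) -> dict:
    """Parse a hypothetical 'Content-Categories' header of the form
    'user-generated=T, social-media=T, porn=F' into a dict of booleans.
    Unknown extra categories are simply carried along."""
    result = {}
    for part in header_value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, flag = part.partition("=")
        result[name.strip()] = flag.strip().upper() == "T"
    return result

# e.g. the HN example from above:
cats = parse_categories("user-generated=T, social-media=T, porn=F")
```

A parental filter could then require that certain mandatory categories be present in the parsed result, and block when any required category is missing or marked true.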
Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.
Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't run into a lot of documentation about this, including on the very badly designed RTA web site https://www.rtalabel.org/
For anyone curious about the value, the numbering is just a fixed string everybody decided to use, for reasons that aren't clear to me.
I would deeply prefer to do it this way, but my goodness the RTA org needs a serious brush up of their web site and information on how to use this.
But does this actually still get used by parental controls?
Some parental control applications will look for it but it is not yet legislated to be mandatory on a majority of user-agents.
All I am suggesting is that we legislate the header to be added to URLs that may contain material not appropriate for small children, and mandate that the majority of user-agents, the ones installed by default on tablets and operating systems, look for said header to trigger optional parental controls. Child accounts created by parents on the device should not be able to install alternate user-agents or bypass the controls (at least not easily). Parents should be guided through this during device setup.
Indeed their site is old and rarely touched. The ideas and concepts have not changed. It really could just be a static text site formatted in ways that law makers are used to or someone could modernize it.
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, so parents/employers/etc could use to block the site if they wish. People said it would never work, because adult sites had a financial incentive not to opt in to reduce their own traffic.
What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
Quite true. The US corporations act like a giant global rabid dog. Fake legislation appears in the USA - lo and behold, it is copy/pasted into the EU. At the least lobbyists are getting rich right now.
At least the EU has GDPR. In the US, our personal data is collected by every app and website and company and packaged, sold and sifted through by a vast collection of private data brokers which the government already ingests.
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
An age header is not the answer. Why should a site have to decide what content is appropriate for a 18 year old and what content is not? Who is qualified to make that decision for every 17 year old in the world? Do they know my 17 year old? Do they know the rules in our home? What if I'm OK with my kid seeing sex-education stuff, but some lawyer at Wikipedia just decides to tag sex ed articles as 18+? Now I have a shitty choice: Open up the floodgates of "18+" to my kid, do it temporarily while the kid browses the sex ed sites, or not let the kid browse them.
Letting a company or government decide what's appropriate for what exact specific age is fraught with problems.
Yes, that's how parental filters already work. They use a combination of RTA tags and external data to block pages. It even works with Google SafeSearch, firewall devices, etc. The RTA ecosystem is already built out and viable.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children. Sanctions and embargoes are always an option.
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.
An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win, win.
But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If government tries to regulate them, they'll engage in malicious compliance - do the minimum to not be sued, in a way that they can still reach customers.
For example, Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including the porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn site's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.
If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.
But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing, incomplete, and only a handful of sites voluntarily labelled their sites with it. RTA is one simple static header that any site operator could add in seconds, unless they get more complicated with it by dynamically adding it to individual videos, say, on YouTube, in which case the server application would need to send that header for any video tagged as adult.
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything as I could not predict what people may upload which made for a very long header.
Agreed, though in my example the point would be to set the header in the case where the child is logged in but for whatever reason the site does not know their age. Instead of a third-party site, a header is sent with the video tagged as adult that triggers parental controls if they are enabled by the device owner.
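Sketching that per-resource case: a handler could attach the RTA header only when the requested video is tagged as adult. The tag store here is a stand-in for whatever metadata the site actually keeps:

```python
# The fixed RTA label string published at rtalabel.org.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

# Stand-in for the site's own video metadata store.
VIDEO_TAGS = {
    "cat-video-123": {"cute"},
    "video-456": {"adult"},
}

def response_headers_for(video_id: str) -> dict:
    """Build response headers, adding the RTA label only for
    resources the site itself has tagged as adult."""
    headers = {"Content-Type": "video/mp4"}
    if "adult" in VIDEO_TAGS.get(video_id, set()):
        headers["Rating"] = RTA_LABEL
    return headers
```

The site already knows which of its resources are adult; no information about the viewer is needed to emit the header.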
Yeah this seems like the best tradeoff. You avoid the central control infrastructure and you provide information to clients. It's also a great match with free computing devices, which can then utilize the new information, empowering users (eg parents -> parental control on device, or individuals who want to skip some kinds of content).
There are issues today with this approach such as lacking granular information for sites that have many kinds of context, but if you stop investing in the central control infra and invest in this instead that could be remedied.
I agree with the general idea, but I would like this header to be more fine grained than just a binary "adult" or not. For example, so that you can distinguish between content that is age appropriate for teenagers and older from content that is suitable for all ages.
Servers can then infer users' ages by whether or not the client renders pages given those headers, no? See whether secondary page requests (e.g. images, scripts) are made from a client. A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary.
I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.
Today devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that effect, but this can evolve over time.
The point and overall goal should be to not signal anything to the server operator unless a credit card is being used. Everyone is whomever they claim to be as far as anyone is concerned, until payments are required which today means sharing identity and age (via the credit card information on file with the financial institution and is shared today).
In the case of RTA, the only signalling taking place is a server header being transmitted to the client. The client could be anyone at any age. Nothing is explicitly leaked or disclosed. Server operators can guess all they desire, as some already do using AI based on user behavior, which they sometimes get wrong.
One possible method [1], though I am sure the network and security engineers here on HN could come up with simpler methods. Just blocking domains on the popular CDNs would kill access for most people, as by default most browsers use them for DoH DNS.
The question was about fining entities outside of the original jurisdiction, so I am not sure what you have in mind that could be done by network/security engineers here.
In terms of fines if they do not pay the fine their country is at risk of sanctions or embargoes which is probably a bit heavy handed but may incentivize their government to also enforce the rules, collect fines keeping some for themselves and passing the original fine back to the countries implementing child safety controls.
This is extremely naive and short-sighted. There is a literal example of this happening right now, and hopefully you will see why your approach isn't that good.
The UK's OFCOM is currently issuing legal threats to 4chan for allegedly serving adult content and refusing to implement age verification. 4chan's lawyer tells them to pound sand [0], on the basis that 4chan is hosted in the US and has zero business presence in the UK, and that the UK is more than welcome to ban the website on their end through UK ISPs. The saga has been ongoing for a while, and the lawyer has been pretty prolific online talking about the case.
Anyway, following your approach, UK should embargo US over 4chan not willing to implement age verification as required by UK law? I plainly don't see this happening, or even being considered, ever.
4chan's servers are in the US and the owner is in Japan. If the US wanted to, they could seize all the servers, but they will not, because they have real-time monitoring of all activity on the boards and have ever since Christopher testified before Congress and the site was sold. If anything, Five Eyes want that site to be unrestricted. 4chan has been a goldmine of people self-reporting wanting to shoot up or bomb places, as has Reddit, leading to many body-cam videos of site users, and in some cases moderators, being busted.
The IP addresses are all captured by Cloudflare. It is next to impossible to post on 4chan without enabling JavaScript for Cloudflare or buying a 4chan Pass, which leaves a money trail. Not perfect, nothing is, but most mentally unstable people do not think these things through.
Should legislation be added to require the RTA header 4chan could and likely would add it in a heart-beat. They already have some decent security headers in place.
I disagree. The legal requirement to apply a warning label is a well known, understood and accepted process that is applied to a myriad of hazards to children and adults. As just one example businesses in some states, most notably California are compelled to add warning labels to foods and other products that could cause cancer.
That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.
Surely we both understand the point I was making in that labels are already compelled by laws today.
Fine, cigarettes must be labelled as being a risk of causing cancer. The punishment for failing to do this is both civil and federal penalties including massive fines and federal prison time.
I never implied an internet license. Rather, if a server operator, i.e. a business, has content that may be adult in nature, they must label their site. Businesses already require a license, but that is unrelated to this.
Clients could refuse to show content that does not have headers set.
On the other hand, servers might choose to lie. After all, that is their free-speech right.
So maybe you need some third-party vetting list. Of course, that one should be fully liable for any damages misclassification can cause... but someone would step up.
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
I could be misunderstanding the context but to me that sounds like a moderation issue assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.
Moderation is like law enforcement: it doesn't prevent crimes from happening, it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed and it doesn't undo it.
This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who do terrorize people for fun [3].
It is a constant game of whack-a-mole by moderators/admins to keep this behavior out of online spaces where kids hang out.
I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.
Now I disagree very strongly with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed). But the technical challenge needs to address the real problems that legislators are trying to go after.
I am only interested in protecting the majority of children, which I believe my proposal more than covers. There will always be exceptions. Today teens share porn, warez, and pirated movies and music with small children inside rated-G video games. I am not proposing anything for that. It is up to businesses to detect and block such things.
Point being, there will be a myriad of exceptions. I am not looking to address the exceptions. Those can be a game of whack-a-mole as they are today. I am proposing something that would prevent the vast majority of children from being exposed to the trash we today call social media and of course also porn sites.
Look, please don't sideline/marginalize people by using the "whataboutism" term. That's being used more and more to silence dialogue from people who see problems outside the focus of a specific area. It's important that we see ALL sides of the problem.
Thank you for understanding. I know topics can sometimes get out of hand with comments about related things, but in this case we might be better off looking at all the extremities.
These aren't exceptions or whataboutism. It's the debate being had on the floors of state legislatures.
> It is up to businesses to detect and block such things.
Which is exactly why age verification legislation is hitting the books. No one (serious) cares about whether kids can download porn and R rated movies. Parental controls already exist if the threat model is preventing access to specific content that is able to report itself as _being_ that content.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content. They define specifically what classifies as an addictive stream and put the onus on service providers to assert that they're not delivering addictive streams of media to kids. An HTTP header isn't enough, because it's not about the content being shown to kids but the design patterns of how it's accessed.
Essentially: age verification isn't about porn. 18+ content stirs the pot a bit with the evangelical crowd but it's really not what people are worried about when it comes to controlling digital media access with age gates.
> Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content.
That sounds simple to me. If a type of content is addictive then require the RTA header.
- Adult content, or possible adult content.
- User contributed or generated content (this covers most of social media)
- Site psychological profiles that are deemed addictive (TikTok and their ilk)
Overall we are describing things that are harmful to the development of the minds of small children. If adults wish to avoid such content they can create a child account on their device for themselves to be excluded from this behavior as well. I use a child account in a couple of popular video games to avoid most of the trash talking and spam. I'm not hiding my age as the games have my debit card information but rather I opt-in to parental controls.
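The client-side behavior proposed above can be sketched in a few lines. This assumes the standard RTA label string sent in a "Rating" response header; in practice many sites use an equivalent `<meta name="rating">` tag instead, and the header name can vary:

```python
# The RTA label string published for the "Restricted To Adults" scheme.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict) -> bool:
    """True if the response headers carry the RTA label."""
    return RTA_LABEL in headers.get("Rating", "")

def should_block(headers: dict, parental_controls_on: bool) -> bool:
    """Client-side decision: block only when the device owner has
    enabled parental controls, otherwise the label is ignored."""
    return parental_controls_on and is_rta_labeled(headers)
```

The key design point is that the decision happens entirely on the device: no identity ever leaves the client, and adults with controls disabled see no difference.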
How would this work with sites like YouTube which allow sharing of content, potentially not appropriate for children, but the content is generated by the site's users? Who will be fined for "violations"? And how would such a fine be levied, especially internationally?
I think that initially the onus would be on YouTube to figure this out. They have some very intelligent engineers. For example, if the YouTube uploader is receiving affiliate funds then they are easy to ID and fine. If they are random people then YouTube would have to share the violation data with the other countries, and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force-label a video as adult, as they do today when enough people report it, which is admittedly not uniformly applied.
This has already been solved. YouTube disables viewing via embeds for any content that has been age-restricted. Either you view it on YouTube, which requires logging in to see age-restricted content in the first place, or you get the ! icon and a warning about needing to log in.
The government shouldn't be raising anyone's children; that's what parents are for. If you're a bad parent, your kids will get access to bad things and could become an adult failure.
The future of your family and your legacy is up to you, not the government. We don't need age verification to restrict the social darwinism of raising children.
I wish I could upvote this comment harder. I started having unsupervised internet access (with the family computer in the living room) when I was 8. I'm a functional and successful adult because I trusted my parents. When my mother forbade me from registering on online forums I complied. When I read "fellation" in some minecraft chat (albeit somewhat later) I asked my mom what it was and understood that "sex" was something for the grown-ups and that I shouldn't worry about it. All because I would never even conceive that my parents wouldn't do what's best for me, and was unconditionally loved (even though I didn't know about this concept).
I would rather have parenting licenses than online age verification
Yeah I'm not sure why the govt or any other 3rd party needs to get involved. If I don't want my kids to look at porno online I will educate them on porn. If I don't trust my kids to listen to me then I will install an open source monitoring software and educate them on trust.
Letting the govt dictate what is age restricted is an easy way for the govt to control speech and narrative. For example, children's books that feature LGBT characters are being reclassified as adult [1], thus requiring additional verification. If I do/don't want my kids to read LGBT books, it's my decision. The govt should not dictate that. What else will the govt reclassify? Anything involving people of color?
I keep thinking we can't fight age verification by just saying "no" to it; we have to offer an alternative.
Maybe we need to turn it on its head, point out that if we want legislation to help out with this, we could choose legislation that gives power to parents. Age verification laws put the power directly into the law itself, they're a blanket solution that gives all the power to legislators and that prevents parents from making decisions about what's appropriate for their kids and what isn't.
If the market isn't delivering the level of parental controls people want, then sure, maybe legislation is needed. But it should be legislation that improves parental controls, such that parents can make decisions about what's appropriate for their children.
Yeah I agree. Let me decide what's appropriate for my kids. Like for video games or movies... A game rated M for foul language and nothing else might be OK for my adolescent kid. A game rated M for excessive nudity and sex probably not.
Also, different kids mature at different rates. I wouldn't give a shit about my kid watching, say, an R-rated movie if I understood they'd be able to handle it and understand it's fiction. If I had a 14 or 15 year old and they had a healthy understanding of sex and the dangers of porn, I wouldn't give a shit if they managed to see some poorly drawn tits online. Why? Because if you didn't intentionally seek out lewd content as a teenager, you're either very, very religious or a liar.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man whose gears you can see turning as he checks that the date is correct, plus a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does whatever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
A tech company doing scans for validation could actually connect to a state database to verify the ID is legit and is not already being used for a different account. It would then be saved. I don't think real world vs tech world usage of fake IDs are the same at all.
>Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
Not necessarily true. There's a local strip club that scans and saves the scan to fight chargebacks and the like. It is definitely logging stuff. They've told me that they were going through the logs once and the bartender ended up googling my full name. We're cool and I didn't care, but what you said is not a blanket-true statement. I trust a physical business that I can visit far more than some ID verification company that is going to get hacked at some point.
I've seen this before in London too in some venues. They have full-on computers that scan your passport and take your photo, for the express purpose of storing this info.
Tech companies care even less? How do you arrive at that conclusion? Tech companies log/store EVERYTHING. This would be an absolute boon for them: they could unequivocally assign to you all of the data they track about you. Suddenly, anonymous analytics become identified data, not just deanonymized data.
Logs of location data on people are already worth real money. The FBI has admitted to buying it. The companies that do age verification will absolutely be selling that data unless there are severe penalties for doing so, and what are the odds that the U.S. government passes a law making it illegal for the FBI to buy data?
That's bad enough if you're a U.S. citizen. If you're a non-U.S. citizen, now you're in the situation where all these U.S. social media sites are collecting personal information from you and reselling it, but you have no legal protection unless your government risks tariffs and invasion threats to pass legislation against it, which the U.S. will probably ignore anyways.
This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
> This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
I guess, but like, who? During the time TikTok was not available on an app store (even though the service wasn't stopped), people were trying some of the other Chinese apps, but they were not very compelling, and the exodus never happened.
It's a chicken and egg problem. Without users, a new social platform lacks content, so it can't attract users. Unless something decidedly new and compelling comes along, users will probably stick with what they know... unless something happens that really pisses them off.
If I'm being honest though, I don't think privacy concerns will be what does it. The TikTok generation doesn't give a fig about privacy. You can build a panopticon around them and they won't even notice.
> Handing an ID to a bouncer at a bar or similar is not logging anything.
Some of the bars in the party areas of my college town have a digital scanner they hold the ID up against, and they even had a screen showing a scrolling Wall of Shame of fake IDs. And they had this like 20 years ago. So I would not necessarily agree with you here
They also use them to flag people who've been previously banned and the systems work across venues. The idea that verification in the real world is cursory is not accurate.
The vast majority of places I frequent do not even have a person at the door checking IDs. If the bar tender/server thinks you look young, they ask for ID. I clearly do not look to be too young, so there's that. The last place I went to with an actual scanner was more of a nightclub that had a cover charge.
There's a fine line between night clubs and bars (and a venue can operate as both, depending on the night).
Functioning as a bar where people come in, drink and eat - generally not checking ID's at the door.
Functioning as a night club, generally checking ID's at the door. Almost no places I've been to scan ID's. I'm also middle aged and not going to night clubs hardly ever. Pretty much just a couple concerts a year in the big city. Those venues scan ID's.
sure, but it is what it is. the places with scanners may be more sophisticated than i give them credit, but you cannot deny there are places that do not card every person every time you visit. online places will never not know it was you. if you cannot see the differences, then you're just deliberately being obstinate about it
An ID system should be based on commercial banks. If you need to prove your identity or anything about yourself, just tell the requester to ask your bank, and the bank will ask you which information about yourself you are willing to share with whoever requested the confirmation.
When your ID is tied to your bank account, you guard it like you guard your bank account, because it is the same thing. This will drastically lower the incentive to "share" your identity with anyone.
What's more this system is already operational in many countries.
I wonder how many months until this suggestion becomes slightly embarrassing. I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online. Especially when "my" bank goes under and all of my data gets sold off to whoever takes it over.
The proposed system moves the source of identity from the nation to private banks under it. So banks own people. Propose a financial regulation to the national congress/parliament and you stop existing, digitally, or potentially physically as well. That's feudalism. Or China's warlord-era struggles-of-nations situation, which is often grouped into that concept as close enough.
You use eID when explicitly interacting with a govt entity or bank or otherwise similar institution because you have to and want to prove who you are. Yes, I do want to prove who I am when I file taxes, vote or want to start a business...
You don't use it when just browsing randomly on the internet. You don't use it to buy games on steam. Your computer isn't forced to store it because a law arbitrarily says so.
If it's done by the government, what prevents the government from blocking opposition members' access to social media? I think social media and porn are harmful for children, but still.
Why not, seems to be made exactly for this purpose if you look at the "‘Age over 18’: true" flag. What's bad about that solution?
> The technical solution for an EU age verification app is privacy-preserving, open source and user-friendly.
> First, the user downloads the app onto their phone and sets it up by certifying their age.
> This can be done with a biometric passport/ID card, a national eID (e.g. national ID card or other electronic identification means), a pre-installed third-party app (e.g. a banking app), or in person (e.g. at the post office). Only the information confirming that the user is over the age threshold will be saved in the app. No name, no birthday, or any other data is saved.
> After completing this step, the communication between the app and the provider certifying the user’s age (e.g. eID, third-party app) ends. No further data is exchanged.
> The app is then ready to be used online. When an online platform asks to verify the user’s age, the user can use the app to communicate they are over a certain age (e.g. ‘Age over 18’: true) to the platform.
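The "only a boolean leaves the app" step described above can be sketched as follows. This is a deliberately simplified illustration: it uses an HMAC with a shared secret, whereas the real EU design would use public-key signatures or zero-knowledge proofs so that platforms never hold the issuer's key. All names here are hypothetical:

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical issuer key, held by the verification app's issuer.
# A real system would publish a verification key instead of sharing a secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue_attestation() -> dict:
    """The issuer signs only the bare boolean claim; no name, no birthday."""
    claim = json.dumps({"age_over_18": True})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(att: dict) -> bool:
    """The platform verifies the signature and learns nothing but the boolean."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, att["sig"])
            and json.loads(att["claim"]).get("age_over_18") is True)
```

Note that this sketch also exposes the sharing problem raised elsewhere in the thread: nothing binds the attestation to one person, so any mitigation for copying (device binding, usage limits) has to be layered on top.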
I don't disagree with random browsing. I do use it to buy games on steam as any online purchase on my card uses it. And my computer doesn't store it, my phone does.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
The two methods that seem feasible are making it hard to copy (putting it in the secure element in your phone, for example, which I don't love) or doing tokens that can only be used a limited number of times per day, like in : https://eprint.iacr.org/2006/454
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
Further, "porn tokens" are the pointy end of the wedge, because it's easy to misconstrue any opposition as advocating for "kids should have access to porn, actually". The broad end that is being hammered towards is "kids aren't allowed on social media because it's harmful to them" AKA "free speech tokens".
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
Continuous age verification isn't possible, so you'll have to store some sort of proof of age somewhere, and that proof will always be sharable.
Let's say Facebook has verified my age somehow. I could share my Facebook login credentials, or the token that their authorization server sends back in response. You can create some hurdles to doing that, like requiring a second factor, but I can just share that too.
You might as well go down the route of accepting that possibility. These systems are never going to hold up in the face of a determined enough teenager.
That really depends. A zero-knowledge system would show the verifier that the person is authorized for access _right now_, but that's just the answer to a particular challenge. Outside of the verifier, who knows they came up with a random challenge without bias or influence, the response would mean nothing.
I think a lot of age verification systems are the solution to the real core of legislation - to make companies liable for underage viewing of content. To put such legislation in place without providing a feasible way to accomplish age verification would be argued as discriminatory.
In that sense, a zero knowledge system which doesn't give a company non-repudiation so that they can defend themselves in court may very well be insufficient. And that will require tracking identity long-term, although it could be done with a third-party auditor under break-the-glass situations with proper transparency.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
There's already authorities that know your age, so verifying age with them to get the credential isn't the part that needs to be anonymous. The issue is them knowing what you do with your credential, which anonymous credentials solves by making it impossible to track tokens back to the credential holder. As far as sharing, there are some possible mitigations.
Right. And the possible sharing mitigations generally amount to tracking.
This isn’t even getting to the issue that mandating government-issued credentials is the “foot in the door”. If you mandate the use of government creds for accessing websites, it’s an obvious step to turn around and demand that sites report credential use to “fight credential fraud”.
Yep look who is backing these regulations. It's absolutely for no other purpose than to further enable surveillance capitalism and the surveillance state.
Yes, but this is not popular among technologists (see the average sentiment towards age verification here). Legislators aren't going to build technology. This will happen if age verification actually becomes a widespread requirement. But until that point the prospective builders will be fighting the entire premise of such systems.
The EU is. But their age verification process shows the design flaw that preserving privacy means the system can easily be circumvented with a MITM, allowing users to bypass the age verification process.
Young people setting up a MITM and getting deeper into tech rather than consuming short-form-content is something I'd appreciate as a nice bonus effect.
Of course the EU solution isn't perfect and there are bypasses (there always will be and always have been), but if it must come, let's appreciate that it works that way rather than collecting too much PII. I'd prefer the Age/RTA header and parental responsibility too.
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and there are definitely a few of them which are more technically defensible than others.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
I did. Restricting children’s access to certain things is not ageism.
We can argue the merits of restricting children’s access to the internet, or certain books, or alcohol, or pornography, or whatever else. We can debate the merits of those various restrictions based on the benefits and costs to both the children and society at large.
But it is not ageism to attempt to protect children. It is not ageism even if the restriction is a bad idea. To claim it is ageism is an emotional appeal ("ageism bad!"), not a logical one.
It depends on what you're restricting and why. Restricting access to things based on age can absolutely be ageism if the thing does not need to be restricted.
I don’t think it’s ever “ageism” in the normal sense to restrict children’s activities for their safety. But even if that’s the right term in some cases, it hinges on “if the thing does not need to be restricted”.
The burden is still to demonstrate that a restriction is wrong. If that can’t be demonstrated, then labeling it ageism is a purely emotional appeal.
I used a rhetorical device to demonstrate why restricting children’s activities is not simply ageism.
I don’t know how you can seriously come here and accuse me of engaging in bad faith when I’ve taken the time to make my viewpoint explicit multiple times in this thread now, including directly to you.
Hyperbole is a rhetorical device, if that’s what you mean.
Just because I had a hard time following your logic doesn’t mean I didn’t engage in good faith. You also seem to be arguing in a heated way with every person who responds to you.
Ageism is a legally defined form of discrimination as well as the subject of ethical discussions. It's a real, defined thing. Just because we disagree on what qualifies as ageism doesn't mean you get to call foul and say it's irrational/emotional.
This is literally a “think of the children[‘s freedom]” appeal. You’re not arguing for or against the restriction on its merits.
In the US at least there’s also no such thing legally as age discrimination against minors so far as I’m aware.
Edit:
Let me frame this differently. "Ageism" is basically by definition bad, so applying the term "ageism" to a restriction is an attempt to label the restriction bad without establishing that on its own merits.
If you try to provide a consistent definition of “ageism” that applies to restricting access to the internet but not restricting access to alcohol, you will most certainly have to resort to phrases like “reasonable restrictions” (if not, I’m very interested in your definition), which means that there’s still a need to establish what is reasonable. Applying the label “ageism” without establishing reasonableness is then a circular argument.
You* are using “ageism” as a synonym for “bad”. You are also labeling restrictions as “ageism” without establishing that they are actually bad.
In effect you are saying “that’s bad!” without accepting the burden of establishing why it’s bad, but hiding this behind a different term that carries more emotional weight. It’s a very politically effective strategy but it’s not logically sound.
This is why we need verification technology that protects identity. Implemented as anonymous verification, without distinguishing between adult age, or permissioned by parent.
That solution doesn't negate parental freedom of choice, it facilitates it.
I am baffled at how often the "they don't want it, because of their ulterior surveillance motivations, therefore it isn't a solution" argument is made. "They" don't want it because it is a solution to the nominal problem, that they cannot abuse, and would negate their ability to use it as a cover with a large well-meaning voting constituency.
Two problems, nominal and ulterior, resolved in the right way by one solution.
When a nominally sensible problem is used as a cover for overreach, solving the nominal problem in a healthy way is the best offense. The alternative is an endless war of attrition, and the "hope" that politicians resist the efforts of well-paid lobbyists and tens of millions of well-meaning voting parents forever. That is a ridiculous strategy, doomed to fail, delivering irreversible damage. As is already evident by the abusable laws that are accumulating.
I worry at the lack of political acumen and foot-gun reflexes in the ethically-motivated technical community.
Stop endlessly fighting to lose less. Just play the winning move already. Stop the irreversible damage.
I think part of the issue people are missing is what the late Randy Pausch would call a “head fake”. My specific autism is not privacy, digital security, none of that. So I will be honest about my gaps. But from my little corner what this is about is geopolitics - specifically a potential war with China. If you zoom out to the macro level first understand the reason China setup the Great Firewall. Why countries like Iran cut the internet whenever there are protests. These are, first and foremost, defensive measures against foreign influence. America is subject to these same outside forces. The difference is that our free and open society makes things like "a Great Firewall" simply unpalatable to the American people. And rightly so. But it is also becoming increasingly evident that these malign actors are using our own values against us.
Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
There is hardly a nation on the earth that is not involved in some way in the American discourse - each pushing and pulling to their own aims and individual agenda. Historically there was a sort of Nash equilibrium with Americans caught somewhere in the center. But as the loudest voices, or rather the most well funded, begin to dominate the discussion via social media and covert funding, we are seeing it become increasingly problematic for American democracy. That is why you are starting to see this consensus over 'verification' and 'identification' begin to coalesce. The government, both left and right of center, has begun to realize the long term ramifications of these actors.
So how do you solve that inherent tension between our intrinsic right to free-speech and those who would abuse it to cause us actual harm? An independent, 3rd party verifier with limited scope makes sense - but would that solve the greater geopolitical implications? In truth I've long expected social media like Reddit, Facebook, et al. to formulate a body of their own like the MPAA. But likewise I don't think there is a clear answer here. Do you trust the Tech Oligarchs with this power over the Government itself? This is core to the problem. How do you 'censor' the internet without really 'censoring' Americans? I think this is part of what the last administration was trying to do with the failed "Disinformation Governance Board". And that failure is what has led us to where we are now.
The original twitter thread is right to say this isn't a left-versus-right issue. This is undeniably a censorship mechanism designed to exclude a set of voices from the internet as we know it today. As with the patriot act, they choose to wrap the bitter pill in a bacon-flavored rhetoric of safety and protecting the youth from perverts and degenerates. But what has failed to be acknowledged is the intrinsic cost of having an open society in a world where that openness has become an attack surface. Make no mistake: the goal is censorship. But the solution space to what you call 'the nominal problem' is less trivial than I think you believe.
I’m in the UK and we recently got the Online Safety Act. We failed, this legislation is very popular with voters and not getting rolled back. Those that dislike it use a VPN and aren’t interested in fighting. I’d say most of the public here is exhausted with cost of living and internet freedom just isn’t relevant to their voting habits.
I grew up around a lot of the hacker ethos, open internet, Information Wants To Be Free etc… feels like a part of my identity is being stripped away by my government.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Dumb, but immediate: links to the sites of the right legislators!
Adam B. Schiff
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
Alex Padilla
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
I've contacted my congressmen, and I would also advocate telling/explaining this to non-technical people you know. They either won't have heard of this or won't know what's bad about it.
Let them pry ID from our cold dead hands. If a site requires ID, it doesn't get my business.
Example, Discord wanted my ID to enable certain features, I declined, I now can't use those features, fine by me. If they started asking for ID anyway, I'd say no and see what happens, even if that means they lock me out entirely. There's no universe where they get my ID.
To anyone reading this, please take the extra step beyond striking down age-verification laws, and start taking measures to prove to Congress that it's not needed.
Your next-door neighbor whose misbehaving child is permanently on their phone? Help them out.
Your friend that joked about sending death threats to someone? Scold and report him.
That girl endlessly scrolling Instagram? Get her help.
Please take a step back and examine how insane the internet is and how it's affecting our everyday lives. Political violence and mental illness are increasing, and the internet is solely to blame for this.
"If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary."
Federalist 51
We're all too familiar with the latter part of that quote, but we're completely oblivious to the former. At this point, we've all but proven that the government needs to step in and regulate internet access. And unfortunately for us, they're going to do it in the most dystopian, authoritarian way possible.
I want to be on the side of freedom and strike this bill down. But when it is struck down, everyone is going to cheer, go on their merry way, and continue to let demoralization, radicalization, and mental illness infect the psyche of the everyday human being, and do nothing about it. And then the cycle will repeat itself.
At this point, I actually hope this bill passes. Not because I want it to, but because maybe then everyone will stop using the internet for everything, and some sanity will return.
I have long thought that all content (local and remote) should be properly labeled with metadata. Just like the cans of soup in the supermarket, you don't have to open it to find out if it has peanuts, lactose, or MSG in it; you should be able to filter data before accessing it.
You could define a set of five or six categories (nudity, sex, drugs, violence, etc.) and have a scale from 1 to 10 for each. Each content producer would rate each category according to defined criteria.
Then each user, or their parent, can set what their own acceptable level is. If you set your violence level at 4 then nothing level 5 or higher will load.
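The scheme described above can be sketched in a few lines. This is a minimal illustration, not an existing standard: the header format ("nudity=2; violence=5") and the category names are assumptions invented here purely to show how client-side filtering on declared ratings would work.

```python
def parse_rating(header_value):
    """Parse a hypothetical rating header like "violence=5; drugs=1" into a dict."""
    ratings = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        category, _, level = part.partition("=")
        ratings[category.strip()] = int(level)
    return ratings


def allowed(ratings, limits):
    """Load content only if every declared level is at or below the
    user's (or parent's) configured maximum for that category.
    Categories with no configured limit default to 0 (strictest)."""
    return all(level <= limits.get(category, 0)
               for category, level in ratings.items())


# A parent sets violence to 4: per the rule above, level 5+ won't load.
limits = {"nudity": 0, "violence": 4, "drugs": 2}
page = parse_rating("violence=5; drugs=1")
print(allowed(page, limits))  # False: violence=5 exceeds the limit of 4
```

Note the defaulting choice: unrated or unknown categories are treated as level 0, i.e. not kid-safe by default, which matches the "assume content is not kid-safe unless declared otherwise" stance voiced elsewhere in this thread.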
Unfortunately, their most prominent call to action doesn't seem to address the various state-specific and non-US legislation (focusing on KOSA instead). Here it is:
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the, call it one in 1 million… the speech of the (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
It's not about intelligence. Otherwise, a whole lot of people over the age of majority wouldn't pass either.
There's also no old-age cutoff for when people's mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be thought police. So maybe-
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight-up bans. (Educated ourselves, or politicians legislate based on how bad the headlines are, or it was so egregious it genuinely ticked them off… …)
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
On weekends they can stock shelves, deliver pizza, deliver newspapers, wash dishes, babysit, feed animals, or do other typical jobs for children in the 12-to-16 age range.
Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years? The direction we give kids coming up always seems to lag behind reality by 10 or 20 years. Perhaps we shouldn't stand in the way of the new generation figuring things out for themselves in this brave new world. The old playbooks to a solid middle class life are increasingly outdated.
> Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years?
Also so they don't end up stupid and useless like a potted plant. People with too little education are easy to manipulate and dim. They're perfect fodder for the propaganda machines.
It would be nice if we could just let kids loose like wild animals and they'd, somehow, figure everything out. But no, we actually have to try. Otherwise they end up illiterate and eating so much candy they throw up. Because they're kids.
None of your concerns are relevant. We're not talking about 6 year olds here but presumably 12-16 year olds. And the issue isn't whether they drop out of school, but whether school must be their sole focus.
I don't think it truly is, but I do think that the younger generations think it is.
My nieces and nephews really don't know what they are going to do in their futures because so much is uncertain right now.
If it feels like a longshot to expect normal 9-5 office jobs to be around in 5 years, and it's also a longshot being an influencer, then why not go for the influencer thing?
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that; I'm not sure why we wouldn't do the same, or some variation of that, for other massively addictive products that we know, as a matter of scientific study, have a very bad impact on some number of kids.
Big social media companies are likely overjoyed to be able to get discrete, government issued info of a person's full legal name, date of birth, residential address (as is printed on US drivers licenses) for advertising and demographic profile targeting purposes. And then be able to correlate it with their existing social media history/clicks/profile, browser fingerprinting, IP address, daily usage patterns, geolocation. It's a massive gift to them.
I doubt they need that to identify you. There are also lots of other problems like algorithmic manipulation. But also just stop using these junky websites. Everyone always complains about Meta doing this, TikTok doing that, and it's like if all they do is make you mad, stop being their user/customers?
It will spread to everywhere else if we allow it for social media. In Australia for example, mandatory age verification has already spread to video games.
That's the cynical view, yes, but we can see educational standards and performance going down in the United States, and we have seen plenty of scientific and medical studies showing problems with children, and more specifically teenagers, using social media. I'm not one to want to limit someone's rights, but it seems like the trade-off here is in favor of requiring age verification, at least for social media companies.
Separately I still don't fully agree with concerns raised regarding social media and identification for everyone. Bots, people who are online just stirring up trouble, &c. are causing pretty significant challenges and problems for society. If you spew a bunch of racist stuff for example I think people deserve to know who you are.
And you know we do this all the time. Folks want gun registries and things like that (and I agree, as a matter of practice, but not principle), so I'm not sure why we're OK with that form of requiring identification to exercise your rights and against this one, other than political priorities.
We need a truly distributed point-to-point internet ASAP. Politicians are going to do everything they can to limit free speech and free ideas in the name of protecting children, while they already have all the powers they need to investigate and stop child abuse.
Did you intend to link to Meshtastic as an example of how not to achieve your goals? Because it definitely isn't capable of scaling up to anything like the whole internet, and the project struggles to agree on any goals they want to reliably achieve.
There are so many caveats and limitations that bringing it up in this context is downright dishonest. The most you could fairly say is that some of the philosophy driving some of the meshtastic developers is what you want to see applied to the development of an internet-scale network (which in reality would have less technology in common with meshtastic than with the current internet).
Nothing against Twitter, but I just don't feel like logging in, so that site makes it way easier to read this. Also it doesn't take like 900TiB of RAM to render.
Really the hill to die on is that the first amendment should preclude any content-based restrictions for anyone. If you believe children shouldn't be exposed to certain materials that's between you and your kids, and should not involve the government whatsoever
Honestly, not even in favor of legislating any kind of increased device-side control or age gating. I understand the "this should be up to the parents" angle but I'd push it further: modern tech already allows parents too much control over their children. Freaky helicopter parents are already perfectly enabled to spy on their kids location, device usage, inspect and monitor their conversations, and it's already normalized to an insane degree. Absolutely no reason to make it an out of the box experience to tempt otherwise sane parents to go mad with that kind of abusive power.
I've heard that we could use zero-knowledge ID proofs to show someone is of age without revealing any more but I don't think that's the plan and the demand for age restrictions doesn't feel like a grassroots effort of concerned parents. It feels like an NGO/bureaucrat driven law and I assume its purpose is to de-anonymize people on the internet.
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half arsed way. I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
A photo identifies you. This is the digital equivalent of having a photo taken of you upon entering the mag store, stored digitally forever, shared with government, and tied to every magazine you read and purchase.
> I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification. It's not explicitly part of the proposed law but there are only a handful of companies who meet the qualifications to provide this service (id.me, Persona) and this is how they do it.
I believe if you are a "minor" then you can go the post-a-selfie route.
If someone wanted to be a martyr and just uploaded all their personal documents so they could be accessed by everyone, I wonder if an interesting court case might follow.
I could imagine it ending with a court ruling that people are responsible for protecting their own personal documents, which... yeah, that would muddy the waters in a world where every website expects to see your ID.
The verification apps are starting to require live video selfies to verify that the person doing the verifying is the same face as the person on the scanned ID credential.
> In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification.
That's not just the plan - that's what's already legally required in many US states.
These laws were introduced by the explicitly religious right-wing groups like Exodus Cry and Morality in Media, as ways to de facto outlaw pornography (in their own words). They've since been laundered into the mainstream so the general public is unaware of the root cause.
Whether it can be done this way is beside the point. It is about how regimes like ours in the US that have demonstrated an interest in spying on their subjects choose to regulate this over time.
In the age of AI I think it’s both necessary and inevitable to implement some kind of internet ID system to stop the massive onslaught of AI-generated fraud, malicious hacking, and spam. If age verification is a Trojan horse to erase online anonymity, so be it; I see that as a worthy goal.
Humans are inherently social, and social networks are based on trust. Trust is primarily a function of reputation, peer pressure, and legal consequences. Reputation requires tying behavior to a stable identity. Peer pressure only works when you’re not anonymous. For there to be legal consequences for bad behavior, we must identify bad actors. I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
Also I don’t think that the “pro privacy” activists really understand the scale and severity of harm being done to children through the internet. I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
We will see how your opinion changes when someone steals your ID and voice and you end up being defrauded due to the government choosing the cheapest Indian shop to mishandle your data.
> Trust is primarily a function of reputation, peer pressure, and legal consequences.
The trust is somewhat of a one-way street. We are supposed to trust the entities in power. If we break their trust, there are consequences. If said entities break our trust, we can do little about it.
> I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
For some, perhaps. However, I also would rather protect people from a potentially grim future. What is permissible and acceptable now may not always be the case in the future. The Holocaust, for example, ended only 81 years ago. The notion of another one, even against different groups, seems completely infeasible now -- just as the first one did before it happened.
> I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
Tone is hard to read in text, but are you being facetious? If not, you are essentially saying that you would support shutting down the Internet to protect even just one child. Yet, despite these real and active harms that already exist, you will continue to use and profit off the Internet in the meantime?
While we've been agonizing over Age Verification (real or planned), Greece has apparently introduced a ban on anonymity on social media. I'm not liking where the world is headed, but I have no idea how to push back against it.
We simply don't need online age verification. It's not the state or private business' job to parent children. It's their parents job.
This is not only unnecessary, but will with 100% certainty lead to negative downstream effects, either via leaks or via the state being able to find people for things that aren't crimes once they're adults.
There's simply no good reason for it that outweighs the bad. What it really boils down to is that it's completely unnecessary.
Good: some commenters here realize it's an attack on privacy
Bad: some still entertain the idea that we should do age verification using some sort of crypto primitives
There is no reason for age verification at all.
I am from the goatse generation. Rotten.com. steakandcheese. Horrific stuff tbh, I mostly stayed away from it, and I didn't need a helicopter government to protect me from it.
The moment you accept the narrative that kids need to be protected from the Internet you have already lost.
You've already condemned those kids to a life of slavery. So much for protecting them.
What we need is not online verification, but a competent government that does its existing job well.
Who's been arrested over the Epstein files? Who is protecting those kids?
No one.
That same government wants to "protect" your kids by KYCing everyone.
Nah, that already didn't work because corps are very good at creating network effects in children and will set up multi-billion-dollar businesses around them. And then the kids with protective parents become the weird ones in school. I'll die on the hill of curtailing this stuff in a privacy-preserving way.
> I'll die on the hill of curtailing this stuff in a privacy-preserving way.
At some point you'll realize the contradiction in not trusting these "multi-billion-dollar businesses" to the point that you are risking enslaving humanity and "dying on this hill" and yet at the same time trusting those same businesses to implement this dystopian system in a privacy-preserving way.
When that realization hits, it will be a loud sound, possibly heard by nearby telepaths.
Over a decade ago, on the website of a cable news network named after vermin, you could watch an uncensored video of terrorists setting someone on fire.
Right? I especially don't understand where some of the "think of the children" attitude on porn sites comes from, as they for the most part already ask for your age, and if you didn't get some kind of amusement out of seeing tits as a teenager, you're a liar.
It's a function of our society becoming more puritan and conservative in the past 10-15 years. This has been a slow burn.
We are back to perceiving viewing boobies as an existential threat to people. Currently, sexuality is being demonized all around, and sexual morality is once again becoming a currency in society.
I encourage people to talk to some Gen Z kids. They're much more puritan than millennials. They're focused on virginity and the moral superiority of monogamy. It's bizarre.
I spend most of my social media time on tumblr, and it's really funny to see the whiplash of attitudes between the older and younger gen z. The younger ones tend to be the puritans and the older ones are all polyamorous bisexual furries who want to have sex with robots (obviously exaggerating but not by much)
There are lots of ways to implement identity verification while preserving privacy. It's actually a super interesting engineering problem. Estonia has an excellent model to build on. The government can maintain a "traditional" ID system based on documents and in-person verification, and provide you with a device similar to a yubi-key or Bitcoin hardware wallet that could be used to share specific, cryptographically verifiable claims with third parties, like your age, or even just a boolean "over 18", but also your name or other information if you choose, with a way to control the access and audit which parties have verified which claims with the govt.
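A toy sketch of the selective-disclosure flow described above: the issuer signs only the claims the holder chooses to share (e.g. a single "over 18" boolean, no name or birthdate), and the relying party verifies the signature. A real Estonia-style system would use asymmetric signatures held on a hardware token; the HMAC with a shared key below is a stand-in purely to keep the sketch stdlib-only, and all names are illustrative.

```python
import hashlib
import hmac
import json

GOV_KEY = b"government-signing-key"  # stand-in for the issuer's key material


def issue_claim(claims):
    """Issuer signs only the claims the holder chose to disclose."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(GOV_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify_claim(token):
    """Relying party checks the signature; it sees only what was shared."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(GOV_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])


# The holder discloses a single boolean, nothing else.
token = issue_claim({"over_18": True})
print(verify_claim(token))          # True

token["claims"]["over_18"] = False  # tampering breaks the signature
print(verify_claim(token))          # False
```

The access-control and audit-log pieces the comment mentions would sit on top of this: the issuer records which relying party verified which claim, which is exactly the property you'd want a real deployment to make transparent to the holder.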
In Poland, online banks do that. You can verify your identity for government purposes through your online bank. No need for the government to set up a scheme to confirm millions of people in person.
It didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity, they could sit as a shim between token generation and application use: governments produce identity tokens, applications verify them through the shim, and the shim keeps each side from knowing of the other.
Since it is so harmful to let children use social media, why aren't parents being put in prison for abuse and neglect when they let their children use social media? Why should everyone else have to suffer when it's parents that should be punished?
Because this is a golden opportunity to erode privacy rights under a complete guise of "protecting the children." Same goes for "preventing terrorism" and various other attempts to appeal to authority.
I want that. I'm tired of bots being half the internet traffic or more. It's driving the general public insane and anonymity on the internet has zero utility. If journalists need to send sensitive information, they'll always be able to use Tor.
I think there's plenty of utility. People can express opinions that they hold honestly but would fear social retribution for if it could be tied back to them publicly. For example, any political opinion that I hold that's modestly center or right of center I would not appreciate being attached to my name online since people are completely incapable of nuance or compartmentalization.
If you wouldn’t make a political statement in a town hall setting where you’re going to show ID, then you probably shouldn’t say it on the internet.
But keep in mind that these laws don’t result in your identity being public. They will ultimately result in the sites you’re posting on knowing that you’re an enumerated individual. The ultimate benefit as I see it is removing outsized leverage over public opinion by botting likes on your statement or otherwise operating tons of accounts. It should also eliminate threats of violence from the digital public square, since building a prosecution pipeline against those would be easy to do. Same with child grooming, but I’ll acknowledge there’s a way to make that argument in a glib way, as an excuse to realize some of the other goals. It is a real problem though.
After reading these comments, I don't want to hear any of you suggest that kids shouldn't be allowed to have unrestricted access to smartphones or social media ever again.
Age verification requires identity verification once — but it doesn't require revealing your identity ever to a third party. With FHE (fully homomorphic encryption), identity data is encrypted on your device and never leaves it in plaintext. Not to the merchant, not to us as the verification service — nobody. We only compute on encrypted data and return a yes/no. I'm building this at identified.app
Kids will always find ways around regulation. Look at cigarettes, vapes, alcohol, weed; they will just get it from their dealers. Pornography? I expect something like: download a torrent, get it from a classmate, share hard drives in school, get it through an older brother.
And porn companies should always be held responsible for not doing their due diligence and freely distributing porn to minors, which is already illegal in the US and most places.
I'm not suggesting that, actually. I look at my nephews and see them buy cigarettes, vapes, etc. from small dealers instead of stores. Not saying we should just let them smoke, just expecting that they will be able to circumvent online age restrictions as well.
My question is: are digital age verifications the best way to protect kids from the harmful effects of pornography? And my worry is: what unwanted side effects will age verification have for our society as a whole?
I agree, doxxing yourself to some shady gray-market adjacent data broker is not acceptable as age verification, and age verification was safer using the honor system as before. But for some communities, especially social media communities, some kind of verification is better than none, otherwise what's to stop them from being overwhelmed with alt accounts that are used simply for harassment or other targeted objectives?
People should not be able to misrepresent themselves on the internet, it may have been safe in low volumes but it is scary now and will be outright dangerous as a modality in the hands of AI agents. If you think teen mental health is bad now, wait until social media campaign capabilities previously only available to nation states fall into the hands of ordinary school bullies.
Maybe age verification isn't the way to mitigate this obvious risk, but there has to be something that can be done to stop rampant sockpuppeting.
"Age verification is the Trojan horse. And once it is inside the gates, the surveillance state becomes operational."
Braindead meme. "Age verification" is not a "Trojan Horse". No one, regardless of age, _wants_ to use age verification. They are being effectively _forced_ to ask for it or use it. Age verification (identity verification) is a tradeoff. A "Trojan Horse" is something that people actually want, not an obvious tradeoff, a sacrifice, a compromise. No one is being "fooled" into complying with identity verification in the form of age verification
The surveillance state is already operational. If you use "platforms" then you are already inside the gates with the enemy. The surveillance apparatus is operated by so-called "tech" companies that perform data collection, surveillance and online ad services as a "business model". These companies provide access to and information about internet users to advertisers and law enforcement
If "age verification" dissuades some people from accessing "platforms" (servers) run by so-called "tech" companies, then that is a loss for the companies and a privacy gain for those people. The "hill to die on" is not using "platforms"
These companies are the reason that "age verification" is proceeding. They push the allegedly harmful content because it makes money for them. Further, the companies' "platforms" make "age verification" possible. This is because they intermediate transmissions between internet users through these so-called "platforms". Governments need not comply with laws that protect individuals from government surveillance when they can target "platforms" instead
It is disturbing that anyone would want to "die on a hill" to save "platforms" from "age verification". These third parties are surveillance companies. They built the surveillance state. They already know who you are, they do not need government-issued ID
If the people spreading this "Trojan Horse" meme cared about surveillance, including identity verification, then they would not be defending "platforms" from regulation, they would stop using the "platforms"
I would say be careful what you choose to believe. Online identity verification is the only way to end the war that’s being waged on the American people by foreign states via social media. If I were a bad actor, I would very much want to convince the public that this is a bad idea.
This seems hyperbolic, as it's actually a long path from age verification to full digital identity tracking. But I agree that pushing the burden of verification to websites is ridiculous. Like the GDPR requirements where every webpage has an annoying consent modal, the verification and preferences should be controlled on the device you use to access these digital services. My browser should know and enforce my cookie preferences in a way that has a uniform user experience. Likewise, if I am a minor, my parent should provide me with a device (or profile on a device) which knows my age and can use that to inform online services of the age of the user, rather than needing to go through a separate process for each service.
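The device-side model proposed above could be sketched as a server simply honoring a signal the device attaches to every request. To be clear, the "X-Age-Bracket" header below is hypothetical -- no such standard exists today -- but it shows the shape of the idea: the parent configures the profile once, and no per-site verification flow is needed.

```python
def handle_request(headers, content_rating="all-ages"):
    """Toy server handler honoring a device-declared age signal.

    The hypothetical "X-Age-Bracket" header would be set by the
    device/profile the parent configured. Clients that send no
    signal are treated as minors (not kid-safe by default)."""
    bracket = headers.get("X-Age-Bracket", "unknown")
    if content_rating == "adult" and bracket in ("minor", "unknown"):
        return 403  # refuse adult content to minors or unsignaled clients
    return 200


print(handle_request({"X-Age-Bracket": "minor"}, "adult"))  # 403
print(handle_request({"X-Age-Bracket": "adult"}, "adult"))  # 200
print(handle_request({}, "all-ages"))                       # 200
```

The obvious open question with this design is trust: the server has no way to know the header is honest, which is why the comment puts enforcement on the parent-controlled device rather than on the site.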
Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.
Much like Brexit or, for that matter, the Trump reelection, I just don’t have much faith in the wisdom of the democratic collective consensus anymore, and I don’t think it’ll get any better in an AI misinformation echo-chamber world. Onwards into dystopia.
Contacting my representatives is about as effective as making a silent wish. Whenever I've done it, I'll either get no response, or a boilerplate reply which basically says "I'm doing this, go fuck yourself". Then I'll be added to their spam list. The truth is that my reps don't represent me and they're going to do what they want regardless. After all, I'm not the one backing the truck of money up to their front door.
Took forever to get a response and likely achieved little, but to their credit the response wasn't entirely canned and did at least give the impression that they understood what I was saying.
Hopefully this will give yet another push towards decentralized, open-source services. Platforms where no one and everyone is responsible and the state does not get to decide the rules.
I don't think most people have been inconvenienced enough yet. ID verification is invasive enough and should cause enough friction to push another bunch over the edge.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there is. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make it as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through our impatience and self-importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
For a forum that supposedly consists of hackers and tech-savvy people, this number of comments supporting age verification is concerning.
The author has said a lot about what kind of future awaits with mass surveillance and AI, but I believe it's not enough. Technofascism is not that far away.
Reminder: age verification laws are not being passed to protect anyone but social media companies. But in addition, they will be used for a massive surveillance state. This is the DMCA of the 2020s, but far worse.
Fear is usually the realm of governments. Modern republics are basically legitimized around the fears of something terrible happening: it can be communism, narcotics, the ozone hole, coronavirus, terrorists, immigration, globalization, unrecycled waste or the greenhouse effect.
Private entities being frontrunners in AI fear either means that these companies have too much unchecked power or that they are covert instruments of governments.
I'm not a fan of online age verification, but this is completely absurd:
> Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...
No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into massive first-amendment constitutional hurdles. It would be struck down easily.
There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.
We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
We've spent the past three decades trying to invent ways to deduce identity and build profiles of what would otherwise be anonymous users. When the government steps in and compels people to formally identify themselves by their government names, what would you expect these companies to do? They're not gonna say "no thanks."
Why the heck would the government compel people to formally identify themselves to read or comment on a newspaper or a blog? That's absurd and unconstitutional in the US.
You're starting from an assumption that is invalid to begin with.
I don't know. Why would the government compel someone to formally identify himself to put cash in a box at the bank? Why would the government compel people to take off their shoes to get on a plane? Or submit biometric data to drive a car? KYC for a phone line...
It's not invalid. I have no reason to believe that this isn't going to creep.
ironically i think we need more social and stronger local social networks that have high identity validation and are "safe" spaces for the plebs. so that the perceived "threat level" from the free internet gets lower. basically hide the real internet a bit behind a small rock.
it's a slippery slope but it might be the better strategy, unless some democratic societies manage to put more modern "freedom guarantees" into their constitutions.
Enjoy dying on that hill then because without mandatory ID for potentially harmful services like social media, we will continue to descend further into the brainrot that many of you suffer from today.
Brainrot isn’t wrongthink. Brainrot is brinksmanship and zero sum discourse. As a member of the public it’s virtually impossible to know where the real consensus is on any issue today due to wishful thinking backed up by gigantic botnets. Brainrot will make people certain that they’re part of some majority consensus to the point that they will fight legislation like this because being provably part of a fringe line of thinking would cause them psychological pain. Right now, everyone (including the “moon mission was fake” fringe) thinks they’re part of a majority consensus. Even sovereign citizens and flat earthers believe they’re in a much larger cohort than they really are. A lot of these ideas are harming people offline in addition to degrading their personal mental health.
I’m betting that bot activity plummets once accounts are tied to real identities. That’s a discourse benefit. I’m also betting that discussion will become a lot more rational once people have to put their names on what they are saying. Death threats also become more easily prosecutable.
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take it to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
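The bond idea above can be sketched in a few lines. This is a hypothetical toy model, deliberately not zero-knowledge: a real scheme would hide the token-holder link cryptographically, whereas here the issuer sees everything. It only illustrates the "burnable token" mechanic, where whoever holds the token's secret can destroy it, forcing the original holder back to a physical reset:

```python
import hashlib
import os

class IdProvider:
    """Toy stand-in for the state ID issuer (hypothetical)."""
    def __init__(self):
        self._tokens = set()   # commitments of issued tokens
        self._revoked = set()  # commitments of burned tokens

    def issue(self):
        # The secret is the "burn" capability; only its hash is stored.
        secret = os.urandom(16).hex()
        commitment = hashlib.sha256(secret.encode()).hexdigest()
        self._tokens.add(commitment)
        return {"commitment": commitment, "secret": secret}

    def is_valid(self, commitment):
        return commitment in self._tokens and commitment not in self._revoked

    def burn(self, secret):
        """Anyone holding the secret can invalidate the token,
        sending the original holder back to a physical reset."""
        commitment = hashlib.sha256(secret.encode()).hexdigest()
        if commitment in self._tokens:
            self._revoked.add(commitment)
            return True
        return False

provider = IdProvider()
token = provider.issue()                       # Alice obtains a token
assert provider.is_valid(token["commitment"])  # usable as-is

# Alice sells the token to Bob; Bob, being a troll, burns it instead.
provider.burn(token["secret"])
assert not provider.is_valid(token["commitment"])
```

The point of the design is the incentive, not the cryptography: selling the token means handing a stranger the power to cost you a trip to a physical office.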
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
It's not necessary to give it to every website. Verification to the website can be a true/false from the OS. In fact that's how it already works now.
I would say it's not really an ID no, which is the point. The post is claiming that a digital ID is necessary for age verification, but clearly it isn't.
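For what it's worth, that "true/false from the OS" flow is easy to sketch. The following is a hypothetical Python illustration, not any platform's actual API; a real system would use asymmetric device-attestation keys rather than the shared HMAC key used here for brevity. The site receives only a signed boolean plus its own nonce, no name, birthdate, or card number:

```python
import hashlib
import hmac
import json
import os
import secrets

# Held by the OS vendor; in reality sites would verify against a public key.
PLATFORM_KEY = os.urandom(32)

def os_age_attestation(over_18: bool, site_nonce: str) -> dict:
    """What the OS hands to a website: a yes/no bit plus a signature."""
    payload = json.dumps({"over_18": over_18, "nonce": site_nonce},
                         sort_keys=True)
    sig = hmac.new(PLATFORM_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def site_verify(att: dict, expected_nonce: str) -> bool:
    """The site checks the signature and its nonce, learning only the bit."""
    expected = hmac.new(PLATFORM_KEY, att["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["sig"]):
        return False
    data = json.loads(att["payload"])
    return data["nonce"] == expected_nonce and data["over_18"]

nonce = secrets.token_hex(8)        # site-chosen, prevents replay
attestation = os_age_attestation(True, nonce)
assert site_verify(attestation, nonce)
```

The nonce matters: without it, one captured attestation could be replayed to any site, which is exactly the kind of leakage this flow is supposed to avoid.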
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States have broad power to do what is being feared in the thread and haven't done it already, and to think that they're waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think incorporating an identity layer into the internet itself should happen with the right protections for users and should have happened at the beginning of the net and is probably a result of lack of foresight by the creators of ARPANET.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection or go listen to sounds of 120dB if you want to test your hypothesis that light and sound can't be unhealthy.
Or maybe you aren't being literal and are just saying that what children see and hear has no influence on their development. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing, I believe, two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide or you dying from spousal violence or your protracted grief and wasting away because your child accidentally killed themselves. Like to say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful, is frankly messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
There is a sudden, concerted international push for online age verification, and we do not know where this push originates. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
If it was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse and harm to everyone, so that there wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile ons, snuff videos, the spectrum of grift, culture of abusing the "other side")
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, it's about large media companies not wanting to take responsibility for their own shit.
meta desperately want kids to sign up. There are no penalties for them pushing shit on them. If an FCC registered corp had done half the shit facebook did, they'd have been kicked off air and restructured.
So frankly it's too fucking late. Meta, google and tiktok will still find ways to push low quality rage bait to all of us, and divide us all for advertising revenue.
Alternative take: The fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation of the US electorate and ultimately impacted the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
Disagreed. I'm against invasive age verification methods, but allowing inaccurate expectations to proliferate often creates a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as it prevents betrayed expectations while still showcasing the unacceptable downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
Because it's very easy for the creeps already thinking of your children to paint those rejecting this type of law as people who want to see children hurt.
Regardless how stupid this argument is, rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
> Pretty much every person in every culture cares deeply about their children.
I would substitute "superficially" for "deeply". Like, if my parents had found some way to prohibit porn when I was an adolescent, I wouldn't say they cared deeply about me. I would say they were misguided and authoritarian. The "care deeply" idea you are putting forward is just trying to distil whatever the societal norm currently is into the young.
Because adults remain children. As in, their parents' kids and therefore property. [edit: I should mention also property of the state beyond that] It's less explicit in the US I guess, but in some places that's very blunt - if you don't support your parents enough you can be sued for abuse. And there are situations where an adult in the US has been declared too irresponsible and forced into conversion camps by parents. It's insane, yes, and if you're lucky enough this might be entirely invisible to you. But if you're gay or trans or autistic and get a bit unlucky this can become a very harsh reality.
Protect the children refers to a type of property, not a type of human.
I agree. I don't call it "age verification" though - it is age sniffing. And it has nothing to do with children - that is the lie.
What is fascinating is to see how governments ALL fall for it. There is zero resistance. This is fascinating to me. It shows how little real effort is necessary once you have the lobbyists in place. Kind of scary to witness too.
It is an apartheid system. All apartheid slavery systems will eventually die, so age sniffing will die too. But it will most likely be a long fight as more and more money will be invested by crazy corporations such as Palantir and others.
The whole "debate" is already not logical, by the way. Let's for a moment assume "but but but the kids!" is a real argument rather than the strawman it is. OK, so I am a "concerned parent", for the sake of discussion. I have three young kids. I am not a tech nerd. The kids see "unfitting content" on the antisocial media such as facebook and what not. So, what do I do? Well, they have a smartphone? Aha, so I am not so concerned? Having no smartphone is no option? OK, so I say they can have a smartphone, but they may not use antisocial media.

First: in any free society, is it acceptable that this kind of censorship is done on ALL kids? What if I, as a parent, do not agree with this? Well, tough luck - the laws suddenly force you into the age sniffing routine. But even for those parents who want the state to act as totalitarian: why would I want to hand over control to ANY politician for that matter? That makes no sense to me. I am aware that some parents may think differently, but do all parents think like that, even IF they buy into the "we protect the children" lie? I don't want ANY information from ANY of my computers to go into private hands here. So the whole argument makes zero sense from the get go.
Of course those who know how things work, they know that this is the build up towards identifying everyone on the world wide web at all times AND to make access to information conditional, e. g. if the state does not know you, you can not access information. Aka a passport system for the www. Built right into the operating system too. Windows already complied. MacOSX too. The battle for Linux will be interesting; it may be some hybrid situation, like systemd. And the systemd distributions will all succumb to age sniffing, courtesy of Poettering "this is really harmless if we store your age in the database, just trust me".
>And it has nothing to do with children - that is the lie.
You're not qualified to say that because you aren't a proponent of age verification. That's just imputing motives.
As a proponent of age verification, I can tell you it's absolutely about protecting kids from damaging services like porn. It's a common sense control, and that's why it has bipartisan support in the US during a time when there is nearly zero bipartisan support for anything.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to the guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop skip and leap from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
> The best way to not be in a digital cage is to opt out of the current digital products.
Bullshit. These are all-encompassing monopolies and government services. More likely, they'll ban you and you'll end up having to go to court out of desperation to demand that they service you.
This is very limited thinking. If you lacked this sort of imagination 20 years ago, you wouldn't have been able to predict today.
> Frankly I would welcome a world in which kids are not using Instagram or TikTok.
This is the sort of passive reactionary nonsense that causes the danger that we're in. Everything isn't something to give up lightly, even if you think that it will force your neighbor to turn his music down, or get rid of bad reality television. I don't like kids on social media either. I don't like adults on it. I think kids are suffering more from surveillance than from TikTok.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite awhile ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
Why? Kids have had access to the internet for over 30 years. What is the tiktok brainwashing (I don't use it), and how do you qualify the danger of it from say google news brainwashing, or even (gasp) public school brainwashing? I mean, if we're going to group ban information, at least let people in the local communities make those decisions. Otherwise, we're going to get the Epstein class making these decisions.
It's mind boggling how far Stallman saw into the future. Saddest part is we're losing this war. They're going to destroy freedom of computation, freedom of information, and it turns out that... Nobody cares. Nobody but a bunch of nerds.
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to be said on the matter if that's as far as it went, but it isn't.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word 'online' from the above paragraph, because online is the real world. Touch grass, sure, but there's no way online isn't real. Are these words not real simply because I telegraphed them to you?
And not distributing porn to children is a porn company's responsibility.
You are repeating a very common talking point, but it's not a good one.
Age verification laws make it possible to hold services providers liable for breaking the law (it's already illegal to distribute porn to minors in many places, like the US).
It's both true and completely irrelevant that parents should do a better job protecting their children from harmful services online.
> For a start, child are parents responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes, it's the parents' responsibility. Do you think a middle-class single mother has the resources to keep her kids entertained and off social media for the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
Responsible parties like porn companies that distribute porn to minors? Parents are still accountable with age verification laws.
If parents suck at parenting, they will suffer.
If porn companies distribute porn to minors, which is illegal in many places such as the US, they will not suffer. Unless you start holding them accountable.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake, you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand, and towards the perceived faults in the author's communication style. This is a rhetorical sleight of hand and it's highly disingenuous.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously and without the content of your speech being sold to Walmart and the cops. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
That's a discussion that's entirely tangential to age verification. However, I think porn should be illegal entirely as it's just prostitution. As such I think porn companies should not exist, the same as brothels or heroin dealers. If they have to exist for practical reasons along with other objectively harmful things, such as alcohol, marijuana or gambling, then obviously they should be regulated to ensure they're not targeting minors.
That does not detract from the fact that the people arguing for age verification are using "think of the children" in order to push surveillance.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
I'm completely OK with verifying someone's age before distributing age-restricted services to them. That's what an age-restricted service is, and obviously we shouldn't let porn companies distribute porn to minors (it's already illegal in most places). Just don't use porn, Facebook, online gambling, etc. if you don't want to share your identity.
I can see why it's unfortunate, but the idea posited that it's somehow illegal in the US is ridiculous. You have no right to watch porn anonymously at the expense of holding porn companies liable for distributing porn to minors.
Internet 1.0 was largely read only, ephemeral, or decentralized. Chat rooms, IRC, personal webpages, etc. There was anonymity and there were not age restricted services.
Internet 2.0 introduced age-restricted services and the enforcement lagged. The enforcement is now catching up. You can still do all the Internet 1.0 things anonymously, but you can no longer gamble online as a 14 year old, and hopefully soon you won't be able to watch porn either.
Private companies can now link all your online activities to you. Not to an advertisement ID, but directly to you, your loans, your health data, and whatever they're selling on the black market. Every data breach becomes a hundred times worse. It was already almost possible to learn about you directly by buying data; now it's even easier.
The point of this is not to verify age really. It is to verify identity. There's no way to prove someone is some age without presenting a legal ID.
Also, it's not just porn, Facebook, online gambling, etc. Under some bills it is the OS itself. So ALL your activities.
This argument as framed doesn’t make any sense. Porn is (and WAS) Internet 1.0.
There was porn before most everything on the web. Porn is also speech / art.
Anonymous access should be available for any website that wants to share their content on the Internet provided they have the rights to that content.
States that seek to limit that could make a legal argument that they have the right to limit access, but in the end it’s infringing speech. Worse, it’s unenforceable.
And yes, I would make the same arguments for people posting hateful shit or misinformation.
The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult or user-generated or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they are already doing. RTA headers are not perfect, nothing is nor ever will be, but there is absolutely no tracking or leaking of data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine non-participating sites into oblivion.
I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical, and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back to semi-private Tinc open-source VPN meshes for my friends and me.
[1] - https://news.ycombinator.com/item?id=46152074
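To make the "trivial to implement" point concrete: the RTA label is a single static response header. A minimal sketch of a server tagging responses with it might look like the following; the header value is the real RTA label, but the helper function and the list of content paths are illustrative assumptions, not a standard.

```python
# Sketch: a server operator adding the RTA label to responses that may
# serve adult or user-contributed content. The path prefixes and the
# response_headers helper are hypothetical, for illustration only.

RTA_VALUE = "RTA-5042-1996-1400-1577-RTA"

# Hypothetical: paths on this site that may carry user-contributed content.
ADULT_PREFIXES = ("/uploads/", "/forum/", "/gallery/")

def response_headers(path):
    """Build response headers, adding the RTA label where appropriate."""
    headers = {"Content-Type": "text/html"}
    if path.startswith(ADULT_PREFIXES):
        headers["Rating"] = RTA_VALUE
    return headers

print(response_headers("/forum/thread/42"))  # includes the Rating header
print(response_headers("/about"))            # does not
```

In practice the same one-liner can be done in a web server config (e.g. an `add_header` directive) rather than in application code.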
This is exactly the way it should be done. A device with parental controls enabled disables content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.
Well why haven't all the big tech companies done it then?
They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"
And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content. If it was then this kind of solution would be being legislated for. It’s just about making everyone identifiable.
Your not understanding why age verification is happening does not make it a conspiracy. There is an antiregulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be held accountable and that we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay journalists to tell you that it's all about violating privacy or something. (The same folks will tell you that opening up Android to third-party AI tools would be a privacy and security risk, and not ask you to notice it would just cost Google a lot of money.)
We've been running essentially a social experiment on our kids for the past two decades and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and most "privacy services" like disposable email and VPNs are the primary source. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc. which are all the direct result of our failure to regulate social media companies. Section 230 being the primary problem.
OS-level age verification is likely the best route, as private information can remain on a device in your control, and a browser then just needs to attest to websites whether or not the user should be allowed access, without conveying more detail. Obviously anyone with a Linux box will have ways around it, anything based in your own device will be exploitable in some way, but generally effective for the average child.
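The OS-level idea described above can be sketched simply: the private data (a birthdate) stays on the device, and the browser passes along only a boolean attestation. Everything here is a hypothetical illustration of the shape of such an API, not any real platform's implementation.

```python
# Sketch of OS-level age attestation: the birthdate never leaves the
# device; websites receive only a yes/no answer. Class and method names
# are assumptions for illustration.

from datetime import date

class Device:
    def __init__(self, birthdate):
        self._birthdate = birthdate  # stored locally, never transmitted

    def attest_age_at_least(self, years, today):
        """Return a boolean attestation, revealing nothing else."""
        cutoff = date(today.year - years, today.month, today.day)
        return self._birthdate <= cutoff

child = Device(date(2014, 6, 1))
adult = Device(date(1990, 3, 15))
print(child.attest_age_at_least(18, date(2026, 1, 1)))  # False
print(adult.attest_age_at_least(18, date(2026, 1, 1)))  # True
```

A real scheme would also need the attestation to be cryptographically signed so sites can trust it, which is where most of the hard design work lives.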
> These are facts, whether you like them or not.
[Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.
Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/ "Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"
> Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.
This is entirely false scare-tactic nonsense, and you really need to look at where you sourced that idea and stop using them as a reference point. There isn't even a conceivable method of doing this that would make it true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if that gives you some idea of where decision-making is supposed to rest.
Not only do we have woefully bad parental controls; in the name of privacy, modern platforms make it exceptionally hard to implement parental controls at all. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach must exist, and that Internet companies must support it.
(Steam is, FWIW, probably one of the best actors in this regard already, Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam but this is not a place they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney).
> If it was then this kind of solution would be being legislated for.
What's more likely: a global conspiracy to get age verification passed so these unnamed groups can identify everyone for some unknown purpose, or politicians just not understanding tech?
The way people try to pretend that there can't be any organic desire for these proposals is so bizarre and is a major cause for all these proposed solutions being so technically dubious. Refusal to recognize the problem means you won't be part of solving the problem.
Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonymization, spam reduction, etc.
> sticking their fingers
I actually think it was giant wads of cash.
The tech companies are the ones lobbying for age verification.
The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal, it is a pretext. Just like every other "think of the children" argument.
An outstanding idea. Those lobbying for age verification hate it though, because they want to be the arbiters of age, and all that juicy PII that they can analyze and resell.
What PII? They get a boolean "old enough"
Think about how they validate how old you are. Meta and Google, who are lobbying in support of this legislation, will force you to sign up with your real ID, and be the arbiter for questions like "are you old enough for this website". For every request that you make through some third-party website that needs to know your age, Meta and Google will know where you tried to log in, and for which content. They will then resell this data to the highest bidder. Additionally, through all their ad networks and tracking, they will follow your session and have verified ID to match your entire browsing history. This is the end of anonymity and privacy on the Internet.
I'm not so sure. I think the push is from the government actually. But companies are not exactly opposed to it. Quite the contrary. Big corporations see compliance as a moat. Tobacco companies supported stricter regulations on tobacco advertisements, because they had the deep pockets required to follow the changing laws. Mr. Altman is all-in on AI regulation, because it will mire down competitors while OpenAI has already "slipped past the wire" and done all their training pre-crackdown. When given a choice between regulating their industry (platforms and operating systems) vs regulating someone else's (porn sites and the like) they'll always helpfully "volunteer" to be the first to be regulated. It's just good business.
"The government" is the same as those lobbying the government. The people in the government get paid to push it, so they push it, and get paid more when it goes through, by the people who want that PII to analyze.
Or could have a header saying this is not adult-only content, and a parentally-controlled device will block things that don't participate.
That's a good idea. There could be two headers, the existing RTA header that adult sites use today [1] and another static header that explicitly states there shall be no adult content.
[1] - https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R... [THESE ARE ADULT SITES, NSFW]
What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who will give their kids cigarettes. Most parents I know will disagree with their kids doing one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.
That's between parents and their local governments. Yes when I was a kid my mom let me watch whatever and go wherever. The parent in my example ultimately decides what a kid may or may not do which is in alignment with existing laws. If the parent is endangering their kid that is up to them and their government to sort out.
Point being, put the controls entirely into the hands of the device owner. Options can be to default to:
- Block everything by default unless header states otherwise.
- Block only sites that state they are adult.
- Do nothing. Obey the operator. (Controls disabled on child accounts or make them an adult or otherwise unrestricted account on the device).
I think the options are just limited to our imagination.
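The policy options listed above can be sketched as a small client-side decision function. This is an illustration of the idea, not an existing API: the mode names and the hypothetical "Kid-Safe" header are assumptions; only the RTA value itself is real.

```python
# Sketch of device-owner policy modes for a parentally-controlled client.
# Mode names and the "Kid-Safe" header are hypothetical assumptions.

RTA_VALUE = "RTA-5042-1996-1400-1577-RTA"

def should_block(headers, mode):
    """Decide whether the client blocks a page, given response headers.

    "allowlist" - block everything unless marked kid-safe
    "blocklist" - block only sites that declare themselves adult
    "off"       - controls disabled; obey the operator
    """
    declares_adult = headers.get("Rating") == RTA_VALUE
    declares_safe = headers.get("Kid-Safe") == "true"  # hypothetical
    if mode == "off":
        return False
    if mode == "blocklist":
        return declares_adult
    if mode == "allowlist":
        return not declares_safe
    raise ValueError(f"unknown mode: {mode}")
```

The whitelist vs blacklist trade-off discussed later in the thread is just the choice between the "allowlist" and "blocklist" branches here.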
> - Block only sites that state they are adult.
This is the problem. What is an "adult" web site? Websites that show porn? Websites that show gore? Websites that show violence? Websites that show non-porn naked people? Websites that have curse words? Websites that promote cults and alternate religions?
Why is it the site's responsibility to "state" that they are adult, given whatever parameters they dream up? Why is it the government's responsibility to say "This is adult content, but that isn't adult content?" Shouldn't the parent get to decide which categories of content count as "adult"?
That was our struggle with implementing "blocking" tech at a school I worked at. Is a kid looking up how to do a breast self exam porn? What about a self testicular exam.. What about actual Sex Ed kinds of sites?
> I know parents who have no problem with their kids seeing porn.
Surely you mean at least teenagers, and not literally children, right? Consider the prevalence of violence, racial stereotyping, and escalation of fetishism into degeneracy that clearly exists within this medium; what's the line that these parents draw? Are they making sure it's only something vanilla? Or is there no line whatsoever?
They don't care. The kids won't think to ask until they are teens, and they are not showing it until then, but it is technically available.
Then those parents can turn off their browser/client’s age protections. I think that’s actually a decent argument for the solution posed by this thread.
There is such a thing as making the "kid ok" header so rare or "18+" so eager that nobody takes it seriously, so that'd need to be kept in mind.
There are already laws defining this. Had to draw the line somewhere, and they did.
In which legal jurisdiction and culture? Many or most websites have users from many locations.
Is the header a JSON-encoded map from country code to age rating?
The US. If they want to serve users in other countries, or if certain states make their own rules, it's business as usual whether to serve different content there or serve a different header or take the legal risk.
That seems unworkable as a practical matter.
It's the exact same problem that age verification faces. There are different laws in different jurisdictions and operators have to figure out how to comply with the ones that matter to them.
Think of the (current) header as meaning "we would have blocked you if we saw you were under 18" or whatever equivalent and it should make sense.
They already do this, like there's Victoria's Secret's US website vs Qatar.
> I know parents who have no problem with their kids seeing porn.
I don't agree with showing actual children porn, but I also totally expect teenagers to find some way to get access to it in the age of the Internet.
Part of the challenge with this is cultural. Different places in the world think about sex, sexuality, and even the concept of what is a child differently. In the US, showing a woman's bare breasts to a person under 18 is generally considered wrong, and in many cases is illegal. In most of Europe it wouldn't even raise an eyebrow, because bare breasts are on television, sometimes in commercials even.
Set aside for a moment the question of age verification and age limits, we cannot even agree in any sort of universal sense what even qualifies as porn or adult content, and at what age someone should be able to see it. There's a difference between a 7 year old and a 17 year old seeing the same type of content, and there's also a difference between a photographic nude and a video of people engaged in coitus.
The story is basically the same for everything else you listed.
These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do. What the GP suggested using RTA headers though puts the control into the parent's hands, which is as it should be.
We don't need to care what France or China thinks when we make our laws that are about our own citizens. They do the same over there.
> These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do.
Yes there's a chance our rules spill over there naturally, and I don't consider that wrong either.
I considered many of the same points you mentioned.
Though, one area I am still struggling to grasp is the harm that governments are trying to mitigate. If a child were to see inappropriate material, then what harm can truly arise? Also, why do governments need to enact such laws when the onus of protecting children should be on their parents?
I am not trying to start any kind of flame war, but I really cannot see any other basis for all this prohibition that is not somehow traceable back to Western religious beliefs and the societies born and molded from such beliefs.
I can make arguments as to the potential merits of kids having a beer or cigarette, listening to swear words, or witnessing casual violence. I can't make an argument for letting kids see hardcore pornography in any capacity.
Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable standardized signals to filter on which has led to the current nonfunctional mess.
So ideally you want a standardized header that can be used to self classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated with specific categories required to be marked as either present or absent.
So for example HN might be "user generated T, social media T, porn F" or similar with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).
While this would be required by law, I imagine browser vendors might also drop support to load sites that don't send the header in order to coerce global compliance.
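A parser for the kind of self-classification header proposed above (the "user generated T, social media T, porn F" example) is a few lines. The header format itself is hypothetical, so this is a sketch of one plausible encoding, not a spec.

```python
# Sketch: parse a hypothetical self-classification header of the form
# "user generated T, social media T, porn F" into a dict of booleans.

def parse_categories(value):
    """Split comma-separated 'name T/F' pairs; names may contain spaces."""
    result = {}
    for part in value.split(","):
        name, flag = part.strip().rsplit(" ", 1)
        result[name] = flag.upper() == "T"
    return result

print(parse_categories("user generated T, social media T, porn F"))
# {'user generated': True, 'social media': True, 'porn': False}
```

A client filter could then block when any legally mandated category is marked true, while ignoring arbitrary extra categories it doesn't recognize.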
I always love seeing pros and cons of whitelist vs blacklist sorts of strategies in different scenarios.
Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.
Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't run into a lot of documentation about this, including on the very badly designed RTA web site https://www.rtalabel.org/
For anyone curious about the value: the numbering is just a fixed string everybody decided to use, for reasons that aren't clear to me.
I would deeply prefer to do it this way, but my goodness the RTA org needs a serious brush up of their web site and information on how to use this.
> But does this actually still get used by parental controls?
Some parental control applications will look for it, but support is not yet mandated for the majority of user-agents.
All I am suggesting is that we legislate the header to be added to URLs that may contain material not appropriate for small children, and mandate that the majority of user-agents (the ones installed by default on tablets and operating systems) look for said header to trigger optional parental controls. Child accounts created by parents on a device should not be able to install alternate user-agents or bypass the controls (at least not easily). Parents should be guided through this during device setup.
Indeed their site is old and rarely touched. The ideas and concepts have not changed. It really could just be a static text site formatted in ways that law makers are used to or someone could modernize it.
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, so parents/employers/etc could use to block the site if they wish. People said it would never work, because adult sites had a financial incentive not to opt in to reduce their own traffic.
The porn companies already set the RTA header. It was designed by an organisation funded by the porn companies.
https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...
It seems there is a GitHub repo somewhere mapping Meta money to lobbyists inside other companies, which is at least interesting.
What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
There's a great relevant quip: "If you think that the cost of compliance is high, try noncompliance".
If only it were true today :|
Sure, but the government doesn't police corporations in the US anymore. The Hays Code was before neoliberalism.
Quite true. The US corporations act like a giant global rabid dog. Fake legislation appears in the USA - lo and behold, it is copy/pasted into the EU. At the least lobbyists are getting rich right now.
At least the EU has GDPR. In the US, our personal data is collected by every app and website and company and packaged, sold and sifted through by a vast collection of private data brokers which the government already ingests.
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
An age header is not the answer. Why should a site have to decide what content is appropriate for a 18 year old and what content is not? Who is qualified to make that decision for every 17 year old in the world? Do they know my 17 year old? Do they know the rules in our home? What if I'm OK with my kid seeing sex-education stuff, but some lawyer at Wikipedia just decides to tag sex ed articles as 18+? Now I have a shitty choice: Open up the floodgates of "18+" to my kid, do it temporarily while the kid browses the sex ed sites, or not let the kid browse them.
Letting a company or government decide what's appropriate for what exact specific age is fraught with problems.
Yes, that's how parental filters already work. They use a combination of rta tags and external data to block pages. Even works with Google safe search, firewall devices, etc. The rta ecosystem is already built out and viable.
I think the better tack is to stop acting like these laws are being pushed by honest actors with good faith intentions of protecting children.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children. Sanctions and embargoes are always an option.
> fined
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
I’m here to die on this hill!
People were wrong.
We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.
An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win, win.
But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If government tries to regulate them, they'll engage in malicious compliance - do the minimum to not be sued, in a way that they can still reach customers.
For example Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.
If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
>But if someone is willing and able to pay
If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.
But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
There was a random comment here on HN a few days back saying that adult content has lower chargeback rates than everything else.
So I guess stop spreading hallucinatory misinformation?
link?
> Back in the late 90s or so, there was a proposal
This one: https://www.w3.org/PICS/
PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing, incomplete, and only a handful of sites voluntarily labelled themselves with it. RTA is one simple static header that any site operator could add in seconds, unless they get more complicated with it by dynamically adding it to individual videos, say on YouTube, in which case the server application would need to send that header for any video tagged as adult.
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything as I could not predict what people may upload which made for a very long header.
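For anyone wanting to try it, the RTA label is a single fixed string that a server attaches to its responses. A minimal sketch in Python, framework-agnostic (the `Rating` header name and the `RTA-5042-1996-1400-1577-RTA` value are the published RTA label; the helper function itself is hypothetical):

```python
# The published RTA label; the helper function around it is illustrative.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def with_rta(headers):
    """Return a copy of the response headers with the RTA label added."""
    tagged = dict(headers)
    tagged["Rating"] = RTA_LABEL
    return tagged

# Example: tag an otherwise ordinary HTML response.
print(with_rta({"Content-Type": "text/html"})["Rating"])  # RTA-5042-1996-1400-1577-RTA
```

The same string can also be placed in a `<meta name="rating">` tag for operators who cannot touch server headers.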
> unless they get more complicated with it by dynamically adding it to individual videos say, on Youtube
YT already does this. I never watch YT signed in, and I often see videos that require you to be logged in as the video is age restricted.
Agreed, though in my example the point would be to set the header in the case where the child is logged in but, for whatever reason, the site does not know their age. Instead of relying on a third party site, a header is sent with any video tagged as adult that triggers parental controls if they are enabled by the device owner.
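The per-video case described here could look something like the sketch below, where the label is attached only when a video's metadata marks it as adult (the `age_restricted` field and the record layout are assumptions for illustration, not a real YouTube schema; only the label string is the published RTA value):

```python
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # published RTA label

def headers_for_video(video):
    """Build response headers, adding the RTA label only when the
    (hypothetical) metadata record flags the video as adult."""
    headers = {"Content-Type": "video/mp4"}
    if video.get("age_restricted"):
        headers["Rating"] = RTA_LABEL
    return headers

print("Rating" in headers_for_video({"id": "abc", "age_restricted": True}))   # True
print("Rating" in headers_for_video({"id": "xyz", "age_restricted": False}))  # False
```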
Yeah this seems like the best tradeoff. You avoid the central control infrastructure and you provide information to clients. It's also a great match with free computing devices, which can then utilize the new information, empowering users (eg parents -> parental control on device, or individuals who want to skip some kinds of content).
There are issues today with this approach, such as lacking granular information for sites that have many kinds of content, but if you stop investing in the central control infra and invest in this instead, that could be remedied.
I agree with the general idea, but I would like this header to be more fine grained than just a binary "adult" or not. For example, so that you can distinguish between content that is age appropriate for teenagers and older from content that is suitable for all ages.
It should indicate which exact HTML elements are classified, so that a social media feed can selectively tag posts on the home feed.
A MIME type for every genre.
If they can scrape and fine, they can just make a list and the browser can use that.
“solutions” like this presume that age verification/gating is the goal. it’s not. it’s a cover story.
the goal is eradicating anonymous publishing. the goal is making strong government ID mandatory to use the internet.
any privacy preserving age gating system is useless toward that goal, so it is irrelevant.
Servers can then infer users' ages by whether or not the client renders pages given those headers, no? They could see whether secondary page requests (e.g. images, scripts) are made from a client. A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary.
I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.
Today devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that effect, but this can evolve over time.
>I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device.
Not being able to detect all children doesn't mean that being able to detect 80% of them is somehow less disturbing.
The point and overall goal should be to not signal anything to the server operator unless a credit card is being used. Everyone is whoever they claim to be as far as anyone is concerned, until payment is required, which today means sharing identity and age (via the credit card information on file with the financial institution).
In the case of RTA, the only signalling taking place is a server header being transmitted to the client. The client could be anyone at any age. Nothing is explicitly leaked or disclosed. Server operators can guess all they desire, as some do using AI based on user behavior, which they sometimes get wrong.
This is also how age attestation works. The client could be anyone at any age, all the server knows is they've opted to see over-18 content
That's true. But leaking an age threshold is not the same as private companies being able to link all your online activities to a single legal person.
Adults could also use this to filter out unwanted content without needing to rely on outdated filter lists.
How are they supposed to fine sites out of their jurisdiction?
One possible method [1], though I am sure the network and security engineers here on HN could come up with simpler ones. Just blocking domains on the popular CDNs would kill access for most people, as by default most browsers use them for DoH DNS.
[1] - https://news.ycombinator.com/item?id=47950843
The question was about fining entities outside of the original jurisdiction, so I am not sure what you have in mind that could be done by network/security engineers here.
In terms of fines: if they do not pay the fine, their country is at risk of sanctions or embargoes, which is probably a bit heavy-handed but may incentivize their government to also enforce the rules, collect the fines (keeping some for themselves), and pass the original fine back to the countries implementing child safety controls.
This is extremely naive and short-sighted. There is a literal example of this happening right now, and hopefully you will see why your approach isn't that good.
UK's OFCOM is currently issuing legal threats to 4chan for allegedly serving adult content and not being willing to implement age verification. 4chan's lawyer tells them to pound sand[0], on the basis that 4chan is hosted in the US and has zero business presence in the UK, and the UK is more than welcome to ban the website on their end through UK ISPs. The saga has been ongoing for a while, and the lawyer has been pretty prolific online talking about the case.
Anyway, following your approach, UK should embargo US over 4chan not willing to implement age verification as required by UK law? I plainly don't see this happening, or even being considered, ever.
0. https://www.bbc.com/news/articles/c624330lg1ko
4chan's servers are in the US and the owner is in Japan. If the US wanted to, they could seize all the servers, but they will not, because they have had real-time monitoring of all activity on the boards ever since Christopher testified before Congress and the site was sold. If anything, Five Eyes want that site to be unrestricted. 4chan has been a goldmine of people self-reporting plans to shoot up or bomb places, as has Reddit, leading to many body-cam videos of site users, and in some cases moderators, being busted.
The IP addresses are all captured by Cloudflare. It is literally next to impossible to post on 4chan without enabling JavaScript for Cloudflare or buying a 4chan pass, which leaves a money trail. Not perfect, nothing is, but most mentally unstable people do not think these things through.
Should legislation be added to require the RTA header, 4chan could and likely would add it in a heartbeat. They already have some decent security headers in place.
> fine sites not participating into oblivion.
That would also amount to compelled speech.
> That would also amount to compelled speech.
I disagree. The legal requirement to apply a warning label is a well known, understood, and accepted process that is applied to a myriad of hazards to children and adults. As just one example, businesses in some states, most notably California, are compelled to add warning labels to foods and other products that could cause cancer.
That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.
Surely we both understand the point I was making in that labels are already compelled by laws today.
Fine: cigarettes must be labelled as carrying a risk of causing cancer. The punishment for failing to do this includes both civil and federal penalties, including massive fines and federal prison time.
Do you believe using the Internet should require a license? Isn’t that what covers these product warning labels?
I never implied an internet license. Rather, if a server operator, a business, has content that may be adult in nature, they must label their site. Businesses already require a license, but that is unrelated to this.
Clients could refuse to show content that does not have headers set.
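That default-deny client policy could be sketched as follows (the decision function and the treatment of missing headers are assumptions for illustration; only the label string is the published RTA value):

```python
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # published RTA label

def should_render(headers, parental_controls_on):
    """Hypothetical client policy: with parental controls on, refuse both
    explicitly adult content and content that carries no rating at all."""
    if not parental_controls_on:
        return True          # no controls: render everything
    rating = headers.get("Rating")
    if rating is None:
        return False         # unlabelled content is assumed not kid-safe
    return rating != RTA_LABEL

print(should_render({}, parental_controls_on=True))  # False: no header set
```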
On the other hand, servers might choose to lie. After all, that is their free speech right.
So maybe you need some third party vetting list. Of course, that one should be fully liable for any damages misclassification can cause... But someone would step up.
Being compelled to state facts is the good kind of compelled speech, though.
RTA = restricted to adults
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
Such censorship shouldn't exist in the first place.
I could be misunderstanding the context, but to me that sounds like a moderation issue, assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.
Moderation is like law enforcement: it doesn't prevent crimes from happening, it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed and it doesn't undo it.
This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who terrorize people for fun [3].
It is a constant game of whackamole by moderators/admins to keep this behavior out of online spaces where kids hang out.
I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.
Now I disagree very strongly with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed). But the technical challenge needs to address the real problems that legislators are trying to go after.
[0] https://www.justice.gov/usao-wdnc/pr/discord-user-who-catfis...
[1] https://www.nbcnews.com/news/us-news/kidnapping-roblox-rcna2...
[2] https://www.nbcmiami.com/news/local/nebraska-man-charged-wit...
[3] https://www.fbi.gov/contact-us/field-offices/boston/news/ope...
I am only interested in protecting the majority of children, which I believe my proposal more than covers. There will always be exceptions. Today teens share porn, warez, and pirated movies and music with small children in rated-G video games. I am not proposing anything for that. It is up to businesses to detect and block such things.
Point being, there will be a myriad of exceptions. I am not looking to address the exceptions; those can be a game of whack-a-mole as they are today. I am proposing something that would prevent the vast majority of children from being exposed to the trash we today call social media and, of course, also porn sites.
Look, please don't sideline/marginalize people by using the "whataboutism" term. That's being used more and more to silence dialogue from people who see problems outside the focus of a specific area. It's important that we see ALL sides of the problem.
Fair enough. Even though I do not perceive it that way I removed it in the event a majority of others have come to this conclusion.
Thank you for understanding. I know sometimes topics can get out of hand with comments about related things, but in this case we might be better off looking at all the extremities.
These aren't exceptions or whataboutism. It's the debate being had on the floors of state legislatures.
> It is up to businesses to detect and block such things.
Which is exactly why age verification legislation is hitting the books. No one (serious) cares about whether kids can download porn and R rated movies. Parental controls already exist if the threat model is preventing access to specific content that is able to report itself as _being_ that content.
Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content. They define specifically what classifies as an addictive stream and put the onus on service providers to assert that they're not delivering addictive streams of media to kids. An HTTP header isn't enough, because it's not about the content being shown to kids but the design patterns of how it's accessed.
Essentially: age verification isn't about porn. 18+ content stirs the pot a bit with the evangelical crowd but it's really not what people are worried about when it comes to controlling digital media access with age gates.
> Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content.
That sounds simple to me. If a type of content is addictive then require the RTA header.
- Adult content, or possible adult content.
- User contributed or generated content (this covers most of social media)
- Sites with psychological design patterns that are deemed addictive (TikTok and their ilk)
Overall we are describing things that are harmful to the development of the minds of small children. If adults wish to avoid such content they can create a child account on their device for themselves to be excluded from this behavior as well. I use a child account in a couple of popular video games to avoid most of the trash talking and spam. I'm not hiding my age as the games have my debit card information but rather I opt-in to parental controls.
This is assuming children should be on social media at all, which I for one would debate.
>I a small server operator and a client of the internet will not participate in any other methods period, full-stop.
You will, however, follow the law if it mandates that you do otherwise.
Which is why "age verification" should be stopped before it's too late.
How would this work with sites like YouTube which allow sharing of content, potentially not appropriate for children, but the content is generated by the site's users? Who will be fined for "violations"? And how would such a fine be levied, especially internationally?
I think that initially the onus would be on YouTube to figure this out; they have some very intelligent engineers. For example, if the YouTube uploader is receiving affiliate funds, they are easy to identify and fine. If they are random people, YouTube would have to share the violation data with the other countries, and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force-label a video as adult, as they do today when enough people report it, which is admittedly not uniformly applied.
This has already been solved. YouTube disables viewing via embeds for any content that has been age restricted. Either you view it on YouTube, which requires logging in to see age restricted content in the first place, or you get the ! icon and the warning about needing to log in.
The government shouldn't be raising anyone's children; that's what parents are for. If you're a bad parent, your kids will get access to bad things and could become an adult failure.
The future of your family and your legacy is up to you, not the government. We don't need age verification to restrict the social darwinism of raising children.
I wish I could upvote this comment harder. I started having unsupervised internet access (with the family computer in the living room) when I was 8. I'm a functional and successful adult because I trusted my parents. When my mother forbade me from registering on online forums I complied. When I read "fellation" in some minecraft chat (albeit somewhat later) I asked my mom what it was and understood that "sex" was something for the grown-ups and that I shouldn't worry about it. All because I would never even conceive that my parents wouldn't do what's best for me, and was unconditionally loved (even though I didn't know about this concept).
I would rather have parenting licenses than online age verification
Yeah I'm not sure why the govt or any other 3rd party needs to get involved. If I don't want my kids to look at porno online I will educate them on porn. If I don't trust my kids to listen to me then I will install an open source monitoring software and educate them on trust.
Letting the govt dictate what is age restricted is an easy way for the govt to control speech and narrative. For example, children's books that feature LGBT characters are being reclassified as adult [1], thus requiring additional verification. If I do/don't want my kids to read LGBT books, it's my decision. The govt should not dictate that. What else will the govt reclassify? Anything involving people of color?
[1] https://www.ala.org/bbooks/book-ban-data
I keep thinking we can't fight age verification by just saying "no" to it, and have to offer an alternative.
Maybe we need to turn it on its head, point out that if we want legislation to help out with this, we could choose legislation that gives power to parents. Age verification laws put the power directly into the law itself, they're a blanket solution that gives all the power to legislators and that prevents parents from making decisions about what's appropriate for their kids and what isn't.
If the market isn't delivering the level of parental controls people want, then sure, maybe legislation is needed. But it should be legislation that improves parental controls such that parents can make decisions about what's appropriate for their children.
> and have to offer an alterative.
It's called "software." It already just exists. It's sold for the purposes of locking devices down so they're safer for children to use.
> point out that if we want legislation to help out with this
Make this software tax deductible. The end.
Yeah I agree. Let me decide what's appropriate for my kids. Like for video games or movies... A game rated M for foul language and nothing else might be OK for my adolescent kid. A game rated M for excessive nudity and sex probably not.
Also, different kids mature at different rates. I wouldn't give a shit about my kid watching, say, an R rated movie if I understand they'll be able to handle it and understand it's fiction. If I had a 14 or 15 year old and they had a healthy understanding of sex and the dangers of porn, I wouldn't give a shit if they managed to see some poorly drawn tits online. Why? Because if you didn't intentionally seek out lewd content as a teenager you're either very very religious or a liar
> THe government shouldn't be raising anyone's children, that's what parents are for.
The government does raise children. It's called the public school system.
There's an angle everyone misses.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man whose gears you can see turning as he checks that the date is correct, plus a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does whatever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
A tech company doing scans for validation could actually connect to a state database to verify the ID is legit and is not already being used for a different account. It would then be saved. I don't think real world vs tech world usage of fake IDs are the same at all.
>Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man whose gears you can see turning as he checks that the date is correct, plus a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does whatever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.
Not necessarily true. There's a local strip club that scans IDs and saves the scan to fight chargebacks and the like. It is definitely logging stuff. They told me they were going through the logs once and the bartender ended up googling my full name. We're cool and I didn't care, but what you said is not a blanket truth. I trust a physical business that I can visit far more than some ID verification company that is going to get hacked at some point.
I've seen this before in London too in some venues. They have full-on computers that scan your passport and take your photo, for the express purpose of storing this info.
The tech companies care even less than the bouncers do.
They just want a plausible defence should it ever end up in court.
tech companies care even less? how do you arrive at that conclusion? tech companies log/store EVERYTHING. this would be an absolute boon for them, being able to unequivocally assign to you all of the data they track about you. suddenly, anonymous analytics become identified data, not just deanonymized data.
Logs of location data on people are already worth real money. The FBI has admitted to buying it. The companies that do age verification will absolutely be selling that data unless there are severe penalties for doing so, and what are the odds that the U.S. government passes a law making it illegal for the FBI to buy data?
That's bad enough if you're a U.S. citizen. If you're a non-U.S. citizen, now you're in the situation where all these U.S. social media sites are collecting personal information from you and reselling it, but you have no legal protection unless your government risks tariffs and invasion threats to pass legislation against it, which the U.S. will probably ignore anyways.
This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
> This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
I guess, but like, who? During the time TikTok was not available on an app store (even though the service wasn't stopped), people were trying some of the other Chinese apps, and they were not very compelling as the exodus never happened.
It's a chicken and egg problem. Without users, a new social platform lacks content, so it can't attract users. Unless something decidedly new and compelling comes along, users will probably stick with what they know... unless something happens that really pisses them off.
If I'm being honest though, I don't think privacy concerns will be what does it. The TikTok generation doesn't give a fig about privacy. You can build a panopticon around them and they won't even notice.
> Handing an ID to a bouncer at a bar or similar is not logging anything.
Some of the bars in the party areas of my college town have a digital scanner they hold the ID up against, and they even had a screen showing a scrolling Wall of Shame of fake IDs. And they had this like 20 years ago. So I would not necessarily agree with you here
How does a tech company calling into a government database to verify your identity maintain your anonymity?
It does the opposite: allowing the government to track your online activity as a side-effect of site owners' validating your ID every time you visit.
That's the point, and it's a big part of why opposing online age verification is a hill to die on.
My mistake. My question was rhetorical but I thought this whole thread was rooted in the parallel conversation about anonymous credentials systems.
> Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything.
> Sophisticated places might have a scanner that does whatever validation it does, but again, it's just another cursory check of the photo.
Many/most bars do scan IDs now. Ostensibly it's to verify that it's real, but they do use those systems to keep a log of everyone who enters.
They also use them to flag people who've been previously banned and the systems work across venues. The idea that verification in the real world is cursory is not accurate.
The vast majority of places I frequent do not even have a person at the door checking IDs. If the bartender/server thinks you look young, they ask for ID. I clearly do not look too young, so there's that. The last place I went to with an actual scanner was more of a nightclub that had a cover charge.
There's a fine line between night clubs and bars (and a venue can operate as both, depending on the night).
Functioning as a bar where people come in, drink, and eat: generally not checking IDs at the door.
Functioning as a night club: generally checking IDs at the door. Almost no places I've been to scan IDs. I'm also middle-aged and hardly ever go to night clubs; pretty much just a couple of concerts a year in the big city. Those venues scan IDs.
Anecdotal evidence is weak (not) evidence.
This is true, but your original reply was also anecdotal.
sure, but it is what it is. the places with scanners may be more sophisticated than i give them credit for, but you cannot deny there are places that do not card every person every time you visit. online places will never not know it was you. if you cannot see the difference, then you're just being deliberately obstinate about it
Well then it’s a good thing my fake id is from a state or foreign government without a checkable database
An ID system should be based on commercial banks. If you need to prove your identity, or anything else about yourself, you just tell the requester to ask your bank, and the bank asks you which information about yourself you are willing to share with whoever requested the confirmation.
When your ID is tied to your bank account, you guard it like you guard your bank account, because it is the same thing. This drastically lowers the incentive to "share" your identity with anyone.
What's more, this system is already operational in many countries.
>Banks
I wonder how many months until this suggestion becomes slightly embarrassing. I barely want my banks to know what I buy and to be responsible for my money. I really don't want them knowing everywhere I go online. Especially when "my" bank goes under and all of my data gets sold off to whoever takes it over.
That's just feudalism with fewer extra steps
From what I know about feudalism, this is a non-obvious statement. Care to elaborate?
The proposed system moves the source of identity from the nation to the private banks under it. So banks own people. Propose a financial regulation in the national congress/parliament and you stop existing, digitally or potentially physically as well. That's feudalism. Or the situation of China's warlord era, which is often grouped into that concept as close enough.
Plenty of European countries have eID without these issues.
You use eID when explicitly interacting with a govt entity or bank or otherwise similar institution because you have to and want to prove who you are. Yes, I do want to prove who I am when I file taxes, vote or want to start a business...
You don't use it when just browsing randomly on the internet. You don't use it to buy games on steam. Your computer isn't forced to store it because a law arbitrarily says so.
if it's done by the government, what prevents the government from blocking opposition members' access to social media? I think social media and porn are harmful for children, but still
Why not? It seems to be made exactly for this purpose if you look at the "‘Age over 18’: true" flag. What's bad about that solution?
> The technical solution for an EU age verification app is privacy-preserving, open source and user-friendly.
> First, the user downloads the app onto their phone and sets it up by certifying their age. This can be done with a biometric passport/ID card, a national eID (e.g. national ID Card or other electronic identification mean), a pre-installed third-party app (e.g. a banking app), or in person (e.g. at the post office). Only the information confirming that the user is over the age will be saved in the app. No name, no birthday, or any other data is saved.
> After completing this step, the communication between the app and the provider certifying the user’s age (e.g. eID, third-party app) ends. No further data is exchanged.
> The app is then ready to be used online. When an online platform asks to verify the user’s age, the user can use the app to communicate they are over a certain age (e.g. ‘Age over 18’: true) to the platform.
https://digital-strategy.ec.europa.eu/en/faqs/eu-age-verific...
I usually buy games on steam using a process that does involve my bank, do they actually take bitcoin or cash posted in an envelope?
I don't disagree with random browsing. I do use it to buy games on steam as any online purchase on my card uses it. And my computer doesn't store it, my phone does.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
Matthew Green talks about this in his blog on the subject: https://blog.cryptographyengineering.com/2026/03/02/anonymou...
The two methods that seem feasible are making it hard to copy (putting it in the secure element in your phone, for example, which I don't love) or doing tokens that can only be used a limited number of times per day, like in: https://eprint.iacr.org/2006/454
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
Further, "porn tokens" are the pointy end of the wedge, because it's easy to misconstrue any opposition as advocating for "kids should have access to porn, actually". The broad end that is being hammered towards is "kids aren't allowed on social media because it's harmful to them" AKA "free speech tokens".
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
Continuous age verification isn't possible, so you'll have to store some sort of proof of age somewhere, and that proof will always be sharable.
Let's say Facebook has verified my age somehow. I could share my Facebook login credentials, or the token that their authorization server sends back in response. You can create some hurdles to doing that, like requiring a second factor, but I can just share that too.
You might as well go down the route of accepting that possibility. These systems are never going to hold up in the face of a determined enough teenager.
That really depends. A zero-knowledge system would show the verifier that the person is authorized for access _right now_, but that's just the answer to a particular challenge. Outside of the verifier, who knows it came up with a random challenge without bias or influence, the response would mean nothing.
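That non-transferability can be sketched with a Schnorr-style identification protocol (toy parameters, nowhere near production crypto). The verifier is convinced because it picked the random challenge itself; a third party learns nothing from the transcript, because anyone can fabricate an accepting transcript by choosing the response and challenge first:

```python
import secrets

# Toy group: a Mersenne prime modulus and small generator (far too weak
# for real use; real systems use standardized prime-order groups).
p = 2**127 - 1
g = 3
q = p - 1  # exponent arithmetic is done mod the group order

x = secrets.randbelow(q)   # prover's secret (e.g. a credential key)
y = pow(g, x, p)           # public value registered with the verifier

def prover_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)          # keep r secret, send commitment t

def prover_respond(r, c):
    return (r + c * x) % q          # response to the verifier's challenge

def verifier_check(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Interactive run: only the verifier, who chose c freshly at random,
# gains assurance that the prover knows x right now.
r, t = prover_commit()
c = secrets.randbelow(q)            # verifier's random challenge
s = prover_respond(r, c)
assert verifier_check(t, c, s)

# A forger who never knew x can still produce an accepting transcript
# by picking s and c first and solving for t = g^s / y^c, so the
# transcript alone proves nothing to anyone else.
s_f, c_f = secrets.randbelow(q), secrets.randbelow(q)
t_f = (pow(g, s_f, p) * pow(y, q - c_f, p)) % p
assert verifier_check(t_f, c_f, s_f)
```

The forgery at the end is the whole point of the comment above: an accepting (t, c, s) tuple is only meaningful to the party that generated c honestly.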
I think a lot of age verification systems are the solution to the real core of legislation - to make companies liable for underage viewing of content. To put such legislation in place without providing a feasible way to accomplish age verification would be argued as discriminatory.
In that sense, a zero knowledge system which doesn't give a company non-repudiation so that they can defend themselves in court may very well be insufficient. And that will require tracking identity long-term, although it could be done with a third-party auditor under break-the-glass situations with proper transparency.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
There's already authorities that know your age, so verifying age with them to get the credential isn't the part that needs to be anonymous. The issue is them knowing what you do with your credential, which anonymous credentials solves by making it impossible to track tokens back to the credential holder. As far as sharing, there are some possible mitigations.
Right. And the possible sharing mitigations generally amount to tracking.
This isn’t even getting to the issue that mandating government-issued credentials is the “foot in the door”. If you mandate the use of government creds for accessing websites, it’s an obvious step to turn around and demand that sites report credential use to “fight credential fraud”.
But likewise, someone can share (or have stolen) their ID.
https://news.ycombinator.com/item?id=47951372
The destruction of privacy is the whole point.
Yep look who is backing these regulations. It's absolutely for no other purpose than to further enable surveillance capitalism and the surveillance state.
This is something that's technologically feasible, but will never happen in practice.
Yes, but this is not popular among technologists (see the average sentiment towards age verification here). Legislators aren't going to build technology. This will happen if age verification actually becomes a widespread requirement. But until that point the prospective builders will be fighting the entire premise of such systems.
Apple and Google have already implemented private age verification.
They are interested - interested specifically in opposing it. These groups don't care about age verification - it is a trojan horse for censorship.
The EU is. But their age verification process shows the design flaw: preserving privacy means the system can easily be bypassed with a MITM that circumvents the age check.
Young people setting up a MITM and getting deeper into tech rather than consuming short-form-content is something I'd appreciate as a nice bonus effect.
Of course the EU solution isn't perfect and there are bypasses (there always will be and always have been), but if it must come, let's appreciate that approach over one that collects too much PII. I'd prefer the Age/RTA header and parental responsibility too.
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and there are definitely a few of them which are more technically defensible than others.
The EU's proposed design still has a ton of issues.
Elaborate please
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
It's addressing a real problem in a bad way.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
So do you want 5 year olds driving on the highway and 8 year olds doing shots of tequila or are you ageist?
Or perhaps protecting kids isn’t really ageism at all.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.[1]
[1] https://news.ycombinator.com/newsguidelines.html
I did. Restricting children’s access to certain things is not ageism.
We can argue the merits of restricting children’s access to the internet, or certain books, or alcohol, or pornography, or whatever else. We can debate the merits of those various restrictions based on the benefits and costs to both the children and society at large.
But it is not ageism to attempt to protect children. It is not ageism even if the restriction is a bad idea. To claim it is ageism is an emotional appeal ("ageism bad!"), not a logical one.
It depends on what you're restricting and why. Restricting access to things based on age can absolutely be ageism if the thing does not need to be restricted.
I don’t think it’s ever “ageism” in the normal sense to restrict children’s activities for their safety. But even if that’s the right term in some cases, it hinges on “if the thing does not need to be restricted”.
The burden is still to demonstrate that a restriction is wrong. If that can’t be demonstrated, then labeling it ageism is a purely emotional appeal.
You jumped to children behind the wheel of vehicles and doing tequila shots. There is no way that was a serious effort at good faith discourse.
I used a rhetorical device to demonstrate why restricting children’s activities is not simply ageism.
I don’t know how you can seriously come here and accuse me of engaging in bad faith when I’ve taken the time to make my viewpoint explicit multiple times in this thread now, including directly to you.
Hyperbole is a rhetorical device, if that’s what you mean.
Just because I had a hard time following your logic doesn’t mean I didn’t engage in good faith. You also seem to be arguing in a heated way with every person who responds to you.
Either way it’s probably best if we both move on
I did not accuse you of not engaging in good faith. You accused me of that.
I don’t think I responded to anyone in a heated manner, though I will readily admit to being annoyed when you accused me of bad faith.
Agree we should move on.
If the 5-year-old has passed a proper driving test, why not?
Quit arguing as though the topic is binary. It's not.
I’m not saying anything is binary. I’m saying it’s not ageism to restrict child access. It could be a bad idea but that doesn’t make it ageism.
It depends on what you're depriving them of too. Those are very extreme examples with little to no upside.
Disagree. We can discuss what restrictions are appropriate or reasonable without calling it ageism.
Calling it ageism is an emotional appeal, not a principled stance.
Ageism is a legally defined form of discrimination as well as the subject of ethical discussions. It's a real, defined thing. Just because we disagree on what qualifies as ageism doesn't mean you get to call foul and say it's irrational/emotional.
This is literally a “think of the children[‘s freedom]” appeal. You’re not arguing for or against the restriction on its merits.
In the US at least there’s also no such thing legally as age discrimination against minors so far as I’m aware.
Edit:
Let me frame this differently. "Ageism" is basically by definition bad, so applying the term "ageism" to a restriction is an attempt to label the restriction bad without establishing that on its own merits.
If you try to provide a consistent definition of “ageism” that applies to restricting access to the internet but not restricting access to alcohol, you will most certainly have to resort to phrases like “reasonable restrictions” (if not, I’m very interested in your definition), which means that there’s still a need to establish what is reasonable. Applying the label “ageism” without establishing reasonableness is then a circular argument.
You’ve lost me.
You* are using “ageism” as a synonym for “bad”. You are also labeling restrictions as “ageism” without establishing that they are actually bad.
In effect you are saying “that’s bad!” without accepting the burden of establishing why it’s bad, but hiding this behind a different term that carries more emotional weight. It’s a very politically effective strategy but it’s not logically sound.
* actually jMyles
Fair point
This is why we need verification technology that protects identity. Implemented as anonymous verification, without distinguishing between adult age, or permissioned by parent.
That solution doesn't negate parental freedom of choice, it facilitates it.
I am baffled at how often the "they don't want it, because of their ulterior surveillance motivations, therefore it isn't a solution" argument is made. "They" don't want it because it is a solution to the nominal problem, that they cannot abuse, and would negate their ability to use it as a cover with a large well-meaning voting constituency.
Two problems, nominal and ulterior, resolved in the right way by one solution.
When a nominally sensible problem is used as a cover for overreach, solving the nominal problem in a healthy way is the best offense. The alternative is an endless war of attrition, and the "hope" that politicians resist the efforts of well-paid lobbyists and tens of millions of well-meaning voting parents forever. That is a ridiculous strategy, doomed to fail, delivering irreversible damage. As is already evident by the abusable laws that are accumulating.
I worry at the lack of political acumen and foot-gun reflexes in the ethically-motivated technical community.
Stop endlessly fighting to lose less. Just play the winning move already. Stop the irreversible damage.
I think part of the issue people are missing is what the late Randy Pausch would call a “head fake”. My specific autism is not privacy, digital security, none of that. So I will be honest about my gaps. But from my little corner what this is about is geopolitics - specifically a potential war with China. If you zoom out to the macro level first understand the reason China setup the Great Firewall. Why countries like Iran cut the internet whenever there are protests. These are, first and foremost, defensive measures against foreign influence. America is subject to these same outside forces. The difference is that our free and open society makes things like "a Great Firewall" simply unpalatable to the American people. And rightly so. But it is also becoming increasingly evident that these malign actors are using our own values against us.
Russia for example aims to sow discord. One classic example is the Black Lives Matter movement. This was not a Russian disinformation campaign - but they did propagate views that exist outside the bell curve of the moderate. They push scenes of cops being under siege for the right and racist policing for the left. They amplify the voices of the most angry, the most extreme and the most radical on both sides of the spectrum to create confusion, distrust and societal division.
China by comparison takes a much more subtle view. They choose to erode what they call "civilizational confidence" by highlighting systemic failures, inconvenient truths, or otherwise undermine institutional credibility. When you read an article and find a moderating factor buried in the last paragraph that is the flavor of Chinese action. The general malaise about American exceptionalism failing and China's inevitable ascent stems from their work. Rather than pure division they aim to emotionally exhaust you into "acquiescence from inevitability".
There is hardly a nation on the earth that is not involved in some way in the American discourse - each pushing and pulling to their own aims and individual agenda. Historically there was a sort of Nash equilibrium with Americans caught somewhere in the center. But as the loudest voices, or rather the most well funded, begin to dominate the discussion via social media and covert funding, we are seeing it become increasingly problematic for American democracy. That is why you are starting to see this consensus over 'verification' and 'identification' begin to coalesce. The government, both left and right of center, has begun to realize the long term ramifications of these actors.
So how do you solve that inherent tension between our intrinsic right to free-speech and those who would abuse it to cause us actual harm? An independent, 3rd party verifier with limited scope makes sense - but would that solve the greater geopolitical implications? In truth I've long expected social media like Reddit, Facebook, et al. to formulate a body of their own like the MPAA. But likewise I don't think there is a clear answer here. Do you trust the Tech Oligarchs with this power over the Government itself? This is core to the problem. How do you 'censor' the internet without really 'censoring' Americans? I think this is part of what the last administration was trying to do with the failed "Disinformation Governance Board". And that failure is what has led us to where we are now.
The original twitter thread is right to say this isn't a left-versus-right issue. This is undeniably a censorship mechanism designed to exclude a set of voices from the internet as we know it today. As with the patriot act, they choose to wrap the bitter pill in a bacon-flavored rhetoric of safety and protecting the youth from perverts and degenerates. But what has failed to be acknowledged is the intrinsic cost of having an open society in a world where that openness has become an attack surface. Make no mistake: the goal is censorship. But the solution space to what you call 'the nominal problem' is less trivial than I think you believe.
I’m in the UK and we recently got the Online Safety Act. We failed, this legislation is very popular with voters and not getting rolled back. Those that dislike it use a VPN and aren’t interested in fighting. I’d say most of the public here is exhausted with cost of living and internet freedom just isn’t relevant to their voting habits.
I grew up around a lot of the hacker ethos, open internet, Information Wants To Be Free etc… it feels like a part of my identity is being stripped away by my government.
Do you just use a VPN? If not what website have you seen age/identity verification on that you find most ridiculous?
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Heroes @ EFF have our guide (USA residents):
https://www.eff.org/pages/help-us-fight-back#main-content
Of course Chuck Schumer won't let me contact him using this helpful tool.
Perhaps we NYers should organize a rally outside his office in Manhattan like we did for PIPA/SOPA?
Dumb, BUT it gives immediate links to the sites of the right legislators!
Yeah I have the same senators. Emailed them directly from their website. There should be links right above those messages.
They do have a physical address, and stamps aren't that expensive.
TIL it's not free to mail your rep. Mailing your MP is free in Canada.
Use every means necessary. If that can be organized, do it.
man the EFF owns
I've contacted my congressmen and I would also advocate for telling/explaining this to non technical people you know. They either won't have heard of this or won't know whats bad about it.
Any tips for writing the letter, maybe even a starting point?
Let them pry ID from our cold dead hands. If a site requires ID, it doesn't get my business.
Example, Discord wanted my ID to enable certain features, I declined, I now can't use those features, fine by me. If they started asking for ID anyway, I'd say no and see what happens, even if that means they lock me out entirely. There's no universe where they get my ID.
To anyone reading this, please take the extra step beyond striking down age-verification laws, and start taking measures to prove to Congress that it's not needed.
Your next-door neighbor whose misbehaving child is permanently on their phone? Help them out.
Your friend that joked about sending death threats to someone? Scold and report him.
That girl endlessly scrolling Instagram? Get her help.
Please take a step back and examine how insane the internet is and how it's affecting our everyday lives. Political violence and mental illness are increasing, and the internet is solely to blame for this.
"If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary." Federalist 51
We're all too familiar with the latter part of that quote, but we're completely oblivious to the former. At this point, we've all but proven that the government needs to step in and regulate internet access. And unfortunately for us, they're going to do it in the most dystopian, authoritarian way possible.
I want to be on the side of freedom and strike this bill down. But when it is struck down, everyone is going to cheer, go on their merry way, and continue to let demoralization, radicalization, and mental illness infect the psyche of the everyday human being, and do nothing about it. And then the cycle will repeat itself.
At this point, I actually hope this bill passes. Not because I want it to, but because maybe then everyone will stop using the internet for everything, and some sanity will return.
I have long thought that all content (local and remote) should be properly labeled with metadata. Just like the cans of soup in the supermarket, you don't have to open it to find out if it has peanuts, lactose, or MSG in it; you should be able to filter data before accessing it.
You could define a set of five or six categories (nudity, sex, drugs, violence, etc.) and have a scale from 1 to 10 for each. Each content producer would rate each category according to defined criteria.
Then each user, or their parent, can set what their own acceptable level is. If you set your violence level at 4 then nothing level 5 or higher will load.
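A sketch of that client-side filter, with hypothetical category names and ratings on the 1-to-10 scale described above:

```python
# Hypothetical per-content ratings, as declared by each producer
# (1 = mild, 10 = extreme), keyed by content identifier.
page_ratings = {
    "news-article": {"nudity": 1, "sex": 1, "drugs": 2, "violence": 3},
    "war-footage":  {"nudity": 1, "sex": 1, "drugs": 1, "violence": 9},
}

# Per-user (or parent-set) maximum acceptable level for each category.
user_limits = {"nudity": 4, "sex": 2, "drugs": 5, "violence": 4}

def allowed(ratings, limits):
    """Load content only if every category is at or below the user's limit.
    Missing categories default to 10, i.e. unlabeled content is treated
    as not kid-safe by default, as suggested upthread."""
    return all(ratings.get(cat, 10) <= limit for cat, limit in limits.items())

print(allowed(page_ratings["news-article"], user_limits))  # True
print(allowed(page_ratings["war-footage"], user_limits))   # False: violence 9 > 4
```

The default-to-10 behavior for unlabeled categories is the "assume content is not kid-safe unless declared otherwise" stance, so only content that opts in to rating itself can pass a restrictive profile.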
There are some showstoppers here, though. You have to either:
A) Change the laws in all countries (a non-starter), or B) Restrict access to only countries that obey those laws
And Option B is a non-starter to the freedom crowd.
Not to mention all the other issues with labeling, such as:
A) How to label in an internationally-agreeable way B) How to prevent abusive mislabeling
It's fraught, this path.
V-Chip all over again. Now with mandatory browser extensions which hook into the OS' parental controls.
It's no better.
The Electronic Frontier Foundation set up a resource page for this:
https://eff.org/age
Their guide:
https://www.eff.org/files/2026/04/09/condensed-age_verificat...
Unfortunately, their most prominent call to action doesn't seem to address the various state-specific and non-US legislation (focusing on KOSA instead). Here it is:
https://www.eff.org/pages/help-us-fight-back
The EFF has long been a skin suit my dude, they're just here to dissipate and confuse opposition.
>they're just here to dissipate and confuse opposition
What do you mean by this?
The irony of posting ethical social reflection on X though...
https://xcancel.com/GlennMeder/status/2049088498163216560
Sadly, not having a Twitter account in order to read the fucking internet was my hill to die on.
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the speech of the, call it, one-in-a-million (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
It's not about intelligence. Otherwise a whole lot of people over the age of majority wouldn't pass either.
There's also no old-age cutoff for when mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be thought police. So maybe-
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight up bans. (educated ourselves, or politicians legislate based how bad headlines are, or it was so egregious it genuinely ticked them off… …)
>Underage influencers
Anyone who has gone so far as to become an influencer is already a lost cause. No law could save them.
That’s not really a loophole though. We have child actors in Harry Potter.
Perhaps we should stop that too.
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
This is almost verbatim the same argument that people make in support of allowing child labor in factories.
Children do not need, nor are they entitled to, any kind of "freedom" to work for a living.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
On the weekends they can stock shelves, deliver pizzas, deliver newspapers, wash dishes, babysit, feed animals, or do other typical jobs for children in the age range of 12 to 16.
>They should focus on school.
Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years? The direction we give kids coming up always seems to lag behind reality by 10 or 20 years. Perhaps we shouldn't stand in the way of the new generation figuring things out for themselves in this brave new world. The old playbooks to a solid middle class life are increasingly outdated.
> Why? Presumably so they can go to college and get a high paying job that may not exist in 10 years?
Also so they don't end up stupid and useless like a potted plant. People with too little education are easy to manipulate and dim. They're perfect fodder for the propaganda machines.
It would be nice if we could just let kids loose like wild animals and they'd, somehow, figure everything out. But no, we actually have to try. Otherwise they end up illiterate and eating so much candy they throw up. Because they're kids.
None of your concerns are relevant. We're not talking about 6 year olds here but presumably 12-16 year olds. And the issue isn't whether they drop out of school, but whether school must be their sole focus.
Since when did being an influencer become 'one of the few remaining methods' to make a decent income?
I don't think it truly is, but I do think that the younger generations think it is.
My nieces and nephews really don't know what they are going to do in their futures because so much is uncertain right now.
If it feels like a longshot to expect normal 9-5 office jobs to be around in 5 years, and it's also a longshot being an influencer, then why not go for the influencer thing?
Less education, more peddling products on Instagram is... certainly an opinion that exists.
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
Indeed, social media companies seem to be big proponents of the US legislation.
https://www.politico.com/news/2025/09/13/california-advances...
Big social media companies are likely overjoyed to be able to get discrete, government-issued info: a person's full legal name, date of birth, and residential address (as printed on US drivers licenses) for advertising and demographic profile targeting purposes. And then be able to correlate it with their existing social media history/clicks/profile, browser fingerprinting, IP address, daily usage patterns, and geolocation. It's a massive gift to them.
I doubt they need that to identify you. There are also lots of other problems like algorithmic manipulation. But also just stop using these junky websites. Everyone always complains about Meta doing this, TikTok doing that, and it's like if all they do is make you mad, stop being their user/customers?
It will spread to everywhere else if we allow it for social media. In Australia for example, mandatory age verification has already spread to video games.
I'm with you on the slippery slope argument. I do mean that I think we would solve most problems with just an implementation on social media.
In the US for buying games online we've had age verification for a long time. For in-store purchases you see that too. Same with movies.
Because it's not about children but requiring identification to speak online.
That's the cynical view, yes, but we can see educational standards and performance going down in the United States, and we have seen plenty of scientific and medical studies showing problems with children, and more specifically teenagers, using social media. I'm not one to want to limit someone's rights, but it seems like the trade-off here is in favor of requiring age verification at least for social media companies.
Separately I still don't fully agree with concerns raised regarding social media and identification for everyone. Bots, people who are online just stirring up trouble, &c. are causing pretty significant challenges and problems for society. If you spew a bunch of racist stuff for example I think people deserve to know who you are.
And you know we do this all the time. Folks want gun registries and things like that (and I agree, as a matter of practice, but not principle), so I'm not sure why we're OK with that form of requiring identification to exercise your rights and against this one, other than political priorities.
We need a truly distributed point-to-point internet asap. Politicians are going to do everything to limit free speech and free ideas in the name of protecting children, while they already have all the powers they need to investigate and stop child abuse.
https://meshtastic.org/
Did you intend to link to Meshtastic as an example of how not to achieve your goals? Because it definitely isn't capable of scaling up to anything like the whole internet, and the project struggles to agree on any goals they want to reliably achieve.
It is something, at least you can chat with your friends freely.
There are so many caveats and limitations that bringing it up in this context is downright dishonest. The most you could fairly say is that some of the philosophy driving some of the meshtastic developers is what you want to see applied to the development of an internet-scale network (which in reality would have less technology in common with meshtastic than with the current internet).
If you don't use X/Twitter anymore, XCancel makes it possible to read threads when not logged in: https://xcancel.com/GlennMeder/status/2049088498163216560
Nothing against Twitter, but I just don't feel like logging in, so that site makes it way easier to read this. Also it doesn't take like 900TiB of RAM to render.
If this does not work, use nitter instead
Really the hill to die on is that the first amendment should preclude any content-based restrictions for anyone. If you believe children shouldn't be exposed to certain materials that's between you and your kids, and should not involve the government whatsoever
Honestly, not even in favor of legislating any kind of increased device-side control or age gating. I understand the "this should be up to the parents" angle but I'd push it further: modern tech already allows parents too much control over their children. Freaky helicopter parents are already perfectly enabled to spy on their kids location, device usage, inspect and monitor their conversations, and it's already normalized to an insane degree. Absolutely no reason to make it an out of the box experience to tempt otherwise sane parents to go mad with that kind of abusive power.
I've heard that we could use zero-knowledge ID proofs to show someone is of age without revealing any more but I don't think that's the plan and the demand for age restrictions doesn't feel like a grassroots effort of concerned parents. It feels like an NGO/bureaucrat driven law and I assume its purpose is to de-anonymize people on the internet.
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half-arsed way. I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
A photo identifies you. This is the digital equivalent of having a photo taken of you upon entering the mag store, stored digitally forever, shared with government, and tied to every magazine you read and purchase.
> I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).
a convenient record of your face is all we need
> doing a selfie with the webcam
First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification. It's not explicitly part of the proposed law but there are only a handful of companies who meet the qualifications to provide this service (id.me, Persona) and this is how they do it.
I believe if you are a "minor" then you can go the post-a-selfie route.
If someone wanted to be a martyr and just uploaded all their personal documents so they could be accessed by everyone, I wonder if an interesting court case might follow.
I could imagine it ending with a court ruling that people are responsible to protect their own personal documents which... yeah, that would muddy the waters in a world where every website expects to see your ID.
The verification apps are starting to require live video selfies to verify that the person doing the verifying is the same face as the person on the scanned ID credential.
Imagine if that was pltr, or someone who uses pltr. What could possibly go wrong? People are being paranoid for no reason!
> In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification.
That's not just the plan - that's what's already legally required in many US states.
These laws were introduced by the explicitly religious right-wing groups like Exodus Cry and Morality in Media, as ways to de facto outlaw pornography (in their own words). They've since been laundered into the mainstream so the general public is unaware of the root cause.
Whether it can be done this way is beside the point. It is about how regimes like ours in the US, which have demonstrated an interest in spying on their subjects, choose to regulate this over time.
Reddit is one thing but would you do the same for a porn site?
Now Persona has your picture and PII. Pray they never have a breach.
Does it not sound insane to you that you need to expose your biometrics to a corporation just to make anonymous posts on a forum?
In the age of AI I think it’s necessary and inevitable to implement some kind of internet ID system to stop the massive onslaught of AI-generated fraud, malicious hacking, and spam. If age verification is a Trojan horse to erase online anonymity, so be it; I see that as a worthy goal.
Humans are inherently social, and social networks are based on trust. Trust is primarily a function of reputation, peer pressure, and legal consequences. Reputation requires tying behavior to a stable identity. Peer pressure only works when you’re not anonymous. For there to be legal consequences for bad behavior, we must identify bad actors. I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
Also I don’t think that the “pro privacy” activists really understand the scale and severity of harm being done to children through the internet. I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
We will see how your opinion changes when someone steals your ID and voice and you end up being defrauded because the government chose the cheapest Indian shop to mishandle your data.
> Trust is primarily a function of reputation, peer pressure, and legal consequences.
The trust is somewhat of a one-way street. We are supposed to trust the entities in power. If we break their trust, there are consequences. If said entities break our trust, we can do little about it.
> I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?
For some, perhaps. However, I also would rather protect people from a potentially grim future. What is permissible and acceptable now may not always be the case. The Holocaust, for example, only ended 81 years ago, and the notion of another one, even against different groups, seems completely infeasible now -- just as the first one did before it happened.
> I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
Tone is hard to read in text, but are you being facetious? If not, you are essentially saying that you would support shutting down the Internet to protect even just one child. Yet, despite these real and active harms that already exist, you will continue to use and profit off the Internet in the meantime?
While we've been agonizing over Age Verification (real or planned), Greece has apparently introduced a ban on anonymity on social media. I'm not liking where the world is headed, but I have no idea how to push back against it.
And the piece nobody is even considering...
Responsible parents don't have separate OS accounts for their children.
We simply don't need online age verification. It's not the state or private business' job to parent children. It's their parents job.
This is not only unnecessary, but will with 100% certainty lead to negative downstream effects, either via leaks, or via the state being able to find people for things that aren't crimes once they're adults.
There's simply no good reason for it that outweighs the bad. What it really boils down to is that it's completely unnecessary.
Good: some commenters here realize it's an attack on privacy
Bad: some still entertain the idea that we should do age verification using some sort of crypto primitives
There is no reason for age verification at all.
I am from the goatse generation. Rotten.com. steakandcheese. Horrific stuff tbh, I mostly stayed away from it, and I didn't need a helicopter government to protect me from it.
The moment you accept the narrative that kids need to be protected from the Internet you have already lost.
You've already condemned those kids to a life of slavery. So much for protecting them.
What we need is not online verification, but a competent government that does its existing job well.
Who's been arrested over the Epstein files? Who is protecting those kids?
No one.
That same government wants to "protect" your kids by KYCing everyone.
Give me a break.
Nah, that already didn't work because corps are very good at creating network effects in children and will set up multi-billion-dollar businesses around them. And then the kids with protective parents become the weird ones in school. I'll die on the hill of curtailing this stuff in a privacy-preserving way.
> I'll die on the hill of curtailing this stuff in a privacy-preserving way.
At some point you'll realize the contradiction in not trusting these "multi-billion-dollar businesses" to the point that you are risking enslaving humanity and "dying on this hill" and yet at the same time trusting those same businesses to implement this dystopian system in a privacy-preserving way.
When that realization hits, it will be a loud sound, possibly heard by nearby telepaths.
That's fine, say hi to the telepaths in advance for me
Over a decade ago, on the website of a cable news network named after vermin, you could watch an uncensored video of terrorists setting someone on fire.
Right? I especially don't understand where some of the "think of the children" attitude on porn sites comes from, as they for the most part already ask for your age, and if you didn't get some kind of amusement out of seeing tits as a teenager you're a liar
It's a function of our society becoming more puritan and conservative in the past 10-15 years. This has been a slow burn.
We are back to perceiving viewing boobies as an existential threat to people. Currently, sexuality is being demonized all around, and sexual morality is once again becoming a currency in society.
I encourage people to talk to some Gen Z kids. They're much more puritan than millennials. They're focused on virginity and the moral superiority of monogamy. It's bizarre.
I spend most of my social media time on tumblr, and it's really funny to see the whiplash of attitudes between the older and younger gen z. The younger ones tend to be the puritans and the older ones are all polyamorous bisexual furries who want to have sex with robots (obviously exaggerating but not by much)
There are lots of ways to implement identity verification while preserving privacy. It's actually a super interesting engineering problem. Estonia has an excellent model to build on. The government can maintain a "traditional" ID system based on documents and in-person verification, and provide you with a device similar to a yubi-key or Bitcoin hardware wallet that could be used to share specific, cryptographically verifiable claims with third parties, like your age, or even just a boolean "over 18", but also your name or other information if you choose, with a way to control the access and audit which parties have verified which claims with the govt.
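The "share specific, cryptographically verifiable claims" idea can be sketched with salted hash commitments: the issuer commits to every attribute, signs the combined digest, and the holder opens only the one attribute a site needs. This is a minimal toy, not Estonia's actual protocol; the HMAC here stands in for a real public-key signature, and all names are illustrative.

```python
import hashlib, hmac, json, os

GOV_KEY = os.urandom(32)  # toy stand-in for the issuer's signing key (real systems use public-key signatures)

def issue_credential(attrs):
    """Issuer: commit to each attribute with a salted hash, then sign the combined digest."""
    salts = {k: os.urandom(16) for k in attrs}
    leaves = {k: hashlib.sha256(salts[k] + json.dumps([k, attrs[k]]).encode()).hexdigest()
              for k in attrs}
    root = hashlib.sha256("".join(sorted(leaves.values())).encode()).hexdigest()
    sig = hmac.new(GOV_KEY, root.encode(), hashlib.sha256).hexdigest()
    return salts, leaves, root, sig

def verify_claim(key, value, salt, leaves, root, sig):
    """Verifier: sees every attribute only as a hash, plus one opened (key, value) pair."""
    leaf = hashlib.sha256(salt + json.dumps([key, value]).encode()).hexdigest()
    if leaf not in leaves.values():
        return False  # opened value doesn't match any committed attribute
    recomputed = hashlib.sha256("".join(sorted(leaves.values())).encode()).hexdigest()
    expected = hmac.new(GOV_KEY, recomputed.encode(), hashlib.sha256).hexdigest()
    return recomputed == root and hmac.compare_digest(expected, sig)

salts, leaves, root, sig = issue_credential({"name": "Alice", "over_18": True})
# Reveal only the boolean claim; "name" stays hidden behind its salted hash.
assert verify_claim("over_18", True, salts["over_18"], leaves, root, sig)
assert not verify_claim("over_18", True, os.urandom(16), leaves, root, sig)
```

A real deployment would use a signature anyone can check against the issuer's public key, and the hardware token would hold the salts so the holder controls exactly which claims get opened.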
In Poland, online banks do that. You can verify your identity for government purposes through your online bank. No need for the government to set up a scheme to confirm millions of people in person.
It's not like digital ID verification could ever be used to control the narrative in wartime, right?
I’d wager most people want more censorship of the internet.
It didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity, they could sit as a shim between token generation and the application, allowing governments to produce identity tokens and applications to verify them, with the shim blocking each side from knowing about the other.
But we don't have that, so he's probably right.
Since it is so harmful to let children use social media, why aren't parents being put in prison for abuse and neglect when they let their children use social media? Why should everyone else have to suffer when it's parents that should be punished?
(it's because it's not about protecting children)
Because this is a golden opportunity to erode privacy rights under the complete guise of "protecting the children." Same goes for "preventing terrorism" and various other appeals to authority.
It’s not online age verification. It’s online identity verification.
Would you vote for that? Prove who you are to visit this website? Would you do it to access Hacker News? Your newspaper?
Didn’t think so.
It's turning using a computer into a privilege that can be revoked by the government at any time, for any reason.
I want that. I'm tired of bots being half the internet traffic or more. It's driving the general public insane and anonymity on the internet has zero utility. If journalists need to send sensitive information, they'll always be able to use Tor.
I think there's plenty of utility. People can express opinions that they hold honestly but would fear social retribution for if it could be tied back to them publicly. For example, any political opinion that I hold that's modestly center or right of center I would not appreciate being attached to my name online since people are completely incapable of nuance or compartmentalization.
If you wouldn’t make a political statement in a town hall setting where you’re going to show ID, then you probably shouldn’t say it on the internet.
But keep in mind that these laws don’t result in your identity being public. They will ultimately result in the sites you’re posting on knowing that you’re an enumerated individual. The ultimate benefit as I see it is removing outsized leverage over public opinion by botting likes on your statement or otherwise operating tons of accounts. It should also eliminate threats of violence from the digital public square, since building a prosecution pipeline against those would be easy to do. Same with child grooming, but I’ll acknowledge there’s a way to make that argument in a glib way, as an excuse to realize some of the other goals. It is a real problem though.
After reading these comments, I don't want to hear any of you suggest that kids shouldn't be allowed to have unrestricted access to smartphones or social media ever again.
How did you think this was going to be enforced?
The question used to be: should we have online censorship?
Now, the question is: what should the implementation details of online censorship be?
I'd rather we focused on human-vs-bot verification, given the state of social media influence right now
Age verification requires identity verification once — but it doesn't require revealing your identity ever to a third party. With FHE (fully homomorphic encryption), identity data is encrypted on your device and never leaves it in plaintext. Not to the merchant, not to us as the verification service — nobody. We only compute on encrypted data and return a yes/no. I'm building this at identified.app
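For intuition on "compute on encrypted data," here is a toy Paillier cryptosystem — additively homomorphic only, not the fully homomorphic scheme the comment describes, and with parameters far too small to be secure:

```python
import math, random

def keygen(p=293, q=433):
    """Toy Paillier keypair from two small primes (insecure; illustrative only)."""
    n = p * q
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    # mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
a, b = encrypt(pub, 30), encrypt(pub, 12)
# Multiplying ciphertexts adds the hidden plaintexts: Dec(a * b) == 30 + 12.
assert decrypt(pub, priv, (a * b) % (pub[0] ** 2)) == 42
```

The point is only that a party holding ciphertexts can do useful arithmetic on them without a decryption key; full FHE extends this from addition to arbitrary computation, which is what a yes/no age check would require.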
Kids will always find ways around regulation. Look at cigarettes, vapes, alcohol, weed; they will just get it from their dealers. Pornography? I expect something like: download a torrent, get it from a classmate, share hard drives at school, get it through an older brother.
>Kids will always find ways around regulation.
And porn companies should always be held responsible for not doing their due diligence and freely distributing porn to minors. Which is already illegal in the US and most places.
It’s just defence in depth and wholly appropriate for it to be imperfect
And bootleggers will always bootleg, and smugglers will always smuggle. For that matter, murderers will find ways to murder.
Shall we just abolish all laws? None of them have any effect whatsoever, if they are even slightly imperfect... by your rule.
Yeah his point suggests we should stop ID'ing at liquor stores, physical porn stores, etc.
I'm not suggesting that actually. I look at my nephews and see them buy cigarettes, vapes, etc. from small dealers instead of stores. Not saying we should just let them smoke, just expecting that they will be able to circumvent online age restrictions as well.
My question is: are digital age verifications the best way to protect kids from the harmful effects of pornography? And my worry is: what unwanted side effects will age verification have for our society as a whole?
I agree, doxxing yourself to some shady gray-market adjacent data broker is not acceptable as age verification, and age verification was safer using the honor system as before. But for some communities, especially social media communities, some kind of verification is better than none, otherwise what's to stop them from being overwhelmed with alt accounts that are used simply for harassment or other targeted objectives?
People should not be able to misrepresent themselves on the internet; it may have been safe in low volumes, but it is scary now and will be outright dangerous as a modality in the hands of AI agents. If you think teen mental health is bad now, wait until social media campaign capabilities previously only available to nation states fall into the hands of ordinary school bullies.
Maybe age verification isn't the way to mitigate this obvious risk, but there has to be something that can be done to stop rampant sockpuppeting.
"Age verification is the Trojan horse. And once it is inside the gates, the surveillance state becomes operational."
Braindead meme. "Age verification" is not a "Trojan Horse". No one, regardless of age, _wants_ to use age verification. They are being effectively _forced_ to ask for it or use it. Age verification (identity verification) is a tradeoff. A "Trojan Horse" is something that people actually want, not an obvious tradeoff, a sacrifice, a compromise. No one is being "fooled" into complying with identity verification in the form of age verification
The surveillance state is already operational. If you use "platforms" then you are already inside the gates with the enemy. The surveillance apparatus is operated by so-called "tech" companies that perform data collection, surveillance and online ad services as a "business model". These companies provide access to and information about internet users to advertisers and law enforcement
If "age verification" dissuades some people from accessing "platforms" (servers) run by so-called "tech" companies, then that is a loss for the companies and a privacy gain for those people. The "hill to die on" is not using "platforms"
These companies are the reason that "age verification" is proceeding. They push the allegedly harmful content because it makes money for them. Further, the companies' "platforms" make "age verification" possible. This is because they intermediate transmissions between internet users through these so-called "platforms". Governments need not comply with laws that protect individuals from government surveillance when they can target "platforms" instead
It is disturbing that anyone would want to "die on a hill" to save "platforms" from "age verification". These third parties are surveillance companies. They built the surveillance state. They already know who you are, they do not need government-issued ID
If the people spreading this "Trojan Horse" meme cared about surveillance, including identity verification, then they would not be defending "platforms" from regulation, they would stop using the "platforms"
I can't agree with this enough and yet I think the long term danger is masked by the current problems for the majority of voters. I'm not hopeful.
Just a reminder that YC funds many of the companies pushing these laws and building the surveillance state.
I would say be careful what you choose to believe. Online identity verification is the only way to end the war that’s being waged on the American people by foreign states via social media. If I were a bad actor, I would very much want to convince the public that this is a bad idea.
This seems hyperbolic as it's actually a long path between age verification to full digital identity tracking. But I agree that pushing the burden of verification to websites is ridiculous. Like the GDPR requirements where every webpage has an annoying consent modal, the verification and preferences should be controlled on the device you use to access these digital services. My browser should know and enforce my cookie preferences in a way that has a uniform user experience. Likewise, if I am a minor, my parent should provide me with a device (or profile on a device) which knows my age and can use that to inform online services of the age of the user rather than needing to go through a separate process for each service.
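A minimal sketch of that device-side idea, assuming a hypothetical "X-Age-Bracket" request header set by the OS profile — the header name is an invention for illustration, not an existing standard:

```python
# Hypothetical server-side gate: the parent-configured device profile declares
# an age bracket on each request, and the site checks it against its own rating.
def gate_content(request_headers: dict, content_rating: str) -> bool:
    """Return True if the response may be served to this client."""
    bracket = request_headers.get("X-Age-Bracket", "unknown")
    if content_rating == "adult":
        return bracket == "adult"  # block minors and unknown clients alike
    return True                    # unrated/general content always passes

assert gate_content({"X-Age-Bracket": "adult"}, "adult") is True
assert gate_content({"X-Age-Bracket": "minor"}, "adult") is False
assert gate_content({}, "general") is True
```

Note the fail-closed default: a client that declares nothing is treated as a minor for adult-rated content, which matches the "assume not kid-safe by default" principle without the site ever learning who the user is.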
I have a fair bit of fatalism on this one.
Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.
Much like Brexit or, for that matter, Trump's reelection, I just don't have much faith in the wisdom of the democratic collective consensus anymore, and I don't think it'll get any better in an AI-misinformation echo-chamber world. Onwards into dystopia.
Exceedingly gloomy take, I know
Contacting my representatives is about as effective as making a silent wish. Whenever I've done it, I'll either get no response, or a boilerplate reply which basically says "I'm doing this, go fuck yourself". Then I'll be added to their spam list. The truth is that my reps don't represent me and they're going to do what they want regardless. After all, I'm not the one backing the truck of money up to their front door.
Yeah I emailed a representative in the UK too.
Took forever to get a response and likely achieved little, but to their credit the response wasn't entirely canned and did at least give the impression that they understood what I'm saying
This whole problem is basically parents admitting they can't parent.
Hopefully this will give yet another push towards decentralized, open source services. Platforms where no one and everyone is responsible and the state does not get to decide the rules.
I don't think most people actually want that in practice. That's why we don't have it right now.
I don't think most people have been inconvenienced enough yet. ID verification is invasive enough and should cause enough friction to push another bunch over the edge.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they claim there is. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make it as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc.). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through our impatience and self-importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
Yeah, yeah, but the children...
For a forum that supposedly consists of hackers and tech-savvy people, this number of comments supporting age verification is concerning.
The author has said a lot about what kind of future awaits with mass surveillance and AI, but I believe it’s not enough. Technofascism is not that far away.
Reminder: age-verification laws are not being passed to protect anyone but social media companies. In addition, they will be used for a massive surveillance state. This is the DMCA of the 2020s, but far worse.
In other news Greece is banning online anonymity. The final form of age verification is here.
https://www.euractiv.com/news/greece-to-ban-anonymity-on-soc...
Ok, maybe that’s a silly thought, but… couldn’t this be provided by Apple/Google anonymously?
When you set up kids devices in your family they ask you to provide the birthday anyway.
I’m keen to see the arguments against this.
Further empowering and depending on either of those companies as a middleman in our lives should make us nauseous.
Usually, fear is the realm of governments. Modern republics are basically legitimized around fears of something terrible happening: communism, narcotics, the ozone hole, coronavirus, terrorists, immigration, globalization, unrecycled waste, or the greenhouse effect.
Private entities being frontrunners in AI fear means either that these companies have too much unchecked power or that they are covert instruments of governments.
I'm not a fan of online age verification, but this is completely absurd:
> Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...
No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into massive first-amendment constitutional hurdles. It would be struck down easily.
There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.
We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
We've spent the past three decades trying to invent ways to deduce identity and build profiles of what would otherwise be anonymous users. When the government steps in and compels people to formally identify themselves by their government names, what would you expect these companies to do? They're not gonna say "no thanks."
Why the heck would the government compel people to formally identify themselves to read or comment on a newspaper or a blog? That's absurd and unconstitutional in the US.
You're starting from an assumption that is invalid to begin with.
I don't know. Why would the government compel someone to formally identify himself to put cash in a box at the bank? Why would the government compel people to take off their shoes to get on a plane? Or submit biometric data to drive a car? KYC for a phone line...
It's not invalid. I have no reason to believe that this isn't going to creep.
Ironically, I think we need more and stronger local social networks that have high identity validation and are "safe" spaces for the plebs, so that the perceived "threat level" from the free internet gets lower. Basically hide the real internet a bit behind a small rock. It's a slippery slope, but it might be the better strategy unless some democratic societies manage to put more modern "freedom guarantees" into their constitutions.
Enjoy dying on that hill then because without mandatory ID for potentially harmful services like social media, we will continue to descend further into the brainrot that many of you suffer from today.
I'm curious to hear your theory on how it saves us from the brain rot!
Presumably it makes people fearful to post things that differ from the norm, which is what I'm assuming parent means by brainrot (wrongthink).
Brainrot isn’t wrongthink. Brainrot is brinksmanship and zero sum discourse. As a member of the public it’s virtually impossible to know where the real consensus is on any issue today due to wishful thinking backed up by gigantic botnets. Brainrot will make people certain that they’re part of some majority consensus to the point that they will fight legislation like this because being provably part of a fringe line of thinking would cause them psychological pain. Right now, everyone (including the “moon mission was fake” fringe) thinks they’re part of a majority consensus. Even sovereign citizens and flat earthers believe they’re in a much larger cohort than they really are. A lot of these ideas are harming people offline in addition to degrading their personal mental health.
I’m betting that bot activity plummets once accounts are tied to real identities. That’s a discourse benefit. I’m also betting that discussion will become a lot more rational once people have to put their names on what they are saying. Death threats also become more easily prosecutable.
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
https://news.ycombinator.com/item?id=47809795
https://claude.ai/public/artifacts/7fe74381-a683-4f49-9c2b-1...
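For readers who don't follow the links: a minimal sketch of the one-time-code mechanism, under the assumption that codes are random bearer tokens sold over the counter after a cashier glances at an ID, redeemable exactly once. Class and method names are mine, not the proposal's:

```python
import hashlib, secrets

def _h(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

class CodeIssuer:
    """Sells single-use bearer codes for cash after a cashier glances at an ID.
    Only hashes of unspent codes are stored, so a redeemed code can't be
    linked back to any particular buyer."""
    def __init__(self):
        self._unspent = set()

    def mint(self) -> str:
        code = secrets.token_urlsafe(16)
        self._unspent.add(_h(code))
        return code

    def redeem(self, code: str) -> bool:
        h = _h(code)
        if h in self._unspent:
            self._unspent.discard(h)  # single use: spend it
            return True
        return False

issuer = CodeIssuer()
code = issuer.mint()
assert issuer.redeem(code) is True   # first redemption succeeds
assert issuer.redeem(code) is False  # replay is rejected
```

Because no identity is attached at issuance and each code dies on first use, the site learns only "an adult bought this code at some point," nothing more.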
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take your ID to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
More and more places I go in other states besides Utah, try to scan IDs when purchasing alcohol.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
California grocery stores scan ID too
How does this prevent a secondary market for one-time codes? I, as an adult, can just get a code and sell it to someone else.
Stings that catch adults reselling codes.
It doesn't have to be perfect.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
This type of system is a horrible idea for the following reasons:
1) the cards can just be re-sold which creates a black market and defeats the "cashier physically saw the person buying the card" angle
2) nickel-and-dimes people for simply browsing the internet (verification-fee dystopia, anyone?)
3) related to #2, it creates winners in the private sector since presumably you need central authorities handing out these codes
I abhor the idea of digital ID verification, but if we're going to do it, let's not create a web of new problems while we're at it.
Is it even theoretically possible to have bearer anonymity and no reselling option at the same time?
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
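To make the incentive structure concrete, here is a toy sketch of the bond mechanism described above. This is not real zero-knowledge cryptography — a plain hash commitment stands in for the ZK machinery, and all the names (`ToyIdProvider`, `issue_age_token`, etc.) are hypothetical illustrations, not any actual scheme:

```python
import hashlib
import secrets


def commit(claim: bytes, burn_secret: bytes) -> str:
    """Hash commitment binding a claim to a burn secret."""
    return hashlib.sha256(claim + b"|" + burn_secret).hexdigest()


class ToyIdProvider:
    """Stands in for the state: issues age tokens and keeps a revocation list."""

    def __init__(self):
        self.revoked = set()

    def issue_age_token(self):
        # Alice receives both the bearer token and the burn secret.
        # To sell the token she must hand over the secret too, so any
        # buyer gains the power to destroy the token at will.
        burn_secret = secrets.token_bytes(16)
        token = commit(b"age>=18", burn_secret)
        return token, burn_secret

    def exercise_option(self, token: str, burn_secret: bytes) -> None:
        # A troll buyer (Bob) burns the token he bought from Alice;
        # Alice now has to re-verify her ID in person.
        if commit(b"age>=18", burn_secret) == token:
            self.revoked.add(token)

    def is_valid(self, token: str) -> bool:
        return token not in self.revoked
```

In the full version, the commitment and the revocation check would be replaced by zero-knowledge proofs so that neither the service nor the ID provider can link a token to Alice; the sketch only shows the incentive: a bearer token whose buyer holds a credible threat against the seller, which is what discourages resale.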
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
'consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.'
I don't think it applies; the difference is that codes are digital and can be sold over the internet, anonymously, in a scalable manner.
I still like this solution because all the solutions I've seen have flaws and this one being so easy to explain makes it great to campaign for.
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
There’s age verification when you buy a gun. Not on a gun handle.
Kids should not be able/allowed to buy/use devices that are dangerous for them
But the device itself should not care, on the fallacious premise that a kid "might be able to" use it
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
Are you going to give your cc number to every website in the world? Also, is that really an ID?
It's not necessary to give it to every website. Verification to the website can be a true/false from the OS. In fact that's how it already works now.
I would say it's not really an ID no, which is the point. The post is claiming that a digital ID is necessary for age verification, but clearly it isn't.
Online age verification is an example of the Motte-and-bailey fallacy (https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy, https://slatestarcodex.com/2014/11/03/all-in-all-another-bri...).
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States already have broad power to do what is being feared in this thread and haven't used it; to think that they're waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think an identity layer, with the right protections for users, should be incorporated into the internet itself. It should have happened at the beginning of the net; its absence is probably a result of a lack of foresight by the creators of ARPANET.
What I'm hearing you say:
> Our freedom is already being eroded, saying that it is being eroded more is just fear mongering.
> They want to hurt you, instead of fear mongering, find a middle ground where they're hurting you differently.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection or go listen to sounds of 120dB if you want to test your hypothesis that light and sound can't be unhealthy.
Or maybe you aren't being literal and are just saying that what children see and hear has no influence on their development. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing, I believe, two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK, but you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide, or you dying from spousal violence, or your protracted grief and wasting away because your child accidentally killed themselves. To say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful is, frankly, messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
You posted a giant, AI generated block of junk science.
Age Verification is very offensive. It assumes guilt and creates risk to no societal benefit. https://en.wikipedia.org/wiki/Right_to_privacy
Agree
There is a sudden concerted international push for online age verification, and we do not know where this push originates from. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
It's true for a lot of things in Western countries.
Evident when the fight against "hate" was suddenly everywhere, and also during covid.
Politicians looks to each other. There’s nothing new in that.
If this was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse, and harm to everyone, so that there wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile ons, snuff videos, the spectrum of grift, culture of abusing the "other side")
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, its about large media companies not wanting to take responsibility for their own shit.
meta desperately want kids to sign up. There are no penalties for them pushing shit on them. If an FCC registered corp had done half the shit facebook did, they'd have been kicked off air and restructured.
So frankly it's too fucking late. Meta, Google, and TikTok will still find ways to push low-quality rage bait to all of us, and divide us all for advertising revenue.
Alternative take: The fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation of the US electorate, ultimately impacting the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
You'll need to explain how age verification fixes that.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
That's not what they said.
Playing devil's advocate outside of debate club only serves to promote the devil's point of view.
State your well reasoned opinion where you have considered the facts. Or just say you are in support of this openly.
Disagreed. I'm against invasive age verification methods, but allowing inaccurate expectations to proliferate often creates a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as that prevents betrayed expectations while still showcasing the unacceptable downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
Why is it always “think of the children” used to abrogate the rights of adults?
Because it's very easy for the creeps already thinking of your children to paint those rejecting this type of law as people who want to see children hurt.
Regardless how stupid this argument is, rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
> Pretty much every person in every culture cares deeply about their children.
I would substitute "superficially" for "deeply". If my parents had found some way to prohibit porn when I was an adolescent, I wouldn't say they cared deeply about me; I would say they were misguided and authoritarian. The "care deeply" idea you are putting forward is just an attempt to instill whatever the current societal norm is into the young.
Because adults remain children. As in, their parents' kids, and therefore property. [edit: I should mention they are also property of the state beyond that] It's less explicit in the US, I guess, but in some places that's very blunt - if you don't support your parents enough, you can be sued for abuse. And there are situations where an adult in the US has been declared too irresponsible and forced into a conversion camp by their parents. It's insane, yes, and if you're lucky enough this might be entirely invisible to you. But if you're gay or trans or autistic and get a bit unlucky, this can become a very harsh reality.
Protect the children refers to a type of property, not a type of human.
I agree. I don't call it "age verification" though - it is age sniffing. And it has nothing to do with children - that is the lie.
What is fascinating is to see how governments ALL fall for it. There is zero resistance. This is fascinating to me. It shows how little real effort is necessary once you have the lobbyists in place. Kind of scary to witness too.
It is an apartheid system. All apartheid slavery systems will eventually die, so age sniffing will die too. But it will most likely be a long fight as more and more money will be invested by crazy corporations such as Palantir and others.
The whole "debate" is already illogical, by the way. Let's for a moment assume "but but but the kids!" is a real argument rather than the strawman it is. Say I am a "concerned parent" with three young kids and no tech background. The kids see "unfitting content" on the antisocial media such as Facebook and whatnot. So, what do I do? Well... they have a smartphone? Aha, so I am not so concerned? Is having no smartphone not an option? OK, so I say they can have a smartphone, but they may not use antisocial media.

First: in any free society, is it acceptable that this kind of censorship is imposed on ALL kids? What if I, as a parent, do not agree with it? Well, tough luck - the laws suddenly force you into the age-sniffing routine. But even for those parents who do want the state to act totalitarian: why would I want to hand over that control to ANY politician? That makes no sense to me. I am aware that some parents may think differently, but do all parents think like that, even IF they buy into the "we protect the children" lie? I don't want ANY information from ANY of my computers going into private hands. So the whole argument makes zero sense from the get-go.
Of course those who know how things work, they know that this is the build up towards identifying everyone on the world wide web at all times AND to make access to information conditional, e. g. if the state does not know you, you can not access information. Aka a passport system for the www. Built right into the operating system too. Windows already complied. MacOSX too. The battle for Linux will be interesting; it may be some hybrid situation, like systemd. And the systemd distributions will all succumb to age sniffing, courtesy of Poettering "this is really harmless if we store your age in the database, just trust me".
>And it has nothing to do with children - that is the lie.
You're not qualified to say that because you aren't a proponent of age verification. That's just imputing motives.
As a proponent of age verification, I can tell you it's absolutely about protecting kids from damaging services like porn. It's a common-sense control, and that's why it has bipartisan support in the US during a time when there is nearly zero bipartisan support for anything.
We now know all the arguments. No more need to persuade anyone.
People will show what they are made of.
An attestation-like system to detect humanity at time of post is absolutely useful for online spaces in the era of AI slop.
The writing style of the author is very annoying.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Until people hit "attest" and then copy the text from ChatGPT.
Those people would be subjected to permanent, identity-bound bans.
It could be done with anonymous credentials though. No tracing to who the human is.
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
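Those two properties can in fact coexist. Deployed systems achieve this with anonymous-credential cryptography (BBS+ signatures, Privacy Pass-style tokens, and the like); the toy sketch below — all names hypothetical, and a bare hash where real schemes use blinded issuance — only illustrates the key property: a per-platform pseudonym that is stable for one platform (so a ban is durable) yet unlinkable across platforms:

```python
import hashlib


def platform_pseudonym(master_secret: bytes, platform: str) -> str:
    # The same user always derives the same pseudonym for a given
    # platform, so a ban on that pseudonym sticks. But pseudonyms for
    # different platforms cannot be linked to each other, or to a real
    # identity, without knowing the master secret.
    return hashlib.sha256(master_secret + b"|" + platform.encode()).hexdigest()
```

One caveat this toy version doesn't address: here the credential issuer could compute every pseudonym itself. Real anonymous-credential schemes prevent that by having the user derive the pseudonym inside a zero-knowledge proof, so even the issuer cannot link pseudonyms back to the person.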
Very unpopular opinion here on HN: one can't stop it without direct physical action against those who push it.
What do you mean by direct physical action? Do you have some examples?
I will be permabanned on HN for these examples.
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
> who cares this much about the internet?
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
So you're worried that due to age checks you'll no longer be able to anonymously
> register for classes, pay bills, submit something to the government
is that right?
> is that right?
No.
You asked a specific question, and I answered a specific question (which I even quoted in my response).
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to the guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
What if all services require ID?
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop skip and leap from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
> The best way to not be in a digital cage is to opt out of the current digital products.
Bullshit. These are all-encompassing monopolies and government services. More likely, they'll ban you and you'll end up having to go to court out of desperation to demand that they service you.
This is very limited thinking. If you lacked this sort of imagination 20 years ago, you wouldn't have been able to predict today.
> Frankly I would welcome a world in which kids are not using Instagram or TikTok.
This is the sort of passive reactionary nonsense that causes the danger that we're in. Everything isn't something to give up lightly, even if you think that it will force your neighbor to turn his music down, or get rid of bad reality television. I don't like kids on social media either. I don't like adults on it. I think kids are suffering more from surveillance than from TikTok.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite a while ago. Instead, tech people pushed Chrome and Android and Gmail and ads onto us.
Chrome, Android, and Gmail are optional to use.
So is social media.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
I'm mostly alarmed by kids parroting Andrew Tate and a whole generation being raised propagandised by Tiktok brainwashing.
Why? Kids have had access to the internet for over 30 years. What is the tiktok brainwashing (I don't use it), and how do you qualify the danger of it from say google news brainwashing, or even (gasp) public school brainwashing? I mean, if we're going to group ban information, at least let people in the local communities make those decisions. Otherwise, we're going to get the Epstein class making these decisions.
Is Google tracking which teenagers make which posts on 4chan?
Curious about via Google Chrome versus not
A lot of people dismissed RMS's "Right to Read"[1] essay long ago. All the things it was warning about have come to pass, in spades.
1: https://www.gnu.org/philosophy/right-to-read.html
It's mind boggling how far Stallman saw into the future. Saddest part is we're losing this war. They're going to destroy freedom of computation, freedom of information, and it turns out that... Nobody cares. Nobody but a bunch of nerds.
Responding to tone but not to content is what a dog does.
looks like you ruffled some feathers with this one
Tone was off
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
Nothing "hyperbolic" about the points made. If anything it's not nearly extreme enough. People have no idea how bad things really are.
>They are counting on you caring more about sounding reasonable than protecting your kids from a system designed to control them forever.
Do you actually have an argument to make?
He’s 100% correct.
For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to be said on the matter if that's as far as it went, but it isn't.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word "online" from the above paragraph, because online is the real world. Touch grass, sure, but there's no way online isn't real. Are these words not real simply because I telegraphed them to you?
That’s not a world I want to live in.
>For a start, children are their parents' responsibility
And not distributing porn to children is a porn company's responsibility.
You are repeating a very common talking point but its not a good one.
Age verification laws make it possible to hold services providers liable for breaking the law (it's already illegal to distribute porn to minors in many places, like the US).
It's both true and completely irrelevant that parents should do a better job protecting their children from harmful services online.
> For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes, it's the parents' responsibility. Do you think a middle-class single mother has the resources to keep her kids entertained and off social media the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
So why are there laws that dont let them buy cigarettes and alcohol?
Did social media exist when you grew up?
Xanga and MySpace are what my friends had; yes
It's weird that none of your arguments or proposals hold accountable the responsible parties.
You want to force us to compromise when we were minding our own goddamn business.
Responsible parties like porn companies that distribute porn to minors? Parents are still accountable with age verification laws.
If parents suck at parenting, they will suffer.
If porn companies distribute porn to minors, which is illegal in many places such as the US, they will not suffer. Unless you start holding them accountable.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
Maybe you're not the target, then.
I haven't heard too many people say these extreme-sounding, yet at least arguably true points out loud.
Someone should be saying them, and the fact that it's not your particular cup of tea may not be the biggest issue here.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake: you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand and towards the perceived faults in the author's communication style. This is a rhetorical sleight of hand, and it's highly disingenuous.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
> how unreasonable he sounds
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously and without the content of your speech being sold to Walmart and the cops. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
Do you think children are harmed by porn? Did you know it's illegal to distribute porn to a minor in the US?
It seems reasonable to me to hold porn companies responsible for distributing porn to minors.
That's a discussion that's entirely tangential to age verification. However, I think porn should be illegal entirely as it's just prostitution. As such I think porn companies should not exist, the same as brothels or heroin dealers. If they have to exist for practical reasons along with other objectively harmful things, such as alcohol, marijuana or gambling, then obviously they should be regulated to ensure they're not targeting minors.
That does not detract from the fact that the people arguing for age verification are using "think of the children" in order to push surveillance.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
It seems the Epstein class didn't like this comment
I'm completely OK with verifying someone's age before distributing age-restricted services to them. That's what an age-restricted service is, and obviously we shouldn't let porn companies distribute porn to minors (it's already illegal in most places). Just don't use porn, facebook, online gambling etc. if you don't want to share your identity.
I can see why it's unfortunate, but the idea posited that it's somehow illegal in the US is ridiculous. You have no right to watch porn anonymously at the expense of holding porn companies liable for distributing porn to minors.
Internet 1.0 was largely read only, ephemeral, or decentralized. Chat rooms, IRC, personal webpages, etc. There was anonymity and there were not age restricted services.
Internet 2.0 introduced age-restricted services, and enforcement lagged. The enforcement is now catching up. You can still do all the Internet 1.0 things anonymously, but you can no longer gamble online as a 14-year-old, and hopefully soon you won't be able to watch porn either.
Private companies can now link all your online activities to you. Not to an advertisement ID, but directly to you, your loans, your health data, and whatever they're selling on the black market. Every data breach becomes a hundred times worse. It was already almost possible to learn everything about you by buying data; now it's even easier.
The point of this is not to verify age really. It is to verify identity. There's no way to prove someone is some age without presenting a legal ID.
Also, it's not just porn, facebook, online gambling etc. Under some bills, it's the OS itself. So ALL your activities.
This argument as framed doesn’t make any sense. Porn is (and WAS) Internet 1.0.
There was porn before most everything on the web. Porn is also speech / art.
Anonymous access should be available for any website that wants to share their content on the Internet provided they have the rights to that content.
States that seek to limit that could make a legal argument that they have the right to limit access, but in the end it’s infringing speech. Worse, it’s unenforceable.
And yes, I would make the same arguments for people posting hateful shit or misinformation.