I heard someone on a podcast call social media algorithms "the modern-day cigarette" and that really resonated with me. These companies know their product is addictive and bad for users, but they keep pushing it anyway. Like cigarettes, it's bad for everyone, not just kids. I made an algorithm blocker for Safari because of that and it's actually crazy how much more pleasant social media is if you don't have recommendation algorithms at all. I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids, but I understand why it's starting there...
If you didn’t notice, this comment is an ad for a paid app trying to capitalize on social media anger. I respect the hustle, but this is not a neutral comment on the topic due to the financial interest. There are many free browser extensions for filtering social media feeds if someone wants an alternative.
The modern-day cigarette is such a perfect metaphor for social media. A cabal of unfathomably wealthy companies spreading their harmful products across the world; making them as addictive as possible while actively burying the research which proves how harmful they are. I truly hope one day we'll look back on social media and smartphone use the same way we regard smoking.
Look up images in Google with `eu cigarettes boxes`. Banning is a thin wedge, but I think we need something like these warning labels for social media.
Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.
I don't see this with a user interface in a browser at all. If you want to reason that way, why are regular ads allowed? They piss me off. Why do I have to see them? They condition my brain to want to buy crappy products. So why is there no ban there?
Let's face it - the EU is on a path of "Minority Report" here.
> I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids
Yeah, they try to restrict what we can do. We oldschool people call this fascism. See the EU trying to destroy VPNs. And there is a meta-strategy we see here - many lobbyists are activated and try to "sync" laws that never made any sense into as many countries as possible. I see where the corruption happens. And I don't buy the "we protect kids" lie for a moment.
Already Hippocrates was linking the mind to the physical brain, and if you've never felt a physical reaction from looking at the fairer sex I feel bad for you son, yet if you got ninety-nine problems at least sex ain't one.
It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of the people spend their entire working day trying to make phishing less effective.
To hold this view you have to think of information as "not real", not like "real" molecules and receptors, the mind as distinct from the body, and then restrict the legal definition of harm to only "real" things.
This is an odd thing to do, because:
- information is real, it exists in the universe.
- the harm of social media is real, as measured by many of the same measures as the harm of smoking
Why not do something about ads? Now, that's a good thought; we should do that too.
I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.
That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics via false or misleading information, not because it's true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.
> Yeah they try to restrict what we can do. We oldschool people call this fascism.
Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.
Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0.
In the case of Instagram: you show the videos from the people the user follows, and nothing else. No more recommended short videos at all. Possibly a search box.
If you search on YouTube, it can rank results any way it wants, just not using, e.g., anything from the viewing history. No "related videos" column. That's what YouTube used to be. And YouTube (unlike TikTok) worked well before it had rabbit holes.
For TikTok the situation is worse. Their whole app just doesn't exist without the custom feed. This would make YouTube 2010 YouTube, and Instagram 2010 Instagram (great!), but it would effectively be a ban on TikTok's whole functionality (again, great!).
Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.
If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.
The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.
For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
The internet solved the problem of millions upon millions of users in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
I would say, no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So, what user chooses to follow, or the same for everyone in the country.
This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.
You rely on unambiguous, "physical" properties of the videos.
There is a physical property of all the videos: the time of publication.
There is a physical property of all the channels: did you subscribe to it, or not?
So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.
Now, of course, a brand new user would have no subscriptions - you show them a search box.
But then your search algorithm has to weight the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc., etc...)
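To show just how trivial this is, here's a minimal sketch of the subscription feed described above. The `Video` records, channel names, and sample data are hypothetical, just to illustrate the two "physical" properties (publication time and subscription):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    channel: str
    title: str
    published: datetime

def build_feed(videos, subscriptions):
    """Subscription-only feed: no personalisation, no viewing history.
    The only inputs are the publication time of each video and the
    user's explicit subscription list."""
    return sorted(
        (v for v in videos if v.channel in subscriptions),
        key=lambda v: v.published,
        reverse=True,  # newest first
    )

videos = [
    Video("chess", "Opening traps", datetime(2024, 1, 2)),
    Video("cooking", "Sourdough basics", datetime(2024, 1, 5)),
    Video("news", "Daily recap", datetime(2024, 1, 3)),
]
# Only subscribed channels appear, newest first; "news" never shows up.
feed = build_feed(videos, subscriptions={"chess", "cooking"})
```

The whole thing is transparent, auditable, and identical for every user with the same subscriptions - exactly the property the comment above is asking for.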
I'm sorry, but I have a "subscriptions" page on YouTube and Substack; it's chronological, and it shows me what I want to watch. You keep that.
There is a "home" page in both services that is algorithmically built, and it shows me crap that the algo wants me to watch. You get rid of that.
Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producers.
Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.
If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.
Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]
That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.
I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this was actually beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration).
I'm advocating for "accepting your responsibility as an editor".
> If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present
Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.
A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.
Goes away, or is liable for the content promoted to the frontpage under the OP's take?
But I'd agree, that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As is, even with something like Spotify, where you _are_ paying, there's no easy way to "reset" your profile to neutral recommendations.
This kind of complex legislation already exists in many areas of the law, revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".
I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
An easy benchmark to set up: any feed that displays data in a way other than the following is considered an editorial choice, and thus the platform is liable as a publisher:
1. In chronological order, and only filtered based on user-selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platform's terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
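For what it's worth, the two criteria above are mechanical enough to express directly in code. A minimal sketch (the post dicts, ordering names, and sample data are hypothetical):

```python
from datetime import datetime

# Each ordering is an explicit, transparent sort rule the user can pick;
# none of them depends on engagement metrics or per-user behavioural data.
ORDERINGS = {
    "newest": (lambda post: post["published"], True),   # reverse chronological
    "oldest": (lambda post: post["published"], False),  # chronological
    "alphabetical": (lambda post: post["title"].lower(), False),
}

def non_editorial_feed(posts, ordering="newest", user_filter=None):
    """Criterion 1: chronological by default, filtered only by a
    user-selected predicate. Criterion 2: any other order only when
    the user explicitly selects it."""
    key, reverse = ORDERINGS[ordering]
    selected = [p for p in posts if user_filter is None or user_filter(p)]
    return sorted(selected, key=key, reverse=reverse)

posts = [
    {"title": "B post", "published": datetime(2024, 1, 1)},
    {"title": "a post", "published": datetime(2024, 1, 2)},
]
feed = non_editorial_feed(posts)  # default: newest first
```

Anything the platform injects outside of rules like these (a "for you" ranking, an engagement-weighted sort) would then, under the proposal above, count as an editorial choice.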
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.
The moment you add other entities to the list (e.g. ads inbetween posts), then it's also subject to the same restrictions.
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected based on a court or a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means, because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
The conversation has iterated a couple times and one point that people (on this site at least) are stuck on is “well however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm” but it does make something of a point, what exactly the general public and politicians mean when they say that… it’s a bit ambiguous.
I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.
There's another angle to this besides the algorithm. When it comes to kids today, there seems to be peer pressure and the need to maintain a social media presence, to be cool online, among your peers and so on. Beyond that, some kids have their lives devastated by others secretly (or not) recording and publicly sharing their vulnerable moments in life. That can happen in a night and profoundly damage someone.
"this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."
ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.
The mechanism would be that if the user has chosen to follow an account, then posts from that account fall under common carrier. If the platform chooses to show you other posts, then it's under their responsibility.
This is a bit of a systems difference. Under a French civil-law system you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to common law derived from them. Though I'm not an expert on this.
Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem there. I download TikTok and search for SexyDancingDinosaur I heard was on there and press follow.
How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted? "Impartial common carrier" is not even an extant legal concept.
It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.
Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.
I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.
On the radio I heard a reporter talking about things China does during school exams. Apparently all schools have exams at the same time and during that period, social media shuts down at night. I forget the exact hours (10pm - 6am maybe). I'm starting to think that would be a great policy in general for everybody.
I think they also said AI companies go offline during exam hours, but I may have got that wrong.
Absolutely wild that we’re seeing proposals to shut down parts of the internet and regulate when people can talk to each other on social platforms as a real suggestion on HN.
I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.
Toast notifications were the big mistake. Also badges. In my perfect world, the only thing that would retain the ability to alert the user that someone tried to contact them would be voicemail, subject to the same spam laws as everything else.
You might have the self-awareness and impulse control to stop yourself from getting addicted to these apps, but the majority of the world's population does not.
These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.
Against that, the average person has no chance. The power balance is hugely uneven.
A responsible government which actually cares for its people has a duty to protect them from abuse like that.
Because, in general, we see adults making bad choices as a price worth paying in a free society, but we recognize that children lack the maturity and judgment to make those choices for themselves.
Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.
Same as for the cigarette: it's a lot easier to regulate stuff for kids, because we as a society tend to agree that they need to be protected. Much harder to do with adults, because it is much less of a consensus.
The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent and I wish more of the web and honestly legacy media too would behave that way. Things that are of no interest to me are not trying to waste my time and attention.
> I think especially restricting endless scrolling
The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.
FWIW, social media use is mediated by ∆FosB expression, so the less you use social media, the less you want to use social media. Timeline of ~3 months.
But they are so profitable, and we need them to track people around and create a police state efficiently. Ah let's keep them but just fine them as well for the show.
I don't agree with this. Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing. At some point, books, movies, comics, etc, etc might have been considered addictive.
Social networks in general should be banned for underage people, that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for the people who enter it. No bullshit operating-system age verification that's, trust me, totally intended to protect kids and not to spy on you.
> Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing.
What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.
Everything you do “triggers dopamine”. Reading HN triggers dopamine. Eating breakfast triggers dopamine. Dopamine is also important for movement and many other things.
This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.
It's well known, but I'm not convinced it's true. Dopamine levels are measurable by blood test, and some drug abuse studies perform that measurement. Why does the literature on social media and dopamine exclusively talk in vague and general terms, rather than pointing to specific studies where researchers measured dopamine before and after 30 minutes of TikTok scrolling?
Addictiveness is measured by ∆FosB gene expression. The 'addictiveness' of a substance or activity is qualified by how much ∆FosB is expressed. It's decidedly not just a completely subjective thing. Books, movies, comics, etc. can all still be measured on this scale. Everything is addictive in some capacity, generally.
Addiction at least is quite straightforward to differentiate from otherwise engaging things by whether it causes significant harmful effects. E.g. per Wikipedia "Addiction is a neuropsychological disorder characterized by a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences."
Addictive would be then something that (for a substantial portion of population) has a tendency to cause addiction.
At what point should the responsibility fall on the parent to protect their children from harm?
Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.
It's not so easy; they need phones and social media to communicate with their friends. They also need to fit in and find an identity. The algorithms all these platforms have, basically engagement engines, are harmful for humanity as a whole. They are marketed as recommendation engines but it's 100% about engagement, and that is why the content you see is mostly engineered to create dopamine, either from being fun or rage from being provocative. It's built to serve one purpose: to keep people using the platform as much as possible. Not because the platform is good, but because it serves content that maximizes engagement.
I read a post about someone saying his wife worked for a snack company. They used MRI scans to see how much salt (or sugar) they should put in the snacks to maximize the response in the brain. Sounds disturbing, right?
Well engagement engines are the same thing. It's artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not what is best for the individual in terms of health. It's what generates most money to the owner of the platform.
It should not be allowed to build a business around something that exploits human brains. Basically biohacking our brains for profit.
I am from Eastern Europe and I’ve been living for many years in Western Europe. Where I come from, kids get their first phones when they start school at 6 (there’s a pre-school year) simply because every other kid has one. I keep coming back in my mind to two examples from my birth country. The first: a friend’s kid carrying an 8-inch smartphone in his hand everywhere, because the phone was as big as half his thigh and he would otherwise have to carry a bag for it. The second was on a visit to the zoo: I was on a bench next to a family with two young children in a cart. And both children, who couldn’t have been older than 4 or 5, were scrolling TikTok, which was showing them children’s content!
In contrast, in Western Europe, my son is now in the sixth grade, more than half his class doesn’t have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now doing a class trip where they were told that there’s a pay phone at the hotel, in case they want to call the parents - our son promptly informed us that he’ll rather buy a pack of Pokémon cards than call us and 3 days is not so much anyway.
And it is not only at school; he travels for tournaments with his team every other week and mobile phones are absolutely forbidden on the team bus. Children read, play games (including chess on a magnetic board), sing and swap stories for hours at a time.
Replace TikTok with cigarettes, and it'll hopefully make sense to you. There was a time when people had no idea that smoking was bad for you, which is where we are now with these apps.
And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why it's more effective to require ID when buying cigarettes than it is to shame people for not being perfectly vigilant parents.
BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely. Eg, target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.
Apparently parents are spending more time with their children than ever. Dads especially. Paradoxically, the problem you're describing exists anyway.
Personally, I think some parents are afraid of their children growing to resent them for infringing upon their "freedom" by keeping them away from the dangers that social media and other technologies present.
Either the age that defines an "adult" is going to be raised dramatically, or the age that defines a "kid" is going to be lowered, to determine who is allowed access to information in transit and who needs to be "safeguarded" from it.
I do not buy this "holy knight war" by the EU at all.
It also makes no real sense to me.
Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats either. There has to be a limit to what politicians can do, what corporations can do and what bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. Ublock Origin made me change my mind here - I literally see no reason as to why I would ever want to burden my brain cells with irrelevant content.
This is a bit different from website layout, though. I equally fail to see why the EU should meta-regulate what is permissible in regards to design and what is not. Why would I have to accept any random EU bureaucrat here? If a user interface sucks, I'd rather expect Ublock Origin to kill it off. This could also be community-maintained. No need for the EU to waste taxpayers' money. After the EU wants to sniff for age data and has also declared its holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. Leyen in charge - can't the anti-corruption offices in Germany get rid of such lobbyists?
Imagine the pressure on Instagram and Tiktok to serve better content if they were forced to pick out, say, 100 short videos per person per day. And not just for kids, adults need a break from this addiction machine as well.
they are going to put kids on a drip feed. addiction is still there, just a limited amount per session. intermittent rewards are actually the perfect schedule for an advertising company; you don't want people to be making unmonetizable page views.
In the modern world: any tech proposition that starts with protection of children as a goal can be dismissed out of hand, since it's emotional manipulation masquerading as tech policy. When I hear "protect kids", all I see is a sleazy politician bowing to their respective Security State apparatus.
You know, yeah, you can crack down "addictive design", but then what?
If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much MUCH safer than the alternatives.
Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to go to places that don't bring you wealth, like drinks you drink even when you're not that thirsty, like movie tickets just so it will not be too awkward after all the dialogue options are exhausted? Did these politicians somehow forget that all of this costs money, in this economy that they helped to create?
And that is not to mention the REAL risks, such as drugs (the bad ones), rude or crazy drivers, and unpleasant adults whose only life purpose is to earn enough money to keep going a little bit longer, just to name a few.
..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa/couch, and swipe your life away on TikTok or Instagram for free, safely.
You see the problem here?
I'm really sick and tired of these politicians putting up this act, pretending to "love children", when in reality what they do is put up easy patches to hide the real problem, which is poverty and inequality. That's the real problem.
Makes it an easier sell politically. If you position it as dangerous to kids in particular, your opposition then looks like they're encouraging child harm.
Yeah yeah, virtue signaling, and most EU online services are now gated behind the use of one of the WHATWG cartel web engines (in real life, Google's Blink); namely, EU web sites are broken in favor of web apps.
They have to restore interop with noscript/basic HTML web engines (past/present/and future).
Then, they have to be careful with their file formats; for instance, you never give "carte blanche" to such a disgusting format as PDF, you very carefully define an as-simple-as-possible subset of it (with some internal software for validation).
I must notice that every time, but really every time, EU moves a pinky finger against tech industry, a sizeable chunk of comments here will be like the one above. I wonder, is it about a general sentiment against EU? Or a general sentiment against restricting technology? Or a general sentiment against humans? Or what?
The most on-brand solution for the EU would be to require mobile phone users to upload brain scans in real-time so the state can check for neural activity associated with addiction.
I removed the link, just thought it was relevant to the discussion.
The modern-day cigarette is such a perfect metaphor for social media. A cabal of unfathomably wealthy companies spreading their harmful products across the world; making them as addictive as possible while actively burying the research which proves how harmful they are. I truly hope one day we'll look back on social media and smartphone use the same way we regard smoking.
This is still a recommendation algorithm, just a less enjoyable/addictive one. Any process by which you decide what to show to a user is an algorithm.
Look up images in Google with `eu cigarettes boxes`. Banning is a thin wedge, but I think we need something like these warning labels for social media.
fascism for the greater good?
What are you trying to imply (while hiding behind a rather unsuitable form of irony)? Not that the EU is taking away essential freedom, I hope?
People stuck in 1940 and not able to imagine new words for new things should not be allowed to discuss these topics online.
That rationale never convinced me.
Smoking has definite physiological effects. Molecules bind to receptors or neurons and initiate cascades/responses.
I don't see this with a user interface in a browser at all. If you want to argue along those lines: why are regular ads allowed? They piss me off. Why do I have to see them? They train my brain into an addiction to buying crappy products. So why is there no ban here?
Let's face it - the EU is on a path of "Minority Report" here.
> I think the EU and other jurisdictions should really look beyond just limiting this stuff to kids
Yeah they try to restrict what we can do. We oldschool people call this fascism. See the EU trying to destroy VPN. And this is a meta-strategy we see here - many lobbyists are activated and try to "sync" laws that never made any sense to as many countries as possible. I see where corruption happens. And I don't buy the "we protect kids" fake lie for a moment.
Hippocrates was already linking the mind to the physical brain, and if you've never felt a physical reaction from looking at the fairer sex I feel bad for you son, yet if you got ninety-nine problems at least sex ain't one.
It's just so tedious to see this "information cannot harm anyone" theory in a context where a huge fraction of the people spend their entire day jobs trying to make phishing less effective.
To hold this view you have to think of information as "not real", not like "real" molecules and receptors, the mind as distinct from the body, and then restrict the legal definition of harm to only "real" things.
This is an odd thing to do, because:
- information is real, it exists in the universe.
- the harm of social media is real, as measured by many of the same measures as the harm of smoking
Why not do something about ads? No, that's a good thought, we should do that too.
I think a decent conceptualization here is "psychic damage", as in a video game. These things deal a lot of it.
With some of the legal discovery happening at Facebook, we know that the company did internal research showing that its products can be addictive and detrimental to kids: https://www.wsj.com/tech/personal-tech/facebook-knows-instag...
That's why I make the cigarette comparison. They know it's bad, but it's profitable for people to be addicted to it. I think it's bad for adults for a different reason: I've seen adults in my own life get influenced by things they see online (conspiracy theories, pseudo-science around health and nutrition, political radicalization). And this happens because it's profitable for people to be hooked on these topics with false or misleading information, not because it's true. That's not to say this never happened before recommendation algorithms, but it's a difference in magnitude. I think that's the reason we are seeing such a dramatic rise in political polarization: because it's profitable.
> Yeah they try to restrict what we can do. We oldschool people call this fascism.
Come on, this is an absurd statement. Governments regulate what people can do, yes. It’s part of their role. It’s why I can’t sell tainted meat on the street. It’s a good thing.
Of course there is a line you can cross where the control becomes excessive but “the government sets rules around what people can do, that’s fascism!” is absurd.
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0.
So the user opens the app - what is the first video you show them? How does 'the user decide' from the millions upon millions of videos there are?
If the user can search like in Youtube then how do you rank the results? That's also an algorithm.
It isn't pretty easy to solve at all.
In the case of Instagram: you show the videos from the people you follow on Instagram, and no other short videos at all. Possibly a search box.
If you search on youtube then it can rank any way it wants, just not use e.g. anything from the viewing history. No "related videos" column. That's what YouTube used to be. But YouTube (unlike TikTok) worked well before it had rabbit holes.
For TikTok the situation is worse. Their whole app just doesn't exist unless you have the custom feeds. This would make YouTube be 2010 YouTube, Instagram be 2010 Instagram (great!), but it would effectively be a ban of TikTok's whole functionality (again, great!).
Do it like a library. When a person walks into a library, they're presented with a short curated list of books suggested from the librarian. All visitors to the library see the same books. From there, the visitor can go about their business searching for what they want.
If they don't know what they want, perhaps a good use case for the newfangled LLM-search we have now would be "What's an interesting or popular topic I haven't searched for before?" to which the AI will respond with a list of newly searchable terms.
The first unwatched video from the user's followed/subscribed channels. Chronological, reverse chronological, sorted alphabetically, by the user's channel prioritisation, by likes, by views... whatever the user chooses. And then an end of feed.
For new users? A search bar and a set of (human? AI?) curated seed recommendations that the platform is comfortable with being held liable for.
The internet solved the problem of millions upon millions in its implementation details: you share a URL. You follow people, they share URLs, it grows organically, the same way every website worked pre... Instagram? I'm not sure who moved to the algorithmic feed first.
> what is the first video you show them
Whatever is latest posted across their followings/subscriptions?
I would say, no *personalised* algorithms other than those based on deliberate user choices would solve the problem. So, what user chooses to follow, or the same for everyone in the country.
These are multi-billion dollar companies.
It's okay if they have some hard problems to solve.
I made a new YouTube account recently and my homepage was blank.
https://news.ycombinator.com/item?id=37053817
You know old reddit, Flickr, etc., had ways of presenting content based on different things besides impulsive engagement.
It's very easy.
"So the user opens the app - what is the first video you show them?"
You don't. How about that?
This seems to be consciously dishonest. Show them "most recent" or "most upvoted" or "A to Z." Pretending like this is hard is bizarre. People have always selected sort and filter algorithms, until companies started taking them away.
Of course it's easy: such decisions were taken _before_ the feeds were algorithmically built.
You rely on unambiguous, "physical" properties of the videos.
There is a physical property of all the videos: the time of publication.
There is a physical property of all the channels: did you subscribe to it, or not ?
So, you show, in (reverse) chronological order of publication, the list of videos published by the channels you subscribed to.
Now, of course, a brand new user would have no subscription - you show them a search box.
But then, now, your search algorithm has to weigh the various channels that match - but your algo can be relatively transparent, relatively auditable, and the same for all users (unless given explicit preferences, and of course national laws, etc, etc...)
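The feed this comment describes can be sketched in a few lines. This is a minimal illustration with hypothetical `Video` records and field names, not any platform's actual implementation: the only inputs are the two "physical" properties named above, publication time and subscription status.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    channel: str
    title: str
    published: datetime

def subscription_feed(videos, subscriptions):
    """Non-personalised feed: only videos from channels the user
    subscribed to, newest first. No per-user ranking signals at all."""
    return sorted(
        (v for v in videos if v.channel in subscriptions),
        key=lambda v: v.published,
        reverse=True,
    )
```

The point of the sketch is that the function is the same for every user; two users with the same subscriptions see the same feed, which is exactly what makes it auditable.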
I'm sorry, but, I have a "subscriptions" page in youtube or substack, and they're chronological, and they show me what I want to watch. You keep that.
There is a "home" page in both services that is algorithmically built, and they show me crap that the algo wants me to watch. You get rid of that.
Do this, and I can consider you a "neutral" actor, and accept that you shift the blame to content producer.
Or, keep the algo feed, but don't take money from advertisers when I watch yet another flat-earther video because YOU decided it was trending.
If you want to decide what I watch, and make money from that decision - congrats, you are an editor. You get the earnings, and the responsibility.
Please don't tell me, with a straight face, that the people who build the algo don't "decide" what I watch. If they want to tweak the algo to downgrade the flamewars and outrage and conspiracy theories and violence and abuse, they can. They do not want to, for business reasons. [1]
That's fair, up to a point - we need publications with editors that agree on having "edgy" content. I'm not advocating for blanket censorship.
I did not like social networks preventing me from _sharing_ articles about Biden's son's laptop (this was actually beyond the law, but somehow they managed to find the resources and programmers to implement _that_, because, at the time, the execs were cozying up to a different administration).
I'm advocating for "accepting your responsibility as an editor".
[1] https://en.wikipedia.org/wiki/Frances_Haugen#October_5,_2021...
> If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present
Hacker News is a site that presents data by algorithm. Under your definition, Hacker News goes away, too.
A more accurate framing would be that they’re going after personalized recommendation algorithms. It’s not obvious that offering a recommendation algorithm would mean that the site is no longer an impartial common carrier.
The algorithm is not personalized. It's the same for every user. No issue there.
Goes away, or is liable for the content promoted to the frontpage under the OP's take?
But I'd agree, that it's personalisation rather than just curation that's the issue.
I think even requiring sites to have a "bring your own algo" version (and where ads are targeted to the algorithm, rather than the person) would cure a lot of ills.
As is, even with something like Spotify where you _are_ paying there's no easy way to "reset" your profile to neutral recommendations
> Hacker News goes away, too.
so be it.
This is one of those things that don’t translate to legal reality very well, as then you have to define “what is an algorithm”.
Is adding advertisements an algorithm?
Is including likes an algorithm?
Is automatically starting the next video after a previous one has finished an algorithm?
Is infinite scroll an algorithm?
Etc
This kind of complex legislation already exists in many areas of the law: revenue collection being the most obvious one. We could choose to treat "societal harm" the way we treat "tax collection".
I'm not saying there aren't infinite edge cases and second-order effects - but we tolerate those already for many things. I'm not pretending this is simple or even desirable - I'm merely stating it's possible if we want to do it.
My biggest fear is that (like the UK Online safety act) this acts to favour the huge corporations because they are the only ones that can afford a team of lawyers. Any legislation should aim to carve out exceptions to avoid indirectly helping monopolies.
This is some kind of a meme where people believe things can’t be defined in legal terms and therefore can’t be regulated. These people are usually not lawyers.
Does anyone know where it’s coming from? I can certainly believe that incompetent jurisdictions have a ton of issues with people misapplying the law and using loopholes.
"By algorithm" can be easily defined.
An easy benchmark to set up would be: any feed that displays the data in a way other than the following is considered an editorial choice, and thus the platform is liable as a publisher:
1. In a chronological order, and only filtered based on user selected options.
2. In any other order explicitly selected by the user.
An exception can be made to allow filtering out content that violates the platform's terms and conditions.
Alternatively there can be no exception, effectively making these platforms unworkable. This is also a choice. We do not need these platforms, including this one.
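The benchmark proposed above is simple enough to state as a predicate. This is a hypothetical sketch (the set of "neutral" orderings and all names are illustrative, not drawn from any actual legislation): a feed escapes the "publisher" label only when its ordering is a standard one that the user explicitly selected.

```python
# Orderings a platform could offer as explicit user controls. Anything
# outside this set (engagement, "for you", personalised ranking) would
# count as an editorial choice under the proposed benchmark.
NEUTRAL_ORDERINGS = {
    "chronological",
    "reverse_chronological",
    "alphabetical",
    "most_liked",
    "most_viewed",
}

def is_editorial(ordering: str, user_selected: bool) -> bool:
    """The platform is treated as a publisher unless the ordering is a
    standard one that the user explicitly selected."""
    return not (user_selected and ordering in NEUTRAL_ORDERINGS)
```

Note that under this test a default feed is editorial even if it happens to be reverse-chronological, because the user never selected it; the choice itself, not just the ordering, is what matters.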
"Algorithm" is a method of selecting the content to display. You're listing presentation types, not selection types. Presentation has nothing to do with supervised selection. Selecting the next video in the infinite scroll would be the algorithm, not the infinite scrolling mechanism itself.
Everything other than sorting the list of entities by a standard measurement unit (time, length, mass, temperature, amount) needs to be covered by this law.
The moment you add other entities to the list (e.g. ads inbetween posts), then it's also subject to the same restrictions.
Ok so then the "algorithm" must be made available to authorities (or even better, the public at large) and be approved or rejected based on a court or a law. Obviously an algorithm based on "engagement" or "narrative" should be rejected with prejudice every time.
This doesn't differ much from the legal reality that I've seen. Terms need to be defined, yes. It will require work to do so. And that work should be done even if it's a bother.
I don't see a single difficult example here. The answer is "NO." It's strange that you couldn't even find one.
I mean "Is including likes an algorithm?" You might as well ask if having a dog in the video is an algorithm. Any question about "likes" would be if you're manipulating the video selection based on likes, or is the user given a control to manipulate the video selection based on likes. If it's you it's an algorithm. If it's the user, it's a control. If you lie about the likes, then it's an algorithm. If you're transparent about the likes, then it is a control.
The other ones aren't even worth discussing. You might as well ask if having a blue logo is an algorithm, or if Comic Sans is an algorithm. "It's all so complicated!"
-----
edit: that being said, the EU does not care about this issue at all, and has had plenty of mandate and plenty of time to have done something about it if it did. They are also going to say "it's all so complicated." Because their problem is the unpopularity of center-left neolib governments that are just barely holding on with extreme minority support through bureaucratic means because they wrote the regulations. They want to keep what came for British Labour during the recent council elections from coming for them.
So I guarantee that content will somehow become an "algorithm." The goal is to keep people who don't like them from speaking to each other.
The conversation has iterated a couple times and one point that people (on this site at least) are stuck on is “well however you rank things—latest, most popular—you’ll need to use some kind of algorithm, maybe quicksort.” This isn’t what the general public or politicians mean when they say “an algorithm” but it does make something of a point, what exactly the general public and politicians mean when they say that… it’s a bit ambiguous.
I think the EU has fully digested this point, and is focusing on the “addictive design” phrase instead, for good reason. It makes it obvious that the problem is a bit fuzzy and related to the behaviors induced, not some cut-and-dry algorithmic thing.
There's another angle to this besides the algorithm. When it comes to kids today, there seems to be peer pressure and the need to maintain social media presence, to be cool online, among your peers and so on. Beyond that, some kids have their lives devastated by others secretly (or not) recording and publicly sharing their vulnerable moments in life. That can happen in a night and profoundly damage someone.
Back in my day, they used to be called social networks
"this" - you mean, engagement optimization? i think it would be different content. i don't know how much liability matters, people spend all day watching netflix too, and it is "liable."
ironically, i'm only reading this kind of low brow take because people upvote it, not because it makes any sense.
And when does the user decide? Must a platform do nothing to stymie spam, or even illegal content, to qualify as impartial?
I suppose the answer could be that only platforms that do indeed allow spam or worse are impartial, but that is a tricky position to be in.
The mechanism would be that if the user has chosen to follow an account, then posts from that account fall under common carrier. If the platform chooses to show you other posts, then it's under their responsibility.
This is a bit of a legal-systems difference. Under a French civil-law system, you would write laws to regulate the harms away. Under English common law, liability court cases about the harm would lead to precedents and then to case law derived from them. Though I'm not an expert on this.
You'll need to solve the dark pattern where a new account opens on a blank page with a box saying "Would you like us to suggest what you watch here?"
Why would anyone go to a new platform if they didn't know anyone to follow there? I don't see a problem there. I download TikTok and search for SexyDancingDinosaur I heard was on there and press follow.
How does this specific horrible take rank so highly on HN whenever something adjacent to big tech gets posted. "Impartial common carrier" is not even an extant legal concept.
It's been argued to death already, I just have to express shock that I'm still seeing this non-starter constantly here.
It’s so elegant that there’s zero chance the EU will do it since this is all performative for them
Alternative suggestion: Force them to open up the service and allow third party clients. Take Art. 20 GDPR "Right to data portability" and extend it to public content.
I don’t think this is only a kids issue.
A lot of adults need this too. The addictive apps are very well designed, while most blockers are either too easy to ignore or too annoying to keep using.
I built a small iOS blocker because I had the same problem. Making it strict enough to actually work without making people hate it is the main challenge.
On the radio I heard a reporter talking about things China does during school exams. Apparently all schools have exams at the same time and during that period, social media shuts down at night. I forget the exact hours (10pm - 6am maybe). I'm starting to think that would be a great policy in general for everybody.
I think they also said AI companies go offline during exam hours, but I may have got that wrong.
Absolutely wild that we’re seeing proposals to shut down parts of the internet and regulate when people can talk to each other on social platforms as a real suggestion on HN.
I feel like we’ve completely lost the plot when we’re starting to invite government partial Internet shutdowns as a good idea. This is a totalitarian government play.
I can understand regulating dark/abusive patterns, but at the end of the day I should be allowed to doomscroll at night if I want to
Toast notifications were the big mistake. Also badges. In my perfect world, the only thing to retain the ability to keep messages alerting the user that someone tried to contact them would be voicemail, subject to the same spam laws as everything else.
As an adult, who despises all those apps, I don't want to grant government the power to make that decision for me.
You might have the self-awareness and impulse control to stop yourself from getting addicted to these apps, but the majority of the world's population does not.
These giant companies pour millions upon millions of dollars into engineering their services to be as "engaging" (read: addictive) as possible with the specific goal of making users spend more time on them.
Against that, the average person has no chance. The power balance is hugely uneven.
A responsible government which actually cares for its people has a duty to protect them from abuse like that.
As an adult, do you also believe seat belt laws are a bad thing?
If we afford the same protections to adults, we don't need age verification either. Just a thought.
Tell me: why are these algorithms suddenly okay when the victim turns 18?
They are bad for everyone and if you’re willing to regulate them, make them illegal to be used on anyone.
Because, in general, we see adults making bad choices as a price worth paying in a free society, but we recognize that children lack the maturity and judgment to make those choices for themselves.
Most adults also lack the maturity and judgement, but allowing adults to make bad decisions is usually less dangerous than giving someone else the power to decide which decisions are too bad to permit.
Just from this article it's not clear if the methods like endless scrolling or "watch next video" are going to be regulated based on user age or not.
It just says that the platforms that use such methods often target kids.
Same as for the cigarette: it's a lot easier to regulate stuff for kids, because we as a society tend to agree that they need to be protected. Much harder to do with adults, because it is much less of a consensus.
I think especially restricting endless scrolling is a good thing overall to reduce the addictiveness of social media and its harmful effects.
HN having pages instead of a feed or endless list is one of the things I really like about it.
For sure.
The other thing I really love about HN is that titles are all supposed to be boring and to the point. The guidelines[1] for titles are excellent and I wish more of the web and honestly legacy media too would behave that way. Things that are of no interest to me are not trying to waste my time and attention.
[1] https://news.ycombinator.com/newsguidelines.html
> I think especially restricting endless scrolling
The actual point is that they are designed to be addictive. "endless scrolling" is just an implementation detail. If you "ban endless scrolling", they'll still be using every other trick to make it addictive.
Thanks, I'm an adult and I need it too
FWIW, social media use is mediated by ∆FosB expression, so the less you use social media, the less you want to use social media. Timeline of ~3 months.
Had the exact same thought
But they are so profitable, and we need them to track people around and create a police state efficiently. Ah let's keep them but just fine them as well for the show.
What else will fund the AI boom but computationally expensive video AI?
I don't agree with this. Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing. At some point, books, movies, comics, etc, etc might have been considered addictive.
Social networks in general should be banned for underage people, that's the thing. And the social network itself should be liable for verifying the age of its users, like a nightclub is liable for the people who enter it. No bullshit operating system age verification, which is, trust me, totally intended to protect kids and not to spy on you.
> Addictive, unless we're talking about a chemical substance or something like that, is a subjective thing.
What makes you say that? It's well known that the addictive patterns in these apps trigger dopamine the same way drugs do. In a sense, dopamine is the "chemical substance" central to the addiction. Heroin and algorithms are just different ways to get it.
https://med.stanford.edu/news/insights/2021/10/addictive-pot...
Everything you do “triggers dopamine”. Reading HN triggers dopamine. Eating breakfast triggers dopamine. Dopamine is also important for movement and many other things.
This is a lame reduction of brain chemistry that has been used to push agendas. Dopamine is not equivalent to addiction.
It's well known, but I'm not convinced it's true. Dopamine levels are measurable by blood test, and some drug abuse studies perform that measurement. Why does the literature on social media and dopamine exclusively talk in vague and general terms, rather than pointing to specific studies where researchers measured dopamine before and after 30 minutes of TikTok scrolling?
Addictiveness is measured by ∆FosB gene expression. The 'addictiveness' of a substance or activity is qualified by how much ∆FosB is expressed. It's decidedly not just a completely subjective thing. Books, movies, comics, etc. can all still be measured on this scale. Everything is addictive in some capacity, generally.
The reason why it is done this way is that “social media” is much harder to delineate and also not what is generally considered harmful.
Addiction at least is quite straightforward to differentiate from otherwise engaging things by whether it causes significant harmful effects. E.g. per Wikipedia "Addiction is a neuropsychological disorder characterized by a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences."
Addictive would be then something that (for a substantial portion of population) has a tendency to cause addiction.
>At some point, books, movies, comics, etc, etc might have been considered addictive
The difference compared to a book is that a book is not personalized for each individual reader, so the example is not a good one IMHO.
At what point should the responsibility fall on the parent to protect their children from harm?
Don’t get me wrong, if I had my way TikTok wouldn’t exist for anyone, adults included. It’s just so strange to me that so many parents hand their 7 year olds unrestricted access to TikTok and expect someone else to keep their kid safe.
It's not so easy: they need phones and social media to communicate with their friends. They also need to fit in and find an identity. The algorithms, which are basically all engagement engines, are harmful for humanity as a whole. They are marketed as recommendation engines, but it's 100% about engagement, and that is why the content you see mostly generates dopamine, either by being fun or by provoking rage. It's built to serve one purpose: to keep people using the platform as much as possible. Not because the platform is good, but because it serves content that maximizes engagement.
I read a post about someone saying his wife worked for a snack company. They used MRI scans to see how much salt (or sugar) they should have in the snacks to maximize the response in the brain. Sounds disturbing, right?
Well engagement engines are the same thing. It's artificial intelligence optimized to get people to react and stay addicted. Basically AI doing harm. It's not what is best for the individual in terms of health. It's what generates most money to the owner of the platform.
It should not be allowed to build a business around something that exploits human brains. Basically biohacking our brains for profit.
I am from Eastern Europe and I've been living for many years in Western Europe. Where I come from, kids get their first phones when they start school at 6 (there's a pre-school year), simply because every other kid has one. I keep coming back in my mind to two examples from my birth country: a friend's kid carrying an 8-inch smartphone in his hand everywhere, because the phone was as big as half his thigh and he would otherwise have to carry a bag for it. The second one was on a visit to the zoo: I was on a bench next to a family with two young children in a stroller. And both children, who couldn't have been older than 4 or 5, were scrolling TikTok, which was showing them children's content!
In contrast, in Western Europe, my son is now in the sixth grade, more than half his class doesn’t have phones, phones are absolutely forbidden on school grounds and at school activities, and they are now doing a class trip where they were told that there’s a pay phone at the hotel, in case they want to call the parents - our son promptly informed us that he’ll rather buy a pack of Pokémon cards than call us and 3 days is not so much anyway.
And it is not only at school: he travels for tournaments with his team every other week, and mobile phones are absolutely forbidden on the team bus. The children read, play games (including chess on a magnetic board), sing and swap stories for hours at a time.
Replace TikTok with cigarettes, and it'll hopefully make sense to you. There was a time when people had no idea that smoking was bad for you, which is where we are now with these apps.
And since they're addictive, kids will find a way to get them even if their parents don't allow it. That's why it's most effective to require ID when you're buying cigarettes than it is to shame people for not being perfectly vigilant parents.
BTW, I'm not saying age verification is the solution here. IMO, we should instead ban addictive social media completely. Eg, target specific design patterns/features, require companies to disclose how their algorithms work to regulators, etc.
Apparently parents are spending more time with their children than ever. Dads especially. Paradoxically, the problem you're describing persists all the same.
Personally, I think some parents are afraid of their children growing to resent them for infringing upon their "freedom" by keeping them away from the dangers that social media and other technologies present.
> the responsibility of a parent to protect their children from harm
I agree with you, but only in theory. Because that's where we are now and it does not seem to work that well.
Maybe through more education? But then again, I think reducing addictive tactics like endless scrolling could be part of a two-pronged attack.
With alcohol we have education on what happens, but we also have laws that regulate it.
When it works.
Imagine if Big Tobacco had something like Section 230
Either what defines an "adult" is going to be raised exponentially or what defines a "kid" is going to be lowered to determine who is allowed access to information in transit and who needs to be "safeguarded" from it.
I do not buy this "holy knight war" by the EU at all.
It also makes no real sense to me.
Nothing against US mega-corporations paying fines, mind you, but I equally do not trust the EU bureaucrats either. There has to be a limit to what politicians can do, what corporations can do and what bureaucrats can do, while retaining a democratic base system at all times. If you go against addictive design, then why not against ALL ads? I don't want to see any ads. Ublock Origin made me change my mind here - I literally see no reason as to why I would ever want to burden my brain cells with irrelevant content.
This is a bit different to website layout though. I equally fail to see why the EU should meta-regulate what is permissible in regards to design and what is not. Why would I have to accept any random EU bureaucrat here? If a user interface sucks, I'd rather expect Ublock Origin to kill it off. This could also be community maintained. No need for the EU to waste taxpayers' money. After the EU wants to sniff for age data and has also declared its holy war against VPNs, I do not trust anything coming from Brussels. Even less so with Ms. von der Leyen in charge - can't the anti-corruption offices in Germany get rid of such lobbyists?
Imagine the pressure on Instagram and Tiktok to serve better content if they were forced to pick out, say, 100 short videos per person per day. And not just for kids, adults need a break from this addiction machine as well.
They are going to put kids on a drip feed. The addiction is still there, just a limited amount per session. Intermittent rewards are actually the perfect schedule for an advertising company; you don't want people making unmonetizable page views.
In the modern world: any tech proposition that starts with protection of children as a goal can be dismissed out of hand, since it's emotional manipulation masquerading as tech policy. When I hear "protect kids", all I see is a sleazy politician bowing to their respective Security State apparatus.
Never understood the kids focus; it looks to me like the 50+ crowd is by far the most addicted.
Which also makes it a matter of parents and grandparents setting bad examples.
Isn’t it more of “emotional” design than “addictive” design?
You know, yeah, you can crack down on "addictive design", but then what?
If you don't provide a better alternative, the "kids" (and please, stop using "kids" as an excuse, because everybody can see through it now) will just stick to these platforms because, believe it or not, these platforms are much MUCH safer than the alternatives.
How about we look at the real problem here: 24% of EU children were at risk of poverty or social exclusion in 2024, see https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d.... That's not just a statistic about children, it's also about their parents.
Do you know that if you go outside, there's this huge risk of having to PAY for stuff you don't actually need to live? Like transportation to places that don't bring you wealth, like drinks you buy even when you're not that thirsty, like movie tickets just so it won't be too awkward after all the dialogue options are exhausted? Did these politicians somehow forget that all of these cost money, in this economy that they helped to create?
And that is not to mention the REAL risks, such as drugs (the bad ones), rude or crazy drivers, and unpleasant adults whose only life purpose is to earn enough money to keep themselves going a little bit longer, just to name a few.
..... ORRRR, you can just stay in your comfortable home, sit on your soft and warm sofa/couch, and swipe your life away on TikTok or Instagram, for free, safely.
You see the problem here?
I'm really sick and tired of these politicians putting on this act of pretending to "love children", when in reality what they do is put up easy patches to hide the real problem: poverty and inequality.
Why is it always okay to harm adults?
Like adults spending their hours scrolling through an infinite feed is somehow beneficial to society?
Why should only kids be protected from addiction?
I have a hard time understanding this.
We have plenty of adults with terrible social media addiction that is destroying their lives, and nothing being done about it.
This is the best question of all. Why are we allowing this?
Makes it an easier sell politically. If you position it as dangerous to kids in particular, your opposition then looks like they're encouraging child harm.
Well and if you tell adults that they need to be regulated, they get pissed very, very quickly.
Yeah yeah, virtue signaling. Meanwhile, most EU online services are now gated behind one of the WHATWG cartel web engines (in reality, Google Blink); namely, EU web sites are broken in ways that favor web apps.
They have to restore interop with noscript/basic HTML web engines (past, present, and future).
Then, they have to be careful with their file formats. For instance, you never give "carte blanche" to such a disgusting format as PDF; you very carefully define an as-simple-as-possible subset of it (with some internal software for validation).
Is ending endless scrolling really virtue signaling? Don't you think it will have a measurable effect?
Yeah yeah, whataboutism.
I'm very happy they're taking a stance. I've seen too many messed up kids and there's no doubt the addictive design plays a big role in the problem.
I can't help but notice that every time, but really every time, the EU moves a pinky finger against the tech industry, a sizeable chunk of the comments here are like the one above. I wonder, is it a general sentiment against the EU? Or a general sentiment against restricting technology? Or a general sentiment against humans? Or what?
The most on-brand solution for the EU would be to require mobile phone users to upload brain scans in real-time so the state can check for neural activity associated with addiction.
The most on brand solution for a kneejerk reactionary American would be to satirize the EU for its consumer protections.