83 comments

  • everdrive 3 hours ago

    Social media itself is a grand experiment. What happens if you start connecting people from disparate communities, and then prioritize for outrage and emotionalism? In years prior, you would be heavily shaped by the people you lived near. TV and the internet broke this down somewhat, but social media really blew the doors off. Now almost no one seems to be able to explain all the woes we're facing today: extreme ideas, populism, the destruction of institutions. All of this because people are addicted to novelty and outrage, and because companies need their stock price to go up.

    • slg 2 hours ago

      >and then prioritize for outrage and emotionalism

      This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits. Just give us a chronological feed of the people/topics we proactively choose to follow and much of this harm would go away. Social media and the world were better places before algorithmic feeds took over everything.
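
      (To make the contrast concrete: a minimal sketch in Python, with a hypothetical Post record and a made-up engagement score standing in for whatever ranking a real platform actually uses. A chronological feed is just a filter and a sort:)

          from dataclasses import dataclass

          @dataclass
          class Post:           # hypothetical record, for illustration only
              author: str
              timestamp: float  # Unix seconds
              likes: int
              comments: int

          def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
              # Only accounts the user proactively chose to follow, newest first.
              return sorted((p for p in posts if p.author in following),
                            key=lambda p: p.timestamp, reverse=True)

          def engagement_feed(posts: list[Post]) -> list[Post]:
              # Made-up scoring, not any platform's real algorithm: rank
              # everything, followed or not, by predicted engagement.
              return sorted(posts, key=lambda p: p.likes + 2 * p.comments,
                            reverse=True)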

      • skybrian 5 minutes ago

        It sure seems inherent to me. You get outrage and emotionalism even in small Internet forums. Moderation is necessary to damp it down.

      • dathinab 2 hours ago

        > This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits.

        Going beyond social media, it's IMHO the side effect of an initially innocent-looking but dangerous and toxic monetization model, one we find today not just in social media but even more so in news, apps, and most digital markets.

        • trevwilson 2 hours ago

          Yeah, unfortunately this seems to be a common, if not inevitable, result of any product where "attention" or "engagement" is directly correlated with profitability.

        • slg an hour ago

          And if we want to go beyond that, we really just have to blame capitalism. What happens when you build a society around the adversarial collection of money? You get a society that by and large prioritizes making money above all else, including ethics and morals.

          • dathinab an hour ago

            Yes, but it's more complicated.

            Like, if you look at the original reasoning for why capitalism is a good match for democracy, you find arguments like voting with money etc., _alongside what must not be tolerated in capitalism_ or it will break. And that includes stuff like:

            - monopolies (or, more generally, anything having too much market power and abusing it; it doesn't need to be an actual monopoly)

            - unfair market practices which break fair competition

            - situations which prevent actual user choice

            - too much separation between the wealth of the poorest and the richest in a country

            - giving too many ways for money to influence politics

            - using money to bar people from a fair trial / from enforcing their rights

            - also, I would personally add lack of transparency, but I think that only really started to become a systemic issue with globalization and the digital age.

            This also implies that for markets which have natural monopolies, strict regulation and consumer protection are essential.

            Now, the points above are to some degree a checklist of what has defined US economics, especially in the post-Amazon age. (I say post-Amazon age because the founding story of Amazon was a milestone: it is basically the idea of "let's systematically destroy any fair competition and use externally sourced money (i.e. subsidization) to forcefully create a quasi-monopoly". After that succeeded, it became somewhat of the go-to approach for a lot of "speculative investment" funding.)

            Anyway, to come back to the original point:

            What we have in the US has little to do with the idea of capitalism that led to its adoption in the West.

            It's more like someone took it and is twisting it into the most disturbing dystopian form possible; they just aren't fully done yet.

            • slg 12 minutes ago

              >- giving too many ways for money to influence politics

              I think what we're learning is that mass (social) media means this simply isn't preventable in a world with free speech. Even if the US had stricter campaign finance laws in line with other western democracies, there would still need to be some mechanism so that one rich guy (or even a collection of colluding rich guys) can't buy a huge megaphone like Twitter or CBS.

              As long as there is no upper limit on wealth accumulation, there is no upper limit on political influence in a capitalistic democracy with free speech. Every other flaw you list is effectively downstream of that because the government is already susceptible to being compromised by wealth.

          • Zigurd an hour ago

            It's the combination of software that is infinitely malleable, and capitalism. Successful entrepreneurs in software want liquidity. So no matter how benevolent they start out being, they eventually lose control and the software gets turned into an exploitative adversary to satisfy investor owners.

            This is fine if you can refuse the deal. Lots of software and the companies selling it have died that way. But if you've made a product addictive or necessary for everyday survival, you have the customer by the short hairs.

            The technology underlying Bluesky is deliberately designed so that it's hard to keep a customer captive. It will be interesting to see if that helps.

      • Aurornis 36 minutes ago

        > Social media and the world were better places before algorithmic feeds took over everything

        Sometimes I feel like I'm the only one who remembers how toxic places like Usenet, IRC, and internet forums were before Facebook. Either that, or people only remember the past of the internet through rose-colored glasses.

        Complain about algorithmic feeds all you want, but internet toxicity was rampant long before modern social media platforms came along. Some of the crazy conspiracy theories and hate-filled vitriol that filled Usenet groups back in the day make the modern Facebook news feed seem tame by comparison.

        • linguae 4 minutes ago

          I agree that there’s always been toxicity on the Internet, but I also feel it’s harder to avoid toxicity today since the cost of giving up algorithmic social media is greater than the cost of giving up Usenet, chat rooms, and forums.

          In particular, I feel it’s much harder to disengage with Facebook than it is to disengage with other forms of social media. Most of my friends and acquaintances are on Facebook. I have thought about leaving Facebook due to the toxic recommendations from its feed, but it will be much harder for me to keep up with life events from my friends and acquaintances, and it would also be harder for me to share my own life events.

          With that said, the degradation of Facebook’s feed has encouraged me to think of a long-term solution: replacing Facebook with newsletters sent occasionally with life updates. I could use Flickr for sharing photos. If my friends like my newsletters, I could try to convince them to set up similar newsletters, especially if I made software that made setting up such newsletters easy.

          No ads, no algorithmic feeds, just HTML-based email.
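
          (A minimal sketch of that idea with Python's standard library; the addresses, SMTP host, and credentials below are placeholders, and a real setup would read them from config rather than hard-coding them:)

              import smtplib
              from email.message import EmailMessage

              msg = EmailMessage()
              msg["Subject"] = "Life updates, winter edition"
              msg["From"] = "me@example.com"         # placeholder
              msg["To"] = "friend@example.com"       # placeholder
              msg.set_content("Plain-text fallback for old mail clients.")
              msg.add_alternative(
                  "<h1>Hello!</h1><p>News, photos, and links below...</p>",
                  subtype="html",
              )

              # Placeholder SMTP host and credentials.
              with smtplib.SMTP("smtp.example.com", 587) as server:
                  server.starttls()
                  server.login("me@example.com", "app-password")
                  server.send_message(msg)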

      • dylan604 an hour ago

        bigMedia has been doing this longer than the socials. The socials just took the knob and turned it to 11.

    • rr808 3 hours ago

      It's interesting that TV is regulated. You can't put certain content on there, and I'm sure the governments can ultimately control things. Today's eyeballs are controlled by Meta and TikTok, and I don't really trust them at all - they have too much unchecked power.

      • dathinab 2 hours ago

        > interesting that TV is regulated

        So-so. It's mostly that freely accessible channels need their content to stay within a certain ~PG/age-protection range (and in many countries that also changes depending on the time of day; not sure about the US).

        Beyond that, the constitution disallows any further regulation of actual content.

        Though that doesn't mean the government can't apply subtle pressure indirectly.

        Is that legal? No.

        Has it been done for years anyway? Yes.

        But mostly subtly, not by force - i.e., you give "suggestions", not required changes.

        Except in recent years it has become a lot less subtle and much more forced: not just giving non-binding "suggestions", but also harassing media outlets in other, seemingly unrelated ways if they don't follow the "suggestions".

        PS: Seriously, it often looks like the US doesn't really understand what free speech is about (some of the more important points being freedom of journalism, of teaching, and of showing your opinions through demonstrations and the like), why many historians find the US approach good but suboptimal, and why, e.g., the approach to free speech was revisited when drafting the West German constitution instead of just more or less copying the US constitution (the US, but also France and the UK, had some say in drafting it; it was originally meant to be temporary until reunification, but in the end it was mostly kept verbatim during reunification because it had worked out quite well).

      • Aurornis 2 hours ago

        > I'm sure the governments can ultimately control things

        In the US there is free speech protecting the ability of people to say what they want.

        Public TV has limitations on broadcast of certain material like pornography, obviously, but the government can’t come in and “control” the opinions of journalists and newscasters.

        The current US admin has tried to put pressure on broadcasters it disagrees with and it’s definitely not a good thing.

        You really do not want to encourage governments to “control” what topics cannot be discussed or what speech is regulated. Sooner or later the government will use that against someone you agree with for their own power.

    • decipherer 3 hours ago

      We have exited the age of information, and entered the age of irritation.

    • cal_dent 2 hours ago

      Throw inherent mimetic desire into the mix, and where we are as a society makes sense. There's a need for more that frankly can't be satisfied, and it's hard to see how we turn back from that without a structural rejig.

    • gjsman-1000 3 hours ago

      > the destruction of institutions

      More like the exposure of institutions. It's not like they were more noble previously; their failings were just less widely understood. How much of America knew about Tuskegee before the internet? Or the time National Geographic told us all about the Archaeoraptor, ignoring prior warnings?

      The above view is also wildly myopic. You thought modern society had overcome populism, extreme ideas, and social revolution, all of which have been very popular historically? Human nature does not change.

      Another thing that doesn't change? There are always, as evidenced by your own comment, people saying the system wasn't responsible, that it's external forces harming the system. The system is immaculate; the proletariat are stupid. The monarchy didn't cause the revolution, ignorant ideologues did. In any other context, that's called black and white thinking.

  • Grimblewald 4 hours ago

    From history we know that research left unchecked and unrestricted can lead to some really dark and horrible things. Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews. They're free to experiment as they like on the entire population.

    I never understood why this doesn't alarm more people on a deep level.

    Heck, you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?

    • terminalshort 4 hours ago

      What counts as research? If I make UI changes, I guess it's ok to roll it out to everyone, because that's not an experiment, but if I roll it out to 1%, then that's research? If I own two stores and decide to redecorate one and see if sales increase vs the other store, do I need government approval?
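
      (For what it's worth, the mechanics are the same either way; a minimal sketch of deterministic percentage bucketing, with hypothetical names, since real systems differ only in detail:)

          import hashlib

          def in_rollout(user_id: str, feature: str, percent: float) -> bool:
              # Hash user+feature so assignment is stable across sessions.
              digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
              bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
              return bucket < percent / 100

          # The same call expresses a full launch (100) or a "1% experiment" (1).
          show_new_ui = in_rollout("user-42", "new-feed-ui", percent=1)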

      Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.

      • CodingJeebus 3 hours ago

        > Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals.

        One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.

        0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...

        • terminalshort 3 hours ago

          Isn't this just what every media company has done since the beginning of time? You think the news companies don't select their stories based on the same concept? And I'm pretty sure you would get approval to do something similar to animals given that you can get approval to actually feed them drugs and see how that affects their behavior.

          • anonymars 3 hours ago

            Can you provide evidence that [non-social] media companies have performed research specifically to see if they can make people sadder, similar to what was described above?

            • terminalshort 3 hours ago

              Turn on cable news for a minute and it's quite obvious that it is designed to make you angry. What difference does it make if they performed research or not?

      • Aurornis 30 minutes ago

        > What counts as research? If I make UI changes, I guess it's ok to roll it out to everyone, because that's not an experiment, but if I roll it out to 1%, then that's research?

        I think this is a good example of how disconnected and abstract the conversations about social media have become. There's a common theme in these HN threads where everything social media companies do is talked about like some evil foreign concept, but if any of us were to do basic A/B testing on a website then that's understandable.

        Likewise, it's ironic to see calls for heavy regulations on social media sites or restrictions on freedom of speech, given that Hacker News fits the definition of a social media site with an algorithmic feed. There's a deep otherness ascribed to what's called social media versus what gets a pass.

        It gets really weird in the threads demanding ID verification for social media websites. I occasionally jump into those threads and ask those people if they'd be willing to submit to ID verification to use Hacker News and it turns into mental gymnastics to claim that Hacker News (and any other social platforms they use like Discord or IRC) would be exempt under their ideal laws. Only platforms other people use would be impacted by all of these restrictions and regulations.

      • advael 3 hours ago

        I think by now roughly half of us grew up in a world where global reach has been simply taken for granted. I don't think it's particularly onerous to say that there should be some oversight on what a business can and can't do in the context where that business is relying on public infrastructure and can affect the whole-ass world, personally

        • terminalshort 2 hours ago

          There is oversight. Just not oversight of their UI design and algorithm, which is what people are calling for here. Regulation of the feed algorithm would be a massive 1A violation.

          Not sure what public infrastructure has to do with it. Access to public infrastructure doesn't confer the right to regulate anything beyond how the public infrastructure is used. And in the case of Meta, the internet infrastructure they rely on is overwhelmingly private anyway.

          • mikem170 an hour ago

            If algorithm output is protected by the 1st amendment then perhaps Section 230 [0] protections should no longer apply, and they should be liable for what they and their algorithms choose to show people.

            [0] https://en.wikipedia.org/wiki/Section_230

            • advael 13 minutes ago

              That seems a hell of a lot better than repealing Section 230 altogether. I also agree with the rest of this argument: either the editorial choices made by an algorithm are those of a neutral platform, or they're protected speech. They certainly aren't just whatever's convenient for a tech company in any given moment.

      • shimman 4 hours ago

        Are you being serious right now or just engaging in "asking questions" to suppress others' thoughts? Why are these types of comments so common on this site? No, obviously we aren't in fact talking about making basic code changes, but if changes that clearly show users getting more depressed or alienated are being made consistently, maybe that should be questioned more and finally regulated.

        Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.

        It's all completely arbitrary. There's no reason why social media companies can't be legally compelled to divest from all user PII and forced to go to regulated third-party companies for such information. Or why we can't force social media companies to allow export of data, or to follow consistent standards so that competitors can easily enter the field and users can easily follow them.

        You can go for the throat and say that social media companies can't own an advertising platform either.

        Before you go all "oh no, the government should help the business magnates more, not the users," I suggest you study how monopolies operated in the 19th century (they look no different from the corporate structure of any big tech company) and see how the government finally regulated those bloodsuckers back then.

        • terminalshort 3 hours ago

          > Are you being serious right now or just engaging in "asking questions" to suppress others' thoughts?

          I must be really good at asking questions if they have that kind of power. So here's another. How would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you make it a bureaucratic pain in the ass to do it.

          And, no, I would much rather have the companies that I explicitly created an account with, and interact with, be the ones holding my data rather than some shady 3rd parties.

        • BeetleB 2 hours ago

          > Are you being serious right now or just engaging in "asking questions" to suppress others' thoughts?

          I don't know why people are being overly reactive to the comment.

          Research means different things to different people. For me, research means "published in academic journals". He is merely trying to get everyone on the same page before a conversation ensues.

        • cortesoft 3 hours ago

          I don’t think it is fair to criticize the person you are responding to for asking the question they did.

          These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.

          If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.

          If we don’t ask these questions, we can never move past hand wringing.

      • fergal 2 hours ago

        Nice example there to trivialize and confuse the issue, but yeah: if your hypothetical store redecorating has a public health impact on a large scale, then you should need approval.

    • BeetleB 2 hours ago

      > Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews. They're free to experiment as they like on the entire population.

      If they are going to publish in academic journals, they will have to answer to those bodies. Whether those bodies have any teeth is a whole other matter.

    • vovavili 3 hours ago

      >Right now I think it's a problem that social media companies can do research without answering to the same regulatory bodies that regular academics / researchers would. For example, they don't have to answer to independent ethics committees / reviews.

      These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who investigate research way past the point of diminishing returns, because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.

  • bikenaga 7 hours ago

    Original article: "Industry Influence in High-Profile Social Media Research" - https://arxiv.org/abs/2601.11507

    Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research. "

  • cheriot 4 hours ago

    We need an update of Thank You for Smoking

  • dzink 2 hours ago

    How do you do objective research without a data pipeline? Social media companies can use user privacy as an excuse to not share feeds that influence users. The first step to fixing the wrongs is transparency, but there are no incentives for big tech to enable that.

  • fnoef 4 hours ago

    Would it be appropriate to use :surprised_pikachu_face:?

    I mean, I no longer know who to trust. It feels like the only solution is to go live in a forest and disconnect from everything.

    • greggoB 4 hours ago

      This has been my default expected reaction since Nov 2024. So I'd say so.

      Also feel you wrt living in a forest and leaving this all behind.

  • BurningFrog 4 hours ago

    Keep in mind that those qualified to do research in a field have typically worked in that industry.

    Because that's where people with that expertise work.

    • duskwuff 2 hours ago

      And, in many cases, because that's where funding exists.

      This comes up somewhat frequently in discussions of pet food. Most of the companies doing research into pet food - e.g. doing feeding studies, nutritional analysis, etc - are the manufacturers of those foods. This isn't because there's some dark conspiracy of pet food companies to suppress independent research; it's simply because no one else is funding research in the field.

  • austin-cheney 4 hours ago

    I bet the same is true with AI and bitcoin social media posts and research.

    • fenwick67 4 hours ago

      And cigarettes and fossil fuels

  • potato3732842 2 hours ago

    Literally every industry is like this.

    Academia is basically a reputation-laundering industry. If the cigarette people said smoking is good, or the oil people said the same about oil, you'd never believe them. But they and their competitors fund labs at universities, and sure, those universities may publish stuff the funders don't like from time to time, but overall things are going to trend toward "not harmful to benefactors". And then what gets published gets used as the basis for decisions on how to direct your tax dollars, deploy state violence for or against certain things, etc., etc. And of course (some of) the academics want to do research that drives humanity forward or whatever, but they're basically stuck selling their labor (after several layers in between) to the donors for decades in order to eke out a little bit of what they want.

    It's not just "how the sausage is made" that's the problem. It's who you're sourcing the ingredients for, who you're paying off for the permit to run the factory, who's supplying you labor. You can't fix this with minor process adjustments.

  • chaps 3 hours ago

    Surprised it's not more, but it makes sense when you consider the sources of the data. Gotta have data sharing agreements, yeah?

  • devradardev an hour ago

    This is a clever approach to reduce token usage. In my experience with Gemini 3 for code analysis, the biggest bottleneck isn't just the logic, but the verbosity of standard languages consuming the context window. A targeted intermediate language like this could make 'thinking' models much more efficient for complex tasks.

    • irishcoffee 32 minutes ago

      Almost like… a simplified set of instructions you would give a computer that get distilled down into machine code that executes on bare metal!

  • princevegeta89 3 hours ago

    No surprise. Social media is a shithole.

  • h4kunamata 3 hours ago

    Since when is this news??

    Whole industries have been paid off for decades; the hope is independent journalists with no ties to anybody but the public they want to reach.

    Find one independent journalist on YT with lots of information and sources, and you will notice how we have been living in a lie.

  • Jadiiee 3 hours ago

    My jaw stayed in place

  • schmuckonwheels 3 hours ago

    Experts say

  • Braxton1980 22 minutes ago

    This means their research should be examined in more detail, but unless there's evidence they are being dishonest in some sense, it doesn't invalidate their findings.

  • hsuduebc2 3 hours ago

    I’m half expecting headlines thirty years from now to talk about social media the way we now talk about leaded gasoline, a slow, population-wide exposure that messed with people’s minds and quietly dragged down cognition, wellbeing, and even the economy across whole generations.

  • shevy-java 4 hours ago

    A system built to yield to perfect lobbyism.

  • hsuduebc2 3 hours ago

    This is a ridiculously recurring pattern.

  • AlexandrB 4 hours ago

    Same as it ever was. You see the same kind of thing in the food industry, pharmaceutical industry, tobacco industry, fossil fuel industry, etc. On the one hand it's almost inevitable: who (outside of the government) is going to care enough about the results of stuff like this to fund it, if not the industry affected? You also often need the industry's help if you're doing anything that involves large sample sizes or some kind of mass production.

    On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.

    • imiric 4 hours ago

      The people in these industries are collectively responsible for millions of preventable deaths, and they, their families, and generations of their offspring are and will be living the best lives money can buy.

      And yet one person kills a CEO, and they're a terrorist.

      • terminalshort 4 hours ago

        Large and complex systems are fundamentally unpredictable and have tradeoffs and consequences that can't be foreseen by anybody. Error rates are never zero. So basically anything large enough is going to kill people in one way or another. There are intelligent ways to deal with this, and then there is shooting the CEO, which will change nothing because the next CEO faces the exact same set of choices and incentives as the last one.

        • BrenBarn 3 hours ago

          Well, given what you said, one obvious mechanism is to cap the sizes of these organizations so that any errors are less impactful. Break up every single company into little pieces.

          • terminalshort 3 hours ago

            That doesn't really help, because the complexity isn't just internal to the companies; it also exists in the network between entities that make up the industry. It may well even make things worse, because it is much harder to coordinate. E.g., if I run into a bug caused by another team at work, it's massively easier to get that fixed than if the bug is in vendor software.

            In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.

            We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it even though it is more reliable than the former scenario.
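
            (A quick back-of-the-envelope check of that last claim, counting expected outage events per user per year:)

                # Scenario A: each day, 1% of users hit an outage.
                outages_per_user_a = 0.01 * 365   # ~3.65 outage events/user/year

                # Scenario B: once a year, 100% of users hit an outage.
                outages_per_user_b = 1.0 * 1      # 1 outage event/user/year

                # B makes headlines, but A is ~3.65x worse for the average user.
                print(outages_per_user_a / outages_per_user_b)  # 3.65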

        • imiric 3 hours ago

          I'm not talking about unpredictable tradeoffs and consequences.

          I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.

          [1]: https://en.wikipedia.org/wiki/Tobacco_Institute

          [2]: https://en.wikipedia.org/wiki/Purdue_Pharma

          • terminalshort 2 hours ago

            Well, those get handled. Purdue was sued into bankruptcy, and the Tobacco Institute was shut down when the industry was forced to settle for $200 billion in damages.

            • imiric 2 hours ago

              So human lives have a price tag, and companies can kill millions for decades as long as they pay for it. Gotcha.

        • asdff 3 hours ago

          Pretty predictable what happens when you deny coverage for a treatment someone needs

          • terminalshort 3 hours ago

            But do they need it? How do you know? And don't say because the doctor said so, because doctors disagree all the time. When my grandfather was dying in his late 80s, the doctor said there was nothing he could do. So his children took him to another doctor, who said the same. And then another doctor, who agreed with the first two. But then they took him to a 4th doctor, who agreed to do open heart surgery, which didn't work, and if anything hastened his inevitable death due to the massive stress. The surgery cost something like 70 grand and they eventually got the insurance company to pay for it. But the insurance company should not have paid for it because it was a completely unnecessary waste of money. And of course there will be mistakes in the other direction because this just isn't an exact science.

            • asdff 3 hours ago

              At that point, why cover anything at all if the doctor could always be wrong?

              • terminalshort 3 hours ago

                Stupid question. If you have a better way to make decisions on insurance coverage then state it.

                • asdff 2 hours ago

                  Why is it on me to come up with a new model for healthcare? I can acknowledge shortcomings of the present system without having to come up with solutions for them.

          • quesera 3 hours ago

            It would be a clean and compelling narrative, if Luigi or someone he loved was denied coverage for a necessary treatment!

            But that doesn't seem to be true at all. He just had a whole lot of righteous anger, I guess. Gotta be careful with that stuff.

            • asdff 2 hours ago

              Why does it matter if it personally occurred to him or someone related to him? It happens to plenty of people. You can have empathy for people not bound by blood.

              • quesera 2 hours ago

                Of course you can. But where does it stop?

                There is a great deal of injustice in the world. Psychologically healthy adults have learned to add a reflection step between anger and action.

                By all evidence, Luigi is a smart guy. So one can only speculate on his psychological health, or whether he believed that there was an effective response to the problem which included murdering an abstract impersonal enemy.

                I'm stumped, honestly. The simplest explanations are mental illness, or a hero complex (but I repeat myself). Maybe we'll learn someday.

      • windowpains 4 hours ago

        You say "a CEO" like it's just a fungible human unit. In reality, a CEO is much, much more valuable than a median human. Think of how many shareholders are impacted, many of them little old grey-haired grannies dependent on their investments for food, shelter, and medical expenses. When you consider the fuller context, surely you see how sociopathic it is to shrug at the killing of a CEO, let alone the CEO of a major corporation. Or maybe sociopathy is the norm these days for the heavily online guys.

        • asdff 3 hours ago

          The CEO literally is a fungible human unit. Any job can be learned.

          • terminalshort 3 hours ago

            In that case it also accomplishes nothing to kill him because another will just take his place. So either way you lose.

            • asdff 3 hours ago

              A message is certainly sent in the process that previously was going unheard.

              "Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”

              Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals. "

              https://www.beckerspayer.com/payer/one-year-after-ceo-killin...

        • quesera 3 hours ago

          CEOs are not special humans. They know lots of people, but that's not an unusual trait.

          When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.

          But yeah, shooting people is a bad decision in almost all cases.

  • BrenBarn 3 hours ago

    But what undisclosed ties might this study itself have?