The Human Cost of Our AI-Driven Future

(noemamag.com)

37 points | by gmays 4 hours ago

37 comments

  • throwup238 2 hours ago

    > Sarcasm, cultural context and subtle forms of hate speech often slip through the cracks of even the most sophisticated algorithms.

    I don't know how this problem can be solved automatically without something that looks a lot like AGI and can monitor the whole internet to learn the evolving cultural context. AI moderation feels like self-driving cars all over again: the happy path of detecting and censoring a dick pic - or self-driving in perfect California weather - is relatively easy, but automating the last 20% or so of content seems impossibly out of reach.

    The "subtle forms of hate speech" is especially hard and nebulous, as dog whistles in niche communities change adversarialy to get past moderation. In the most subtle of cases, there are a lot of judgement calls to make. Then each instance of these AGIs would have to be run in and tailored to local jurisdictions and cultures because that is its own can of worms. I just don't see tech replacing humans in this unfortunate role, only augmenting their abilities.

    > The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to the immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers power our digital world.

    This part never really changed. Mechanical Turk is almost 20 years old at this point, and call center outsourcing is hardly new. What's new is just how much human-generated garbage we force them to sift through on our behalf. I wish there were a way to force these training-data and moderation companies to provide proper mental health care.

    • hcurtiss an hour ago

      I think there's a genuine conversation to be had about whether there even is such a thing as "hate speech." There's certainly "offensive speech," but if that's what we're going to try to eliminate, then it seems we'll have a bad time, as offense is definitionally subjective.

      • danans 9 minutes ago

        > I think there's a genuine conversation to be had about whether there even is such a thing as "hate speech."

        It may be fuzzy at the far edges, but any speech that calls for the elimination of, marginalizes, dehumanizes, or denies the human or civil rights of a group of people is right at the heart of the meaning of hate speech.

        That definition still leaves huge amounts of space for satire, comedy, political and other forms of protected speech, even "offensive speech".

      • o11c an hour ago

        I'm not sure "offensive" is actually subjective. Rather, I dare say it's morally obligatory to be offensive at times, but different communities put the line in different places.

        Stating the position "torture is bad" is enough to get you banned from some places (because it's offensive to people who believe that it's okay as long as the victims are less-than-human).

      • yifanl an hour ago

        Is the claim that there's some special property that makes it impossible to convey hate, as opposed to any other type of idea, through text?

        That seems extremely wrong, especially in this context, given that LLMs make no attempt to formalize "ideas"; they're only interested in syntax.

        • mewpmewp2 31 minutes ago

          Maybe the name "hate speech" is poorly chosen, since it's not necessarily about "hate".

      • szundi an hour ago

        There is hate speech, like when someone tells people that other people are not human and must be eliminated. It has happened a lot, and it's happening now in wars you read about.

        • epicureanideal 16 minutes ago

          But when "hate speech" becomes censorable and a crime, people are incentivized to interpret their opponents' statements as broadly as possible and claim they should be read as dehumanizing or encouraging violence.

          This can be done from both sides. Examples:

          Not sufficiently (for whoever) enforcing immigration laws? "Trying to eliminate the majority population, gradual ethnic cleansing".

          Talking about deporting illegal immigrants? "The first step on the road to murdering people they don't want in the country."

          And if the local judiciary or law enforcement is aligned with the interests of one side or the other, they can stretch the anti-hate-speech laws to use the legal system against their opponents.

      • mewpmewp2 an hour ago

        There is a definition for hate speech though.

        • reginald78 43 minutes ago

          Actually, I think the problem is there are many definitions of hate speech.

          • mewpmewp2 37 minutes ago

            I think there's only one main definition, which is clear in spirit, but it's of course possible that people may interpret it differently.

            • jacobr1 12 minutes ago

              That may be so, but the degenerate (and, as this thread suggests, common) case is to expand the notion to any offensive speech that is disliked by the offended person. That is much more subjective and hard to define. The fact that we have some better definitions doesn't really help. The desire to censor speech is widespread, for many different and often conflicting reasons. And the fact that there might be a rough academic consensus on where to draw the lines (at least theoretically, if not practically) isn't good enough in practice to actually define clear rules.

    • whiplash451 2 hours ago

      The difference between adult material detection and self driving is that the former is fundamentally adversarial.

      Humans will spend a lot of energy to hide porn content on the internet, while self-driving might benefit from a virtuous circle: once enough Waymos are out there, people will adapt and learn to drive/bike/walk alongside them. We have a fundamentally good reason to cooperate.

      I am not a self-driving fanatic but I do believe that a lot of edge cases might go away as we adapt to them.

      • nradov an hour ago

        Animals, small children, and random objects dropped on the road will never "adapt" to self-driving. Good enough solutions will eventually be found for those scenarios but it's exactly those millions of different edge cases which make the problem so hard. A step ladder that falls off a work truck (like I saw on the freeway yesterday) isn't exactly "adversarial" but it will sure ruin your car if you run over it.

        • shadowgovt 11 minutes ago

          Animals, small children, and random objects dropped on the road don't adapt to human driving either; they aren't generally considered the core concern space. If it's physically possible for a self-driving car to do better than a human in those contexts, it will, but the project isn't designed around beating humans in such corner cases (doing worse than a human, however, is not acceptable).

    • datadrivenangel 2 hours ago

      There's also the issue of things that are true and mean/hateful.

      If my GP says that I'm overweight, which is associated with negative health outcomes, that's factual. If someone on twitter calls me a fatso, that's mean/hateful.

    • tomjen3 30 minutes ago

      By definition, a dog whistle appears to mean nothing specific to anyone outside the target group. So even human moderators can't moderate it.

    • hn_throwaway_99 2 hours ago

      > In the most subtle of cases, there are a lot of judgement calls to make.

      IMO there is an even more important point: beyond being a "judgement call", humans are far from agreeing on what the "right answer" is here. It is an inherently impossible problem to solve, especially at the edge cases.

      Just look at the current debate in the US. There are tons of people screeching from the right that large online social networks and platforms censor conservative views, and similarly there are tons of people screeching from the left about misinformation and hate speech. In many cases they are talking about the exact same instances. It is quite literally a no-win situation.

    • drivebyhooting 2 hours ago

      What is a dog whistle? Is it just an opinion people disagree with and so rather than engage with it they assume malice or ill intent?

      I really don’t get it.

      • kevingadd 2 hours ago

        https://en.wikipedia.org/wiki/Dog_whistle_(politics)

        To simplify: a literal dog whistle makes a sound that's too high-pitched for humans to hear, but dogs can hear it.

        So it's speech that the speaker's ingroup recognizes as meaning something other than what the literal interpretation would mean. It's coded speech, usually for racist, sexist or even violent purposes.

        An adjacent concept is giving orders without giving orders, i.e. https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...

      • drivebyhooting 2 hours ago

        Yes, I too can google. And in fact I did. Evidently, questioning the validity of the dog whistle concept is a dog whistle itself.

        Meanwhile a small consolation is that https://slatestarcodex.com/2016/06/17/against-dog-whistles/ agrees with me. So I’m in decent company.

        • arp242 14 minutes ago

          > Evidently, questioning the validity of the dog whistle concept is a dog whistle itself.

          I don't see anyone saying that.

        • Lerc an hour ago

          Ever watch the West Wing? The first episode should do it.

          One of the signs of dog whistle use is when the situations in which a term appears raise the probability of intent beyond any credible level of coincidence.

          • drivebyhooting an hour ago

            I don’t watch TV.

            To my mind, this dog whistle moniker is more a tool for suppressing dissenting views than for identifying covert bigotry.

            Apparently all the critical thinking has already been done off stage and now only those whom we agree with are tolerated. The others are shunned as racists or worse.

            • neaden 23 minutes ago

              This is a world where Donald Trump, a man who, among other things, calls his political opponent mentally disabled, is a serious front-runner for the highest office in the nation. I have no idea how you can look at contemporary America and say people aren't allowed to say offensive things.

            • shadowgovt 7 minutes ago

              > Apparently all the critical thinking has already been done off stage

              In general, yes: there is a long history of conversation on various topics, actions that have caused trust levels to be preset among various groups, and meta-symbols constructed atop that information. Those new to the conversation may be unaware of the context.

              > and now only those whom we agree with are tolerated

              I'm not sure who "we" is in that context. In the US, currently, the polity is very divided because several key events have, in a sense, caused a "mask off" moment in the mainstream of both political parties, making it difficult for anyone to believe either one is willing to share power.

              (as a side note: rhetorical questions don't usually convey well through text media. If you didn't literally mean "I really don't get it" when you said you didn't get it, making clear you are being rhetorical could be considered polite).

  • klabb3 28 minutes ago

    This is a really poorly informed article, almost unbearable to read due to its conflation of issues. Content moderation existed before modern AI. The article then claims that most moderation decisions are actually made by (exploited) human labor, which I find extremely difficult to believe – even with simpler classifiers. Yes, Amazon used human labor for their small-scale (later shut down) stores. We have seen that trick used to drive product hype; it happens. That does not mean FB, Instagram, etc. use human labor for “nearly all decisions”. But even if they did, “AI” did not create the gore/csam/abuse content (again: not yet), nor the need to moderate public, ad-driven, cesspool social media. That's a different issue, with different economics and incentives.

    There are a million things to criticize AI for, but this take is domain-illiterate – they’re simply drawing a connection between the hyped and fancy (currently AI) and poor working conditions in one part of the tech sector (content moderation).

    Look, I'm sure the “data industry” has massive labor issues; heck, these companies treat their warehouse workers like crap. Maybe there are companies that exploit workers more in order to train AI models. But the article is clearly about the moderation of human-created content for social media.

    Of all the things AI does, it is pretty good at determining what’s in an image or video. Personally I think sifting through troves of garbage for abusive photos and videos (the most traumatizing for workers) is one of the better applications for AI. (Then you’ll see another sob story article about these people losing their jobs.)

  • sdenton4 2 hours ago

    Sure, there's no 'digital savior' (as the article puts it) - these are tools which can often help triage the great majority of 'boring' cases, focusing human attention where it is most needed. In that sense, they are multipliers for the effectiveness of human labor, which is exactly what you want out of any given technology.

    • kevingadd 2 hours ago

      It gets tricky though. Let's say that 90% of your 'bad posts' are just basic stuff AI can handle, like insults or spam.

      You deploy an AI to moderate, and it lets you cut your moderation workforce by 80%. Maybe you're a generous person, so you cut by 50% instead and the remaining moderators aren't as overworked anymore. (Nobody's going to actually do this, but hey, let's be idealistic.)

      Costs are down, things are more efficient. Great! But there's a little problem:

      Before, 90% of the posts your moderators looked at were mundane stuff. They'd stare at it for a moment, evaluate the context, and go 'yeah this is a death threat, suspend account.'

      Now all the moderators see is stuff that got past the AI or is hard to classify. Dead bodies, CSAM, racist dogwhistle screeds, or the kind of mentally unhinged multi-paragraph angry rants that get an account shadowbanned on places like HN. Efficiency turns the moderator's job from 'fairly easy with occasional moments of true horror' into 'a nonstop parade of humanity's worst impulses, in front of my face, 40 hours a week'.
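
      A back-of-the-envelope sketch of that selection effect (hypothetical Python; every number here is invented for illustration):

          # Made-up volumes: 1,000 bad posts/day, 90% of them the "basic
          # stuff AI can handle", the rest the truly awful material.
          posts_per_day = 1000
          mundane_share = 0.90
          ai_catch_rate = 1.0    # assume the AI auto-handles all mundane cases

          moderators_before, moderators_after = 20, 10   # "cut by 50%"

          horror = posts_per_day * (1 - mundane_share)
          mundane_left = posts_per_day * mundane_share * (1 - ai_catch_rate)
          queue_after = horror + mundane_left

          # Before: 50 posts per moderator per day, 90% of them mundane.
          # After: 10 posts per moderator per day, ~100% of them horrific.
          print(posts_per_day / moderators_before, "posts/mod/day before")
          print(queue_after / moderators_after,
                f"posts/mod/day after, {horror / queue_after:.0%} horrific")

      The volume per moderator drops, but the mix shifts almost entirely to the worst material.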

      • xtreme an hour ago

        Your conclusion does not follow from your premise. AI moderation should easily catch the worst and most obvious offenders. The examples you gave stand out clearly from non-offensive content and are easy to catch with high confidence, so human moderators will only have to look at content where the AI has low confidence in its classification. In fact, AI will reduce the likelihood of human moderators ever seeing traumatic content.
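
        A concrete sketch of that routing (hypothetical Python; the labels, thresholds, and function names are made up, not any platform's real API):

            from dataclasses import dataclass

            @dataclass
            class Verdict:
                label: str         # e.g. "ok", "spam", "gore", "threat"
                confidence: float  # model probability for that label

            # Auto-action only the high-confidence calls; everything in the
            # ambiguous middle band is queued for human moderators.
            def route(verdict: Verdict, remove_at=0.98, allow_at=0.95) -> str:
                if verdict.label != "ok" and verdict.confidence >= remove_at:
                    return "removed"           # obvious worst content, no human sees it
                if verdict.label == "ok" and verdict.confidence >= allow_at:
                    return "published"
                return "human_review_queue"    # only the low-confidence cases

        The human queue then contains only the cases the model can't call confidently.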

      • AIorNot 2 hours ago

        I don't think the article's point is about auto-moderation of harmful content.

        Instead, it's about being empathetic to the human suffering this work entails and finding ways to treat these contractors as humans instead of 'far-off resources'.

        Outsourcing this dirty and dingy work to African countries in this way, without caring for the 'contractors', is a recipe for the dehumanization of people.

        https://www.sama.com/our-team

        Their team page is a funny reminder of the classism and racial disparity in the world: white people at the top and black people at the bottom... lol. I know they aren't racially driven, and the jobs do have real economic value for the contractors, but our current hyper-capitalistic global system is mostly set up to exploit offshore people rather than elevate them.

        The world is what it is.

  • zerop 31 minutes ago

    One big cost I can see right away is environmental: water shortages and other issues in the areas where AI data centers run. No one is paying attention to that now.

    Getting better models to the edge would help with this to some extent, as it would decentralise the runtime of AI (model training would still happen in data centers).

  • Animats 35 minutes ago

    This is the human cost of non-AI moderation. Not AI. This job is going to be automated, if it isn't already.

  • rob_c an hour ago

    And the loom will kill little old Blighty...

    Nothing here contributes to a conversation about fixing anything, which is a shame.

  • Der_Einzige 2 hours ago

    These companies go to places like Africa because they know that labor laws there are weak to non-existent and would never be enforced anyway. It's the same reason so many ships at sea fly the flag of Liberia.

    Make no mistake: it's a strategic choice to have these individuals bear the brunt of the trauma. Silicon Valley acts as if African lives have no value.