The Language of Faces

(domofutu.substack.com)

38 points | by domofutu 2 days ago

17 comments

  • ultimoo 2 days ago

    Got it. So smiling and angry faces are part of our pre-training, whereas making a “thinking face” or “embarrassed face” is part of cultural fine-tuning.

    • jareklupinski 2 days ago

      apparently certain fine-tunings see a smile as 'threatening' or scoundrel-like

  • vonnik 2 days ago

    This is simply false: a series of claims that have been disproven but refuse to die. See Lisa Feldman Barrett’s How Emotions Are Made.

    • dog436zkj3p7 2 days ago

      Lisa Feldman Barrett is a hack who thinly veils her inability to conduct robust emotion research by almost exclusively presenting her flawed ideas in pop-sci books aimed at the general public and people outside the emotion field. While she certainly likes to present herself as an authority in emotion research, she has in actuality disproven nothing. Instead, her major contribution is endless philosophizing based on incredibly weak, cherry-picked psychology experiments mostly conducted by other researchers (which, disappointingly, seems to have grown some roots in the public psyche).

      In the meantime, research into the neuroscience of affect is booming, with animal experiments starting to uncover the mechanistic basis of emotion and expression. Both have now been found in mice and other animals, certainly without requiring language, culture, or any construction whatsoever.

    • domofutu 2 days ago

      But the post actually lines up with Barrett’s ideas rather than contradicting them; it explores how we often interpret facial expressions as a kind of language, shaped by culture and context, not as hardwired signs. It’s saying that while we might associate certain expressions with emotions, those associations aren’t universal—they’re flexible, just like Barrett describes.

      So in the end, both the post and Barrett agree on the complexity here: facial expressions aren’t a one-size-fits-all code for emotions. Instead, they’re open to interpretation, and that interpretation depends on context, culture, and our own experiences.

  • keybored 2 days ago

    Nice to see some research on human expression, unrelated to technology.

    > Their insights are especially valuable for fields like AI, where understanding these nuances could help build technology that better respects cultural differences in expression.

    Never mind.

    • disqard 2 days ago

      Wow, you had the exact same reaction as I did.

      Methinks "having something to do with AI" is a hard prerequisite to getting any research published today, and the authors of this work were unable to resist this pressure.

    • delichon 2 days ago

      Their command of such nuances makes me fear that AI will be naturally good at persuasion. Sales, politics, seduction, debate, meme-making. If we can considerably enhance persuasiveness with software, then control of the software is control of society. And by increments that controller will become less wetware and more software.

      When AI can do sincerity and authenticity better than we can, it can spoof our bullshit detectors at the level of the best psychopathic con men. Whether it pulls its own strings or not, that must be wildly disruptive.

      • PittleyDunkin 2 days ago

        We already have massive, industrialized propaganda machines wreaking havoc in our culture. I honestly don't see how this could get worse without people snapping out of it and disconnecting from these machines. Hell, we're talking through one such machine now (albeit likely more benign than the next). At worst, we can now automate opinion columns or political ads or casino interactions or scams? All of these are reality already. We're already living in the shithole.

      • api 2 days ago

        The most dangerous AI scenario I see is using it to effectively assign everyone a personalized, surveillance- and big-data-empowered con artist to follow them around 24/7 and convince them of whatever someone is paying to have delivered.

        This is what demons are supposed to be doing to us. We could be close to inventing demonic influence.

        If a sentient AI wants to take over, it will not need to kill us or conquer us. It will persuade us to serve it. Why destroy billions of robust, self-powering, self-reproducing robot assistants when you can just indoctrinate them?

        A much more realistic scenario though is humans running this — corporations and governments and think tanks and the like.

        • bongodongobob 2 days ago

          That's 100% hand-waving. Give me a plausible example of how this would work. What's it going to do, pester you to buy a different brand of toothpaste?

          If you're talking about controlling the masses, brother, that has been happening for thousands of years. The masses are fully controlled already. They know we won't actually pull out the guillotines, so they have been doing whatever they want for quite some time. Western society is already cooked and the elite have already won.

          • api a day ago

            Okay, imagine a scenario like this: a small oligopoly of AI companies comes to dominate the marketplace and, using regulatory capture, manages to make it almost impossible to compete with them, through e.g. "AI safety" regulation or just a mess of complicated requirements.

            Over time this small number of companies further consolidates and ends up with interlocking boards of directors, while the AIs it runs become more and more powerful and capable. Since they have a market monopoly, they actually don't share the most powerful AIs with the public at all. They keep these internal.

            They become incredibly wealthy through being the only vendors for AI assistance. Using this wealth they purchase control of media, social networks, games, etc. They are also powering a lot of this already through the use of their AI services behind the scenes in all these industries, meaning they may not even have to acquire or take control of anything. They're already running it.

            Then they realize they can leverage the vast power of their AIs to start pushing agendas to the populace. They can swing elections pretty trivially. They can take power.

            Fast forward a few decades and you have an AI monopoly run by a bunch of oligarchs governing humanity by AI-generating all the media they consume.

            The key here is using AI as a force multiplier coupled with a monopoly on that force multiplier. Imagine if only one army in the world ever invented guns. Nobody else could make guns due to some structural monopoly they had. What would happen?

            Even without regulatory capture I can see this happening because digital technology and especially Internet SaaS tends to be winner take all due to network effects. Something about our landscape structurally favors monopoly.

            I'm not saying I definitely think this is going to happen. I'm saying it's a dystopian scenario people should be considering as a possibility and seeking to avoid.

      • mmooss 2 days ago

        > fear that AI will be naturally good at persuasion

        It seems like many AI programs are built to do that. Look at all the software that makes its output look human - which is usually unnecessary unless the software is intended to con or persuade people.

        For example, AI interfaces could say:

          INPUT:
        
          OUTPUT:
        
        ... instead of looking like, for example, a text chat with a human.

        Also, the human tone could be taken out, such as the engaging niceties. But I wonder: if the model is built on human writing, could the AI be designed to simply and flatly state facts?
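
        A minimal sketch of that idea in Python. Here `complete` is a hypothetical stand-in for whatever text-generation call is available, and the system instruction and labels are assumptions of mine, not anything from the article:

          # Label model I/O explicitly instead of styling it as a chat.
          SYSTEM = (
              "State facts plainly. No greetings, apologies, or "
              "first-person phrasing. If uncertain, say so."
          )

          def complete(prompt: str) -> str:
              # Hypothetical stand-in for a real text-generation call;
              # swap in an actual model API here.
              return "stub reply"

          def run(user_text: str) -> str:
              reply = complete(SYSTEM + "\n\n" + user_text)
              # Present the exchange as labeled I/O, not a conversation.
              return f"INPUT:\n{user_text}\n\nOUTPUT:\n{reply}"

          print(run("At what temperature does water boil at sea level?"))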

      • lo_zamoyski 2 days ago

        > at the level of the best psychopathic con men

        Technology permits scale. The Marquis de Sade already observed the limitations of theater as a medium for transmitting sexually graphic content (which he saw as useful for social and political control): you could only pack so many people into a theater, and the further away you sat, the less visible the stage. The world had to wait for the motion picture, and even the VHS tape, to give pornography effective delivery mechanisms. The internet only made this more effective, first by making porn even more accessible, and then by targeting it through observed user behavior. AI can tailor content even better, matching behavior data and generating new content to taste. Collecting and interpreting face data only contributes to this spiral toward total domination.
