The Whispering Earring

(croissanthology.com)

112 points | by ZeljkoS 2 days ago

97 comments

  • CoopaTroopa 2 days ago

    "The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."

    https://web.archive.org/web/20121007235422/http://squid314.l...

    • AndrewDucker 2 days ago

      As I said in a comment on that post, 13 years ago: "any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power"

      • ameliaquining 2 days ago

        True, but concerns about LLMs with anything like current capabilities are of the "Truly Part of You" flavor, not the "becoming too powerful" flavor.

    • kazinator a day ago

      But before stating that intent, he admits "well, that parable didn't work". The strongest interpretation of which is that the parable he wrote didn't succeed in being about "the dangers of becoming too powerful".

      We have to read "was about" as "was (supposed to be) about".

      What the parable ends up being about is whatever consistent interpretation is well supported by its actual text!

      In the parable, the Whispering Earring is a kind of character. It has autonomy and agency; a mind of its own, separate from that of the wearer. It generates ideas and suggests them to the wearer, eventually rendering most of their brain unnecessary. (The implication being that the individual, as a sentient being, has wasted away and has been effectively replaced by the earring, as if possessed in the classical sense.)

      Someone who could be just as powerful in making all the right decisions guaranteed to make them happy, but using their own brain instead of taking suggestions from a whispering daemonic oracle, would not waste away and be replaced; their brain would have to be doing remarkable work and developing in the process rather than atrophying.

      I suspect that it would actually be very difficult to repair the parable, while retaining the key element of the Whispering Earring as an autonomous entity, into being about "the dangers of becoming too powerful oneself". (Has the author tried?)

    • bananaflag 2 days ago

      Thanks! Even though I have the whole Squid314 archive, I had forgotten about this follow-up.

  • summa_tech 2 days ago

    A distant relative, no doubt, of Stanislaw Lem's "Automatthew's Friend" (1964). A perfectly rational, indestructible, selfless, well-meaning in-ear AI assistant. In the end, out of nothing but the deepest care for its owner's mental state in a hopeless situation, it advocates efficient and quick suicide.

  • djoldman 2 days ago

    > It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest....The earring is never wrong.

    > There are no recorded cases of a wearer regretting following the earring’s advice, and there are no recorded cases of a wearer not regretting disobeying the earring. The earring is always right.

    > ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.

    > Niderion-nomai’s commentary: It is well that we are so foolish, or what little freedom we have would be wasted on us. It is for this that Book of Cold Rain says one must never take the shortest path between two points.

    The piece implies that

    1. at least occasionally one should choose to do something one will regret.

    2. not knowing what will make one happy is part of what makes one free.

    I'm not sure I agree with these (it seems that 1. is a paradox) but it is an interesting thought experiment.

    • indoordin0saur 2 days ago

      I think it's less confusing when you consider the very first thing the earring says: "better for you if you take me off". The wearer should rationally always regret not following its advice, including that first thing.

      I think the paradox is here, and it comes from cheeky use of misleading language:

      > ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.

      The wearer doesn't really live any sort of life. Once it fully integrates with you, your brain is mush; you're no longer experiencing anything. At some fuzzy point in there you've basically died and been replaced by the earring.

    • joshkel 2 days ago

      > at least occasionally one should choose to do something one will regret.

      Not necessarily. My take was that the practice of choosing may well be more valuable than the harm of the occasional regretted choice.

    • munificent 2 days ago

      Statements that involve the future are always linguistically vague.

      In your paradoxical sentence, "will regret" could be interpreted as either "know at that moment that they will regret" or "come to know after the fact that they regret it".

      The former is a paradox, but the latter isn't.

      As life advice, I think it works better when you consider it amortized over a collection of choices instead of a set of serial choices each of which it must be rigidly applied to: One should make a set of choices using a strategy that leads some of them to be likely to be regretful (but presumably without being able to predict ahead of time which ones will be).

    • nine_k 2 days ago

      > at least occasionally one should choose to do something one will regret.

      Negative experience is crucial for learning, unfortunately. "If you never fail you don't try hard enough", etc. This is trivially understood in physical training: you have to get yourself exhausted to become stronger. It's much less of an accepted view in, so to say, mental training: doing things that you later regret may teach you something valuable that always avoiding such decisions does not.

      I do not necessarily support or reject this view, I'm just trying to clarify the point.

    • cjameskeller a day ago

      We are told:

      >"It does not always give the best advice possible in a situation. It will not necessarily make its wearer King, or help her solve the miseries of the world. But its advice is always better than what the wearer would have come up with on her own."

      I think one very simple explanation would be that this comes down to a matter of exploration vs exploitation. Since it is only giving "better" advice, and not even 'locally optimal', there is reason to favor exploring vs merely following the advice unquestioningly.
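
      (To make the exploration-vs-exploitation framing concrete: this is the classic multi-armed bandit trade-off. Below is a minimal epsilon-greedy sketch in Python. It is purely illustrative, not anything from the story, and the arm values are made up. The point is that an agent which always exploits its current best estimate, i.e. always "follows the advice", can lock onto a merely-better arm forever, while occasional exploration finds the best one.)

        import random

        def run_bandit(true_means, epsilon, steps=5000):
            # Keep a running-mean estimate of each arm's value; usually pull
            # the best-looking arm, but explore at random with prob. epsilon.
            counts = [0] * len(true_means)
            estimates = [0.0] * len(true_means)
            total = 0.0
            for _ in range(steps):
                if random.random() < epsilon:
                    arm = random.randrange(len(true_means))   # explore
                else:
                    arm = estimates.index(max(estimates))     # exploit ("follow the advice")
                reward = random.gauss(true_means[arm], 1.0)   # noisy payoff
                counts[arm] += 1
                estimates[arm] += (reward - estimates[arm]) / counts[arm]
                total += reward
            return total / steps

        arms = [0.4, 0.6, 0.9]
        print(run_bandit(arms, epsilon=0.0))   # pure exploitation: often stuck on a worse arm
        print(run_bandit(arms, epsilon=0.1))   # some exploration: average approaches 0.9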

      A more complex, but ultimately comprehensive, answer is that free will consists, at least in one aspect, in the ability not only to choose one's goals or means, but also what _aspect_ of those various options to consider "good" or "better".

      And if one were to say that all such considerations ultimately resolve back to a fundamental desire to be "happy", to me, this seems to be hand-waving, rather than addressing the argument, because different people have different definitions of the "happy" end-state. If these differences were attributed fully to biology & environment, the story loses its impact, because there was never free will in the first place. If, while reading the story, we adopt a view that genuine free will exists, and hold some kind of agnosticism about the possible means by which that can be so, then it seems reasonable to attribute at least some of the differences in what the "happy" end-state looks like to the choices made by the people, themselves.

      Given that kind of freedom, unless one has truly perfect knowledge (beyond the partial knowledge contained in the advice of the earring), the pursuit of one's goals seems to unavoidably entail some regrets. And with perfect knowledge, well... The kind of 'freedom' attributed, for example, to God by philosophers like Thomas Aquinas, is explicitly only analogous to our own, and is understood to be an unchanging condition, rather than a sequential act.

      (As a final note: One might wonder what this 'freedom to choose aspects' approaches as an 'asymptotic state' -- that is, for an immortal person. And this leads to metaphysical concerns -- of course, with some things 'smuggled in' by the presumption of genuine freedom, already. Provided one agrees that human nature undeniably provides some structure to ultimate desires/"happiness", the idea of virtue ethics follows naturally, and from there many philosophers have arrived at similar notions of some kind of apotheosis as a stable end-state, as well as the contrary state of some kind of scattering or decay of the mind...)

  • Jun8 2 days ago

    Compare/contrast the Whispering Earring/LLM chat with The Room from Stalker; each is terrifying in its own aspect: one because it eventually coaxes you into becoming a shallow shell of yourself, the other by plucking an unexpected wish from the deepest part of your psyche. I wonder what the Earring would advise if one were to ask it whether one should enter The Room.

  • throw432189 2 days ago

    Two points I liked:

    1. I like that the first bit of advice is to take it off. It's very interesting that in this story very few people take that advice.

    2. It recommends whatever would make you happiest in that moment, but not what would make the best version of yourself happiest, or what would maximize happiness in the long term.

    Solving mazes requires some backtracking, I guess. Doing whatever will make you happiest in the moment won't make you happiest in the long run.

  • tacitusarc 2 days ago

    I think this ignores the internal conflict in most people’s psyche. The simplest form of this is long term vs short term thinking, but certainly our desires pull us in competing, sometimes opposite, directions.

    Am I the me who loves cake or the me who wants to be in shape? Am I the me who wants to watch movies or who wants to write a book?

    These are not simply different peaks of a given utility function; they are different utility functions entirely.

    Soon after being put on, the whispering earring would go insane.

  • abeppu 2 days ago

    I want someone to try building a variant that just gives you timely cues about generally good mental health practices. Suggestions could be contextually generated by a local-only app that listens to you and your environment, and delivered to a wireless earbud. When you're in a situation that might cause you stress, it reminds you to take some deep breaths. When you're in a situation where you might be tempted to react with hostility, it suggests that you pause for a few seconds. When you've been sitting in front of your computer too long, it suggests that maybe you'd like to go for a short walk.

    If the moral of the story is that having access to magically good advice is dangerous because it shifts us to habitual obedience ... can a similar device shift us to mental habits that are actually good for us?
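
    (For what it's worth, the non-magical core of such a device could be a plain rule table rather than a model giving open-ended advice. Here is a hypothetical Python sketch; the signal names, thresholds, and messages are invented for illustration, not a real API. The design point: the cues are fixed, content-free nudges toward practices you chose in advance, never situation-specific instructions, so there is no advice stream to grow dependent on.)

      import time
      from dataclasses import dataclass

      @dataclass
      class Context:
          stress: float          # e.g. inferred locally from voice prosody, 0..1
          hostility: float       # e.g. inferred from conversational tone, 0..1
          minutes_at_desk: int

      # Fixed nudges: the device never says what to do about the situation,
      # only reminds you of a habit you already decided you wanted.
      RULES = [
          (lambda c: c.stress > 0.7,         "Take a few deep breaths."),
          (lambda c: c.hostility > 0.6,      "Pause for a few seconds before reacting."),
          (lambda c: c.minutes_at_desk > 90, "Maybe go for a short walk."),
      ]

      COOLDOWN = 15 * 60  # seconds before repeating the same cue, to avoid nagging
      last_fired = {}

      def cues(context, now=None):
          now = time.monotonic() if now is None else now
          due = []
          for test, message in RULES:
              last = last_fired.get(message)
              if test(context) and (last is None or now - last > COOLDOWN):
                  last_fired[message] = now
                  due.append(message)
          return due

      print(cues(Context(stress=0.8, hostility=0.2, minutes_at_desk=120)))
      # -> ["Take a few deep breaths.", "Maybe go for a short walk."]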

    • ryandv 2 days ago

      The moral of the story is that neocortical faculties (vaguely corresponding to what distinguishes modern humans) depend on free will. If you want merely to enthral yourself to the voices of the gods a la Julian Jaynes' bicameral man, you can, but this is a regression to a prior stage of humanity's development - away from egoic, free-willed man, and backwards to more of a reactive automaton, merely a servant of (possibly digital) gods.

      • abeppu 2 days ago

        I think there's a meaningful difference between a tool to remind oneself to take a beat before speaking vs being told what to say. For example, cues that help you avoid an impulsive reaction of anger are, I think, a step away from being a reactive automaton.

        • patcon 2 days ago

          My sensibility is that agency is about "noticing". The content of information seems perhaps less important than the attention allocation mechanism that brings our attention to something.

          If you write all your own words, but without the ability to direct your attention to what needs words conjured around it, did you really do anything important at all? (Yes, that's perhaps controversial :) )

        • ryandv 2 days ago

          Anger is just another aspect of the human condition, and is absolutely justified in cases of grave injustice (case in point: Nazis, racism). It's not for some earring to decide when it is justly applied and when it is not; that is the prerogative of humanity.

          In either case none of this cueing or prompting needs to be exogenous or originate from some external technology. The Eastern mystics have developed totally endogenous psychotechnologies that serve this same purpose, without the need to atrophy your psyche.

          • abeppu 2 days ago

            Absolutely anger is sometimes justified. But people are also angry when e.g. someone cuts them off in traffic. The initial feeling of anger may not be appropriate. A cue to help you avoid reacting immediately from hostility isn't so much deciding whether anger is appropriate but giving you the space to make that judgement reflectively rather than impulsively. Even if anger is appropriate, the action you want to take on reflection may not be the first one that arises.

            "The eastern mystics" managed to do a lot of things, but often with a large amount of dedicated practice. Extremely practiced meditators can also reach intense states of concentration, equanimity etc, but the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.

            • ryandv 2 days ago

              > the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.

              I would posit that the only faculty developed in wielding such supportive tooling is skill at using those tools, when the real goal is the cultivation of character: the construction of a "virtual engine" [0] that governs action. Consider analogously that brain training apps' claims to improve general intelligence are specious at best, and don't seem to develop anything other than the ability to use the app.

              Since, in the case of the earring, this virtual governor has already been outsourced to an external entity, there is no need to cultivate one for one's self. Not only does this miss out on the personal development attained in said process, it also risks enthralling you to a construct of someone else's design; and one should choose carefully to which pantheon they pour libations, for its value systems might not always align with one's own.

              [0] https://www.meaningcrisis.co/episode-6-aristotle-kant-and-ev...

              • throwanem 2 days ago

                "Will certainly never align," I would say. But what matter? Long enough and "one's own" becomes specious, of course.

                • ryandv 2 days ago

                  "The king is dead; long live the king!" is the feudal manifestation of the archetype of the slain and resurrected god. In similar fashion one's own virtual governor requires constant renewal and revision.

                  • throwanem 2 days ago

                    Don't you think Leary's term would land better here? Or have you avoided it precisely for such connotation? I remember having some trouble with that for a while in my twenties.

                    • ryandv 2 days ago

                      I have not actually familiarized myself with Leary's work. The closest approach would have been via Robert Anton Wilson and the rest of the ramifications through the occult and western esoteric corpus.

                      • throwanem 2 days ago

                        Well, Wilson is a preferable source for everything anyway. Less credulous, and so far as I know all he ever sought to sell was books.

            • throwanem 2 days ago

              It is strictly necessary not to have supportive tools in order to develop these skills. Sentience and the ability to learn from experience are all that is essentially required. Past that there are no crutches and no shortcuts, because you have mistaken for disability the refusal to grow.

    • Barrin92 2 days ago

      >can a similar device shift us to mental habits

      I think the moral of the story is that instead of reaching for something to "shift you", start doing the shifting. You are a living mind; rather than asking for some device to affect you, assert your own will. Don't avoid stress and conflict; embrace them, that's life. This instinctive demand for some therapeutic, external helper is what's wrong in the story: people craving to be put into passivity.

      This need not even be a technology. The moral version of this could be some priest lecturing you on good and evil, or some paternalistic boss making your decisions for you; the crux here is submission, being acted upon rather than actively embracing your own nature.

  • kybernetikos 2 days ago

    I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available. I would say it's the goals that are important, not the limited way in which I work out how to achieve them.

    It would certainly be horrifying if I were slowly tricked into giving up my goals and values, but that doesn't seem to be what is happening in this story.

    Perhaps if I were to put the earring on it would tell me it would be better for me to keep wearing it.

    • ToValueFunfetti a day ago

      You surrender your self in exchange for your goals. With the right goals, that could be a worthy sacrifice. But of course it is a sacrifice.

      Imagine doing a crossword while a voice whispers the correct letter to enter for each cell. You'd definitely finish it a lot faster and without making mistakes. Crossword answers are public knowledge, and people still work them out instead of looking them up. They don't just want to solve them; they want to solve them themselves. That's what is lost here.

      • kybernetikos a day ago

        This is associating the self with the thing that decides how best to achieve goals (the earring / the part of your brain that works out how to achieve a goal), while I'm saying that I think I would associate the self much more with the thing that decides what the goals are.

        > they don't just want to solve them; they want to solve them themselves. That's what is lost here.

        I think in this story, the earring would not solve the crossword for you, if for some reason your goal was to solve the crossword yourself.

    • naasking 15 hours ago

      > I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available.

      The goals you form depend on your values, and your values are formed by trial and error: learning from what you like and what you regret. If the earring removes all chance of regret, then you also remove all chance of learning, and with it any possibility of forming values or meaningful goals. You effectively erase yourself; hence the atrophied brains.

  • Mithriil 6 hours ago

    Anyone else see an interesting analogy to "exploration vs exploitation" and its effects on the learning brain?

  • GarnetFloride 10 hours ago

    This sounds a lot like Jane from Speaker for the Dead and the rest of the series. The main character had an earring connected to an AI named Jane that helped a lot.

  • kens 2 days ago

    I read a science fiction story (maybe 40 years ago in Analog?) about a somewhat similar device that provided life guidance. This device would detect if the choice you were currently making would likely result in your death and would flash a red light to warn you. (Something about using quantum multi-worlds to determine if you die.) Does this story ring a bell with anyone?

  • y-curious 2 days ago

    This reminds me of another story that I saw posted on HN and has provided lots of fodder for idle conversations: Manna[1]. It's a less mystical version of the whispering earring.

    1: https://marshallbrain.com/manna1

    • kirtakat 2 days ago

      This is the story I keep thinking back to with the rise of LLMs, but I could not think of the name for it, so thank you!

  • woodruffw a day ago

    This parable reminds me a bit of Nozick's "tale of the slave"[1] in its rhetorical sleight of hand: the reader is meant to be mesmerized by the slow transition to an "obvious" conclusion, which obscures the larger inconsistency.

    In both cases, the outcome is only convincing if the story makes you forget your grounding: democracy isn't slavery (contra Nozick) and the earring is clearly not always right unless you're the most basic kind of utility monster.

    [1]: https://rintintin.colorado.edu/~vancecd/phil215/Nozick.pdf

  • gsmt a day ago

    This reminds me a lot of what was described in the book "The Feminine Mystique." There, Betty Friedan described "the problem that has no name": the sense of emptiness and dissatisfaction experienced by women who, by all external measures, were living the ideal life: successful, comfortable, with all needs met by their husbands.

    Both cases show how following an externally provided script, even one that reliably produces "good" outcomes, can lead to a hollowed-out sense of self.

  • AndrewDucker 2 days ago

    It's a classic, and the recent rise of AI will hopefully make it a more widely-known one.

  • adamgordonbell 2 days ago

    Wow, small world: I just made a podcast episode about the dangers of turning your brain off when using agentic coding solutions, and referenced the whispering earring as my metaphor.

    I feel like if you use the agentic tools to become more ambitious, you'll probably be fine. But if you just work at a feature factory where you crank out things as fast as you can, AI coding is going to eat your brain.

    Link: https://corecursive.com/red-queen-coding/#the-whispering-ear...

  • munchler 2 days ago

    There’s a good Rick and Morty episode with a similar premise: a crystal that shows how the user will die in the future. Morty uses it mindlessly to guide him to the fate he thinks he wants, but there are some unintended consequences.

    https://rickandmorty.fandom.com/wiki/Death_Crystal

  • gnramires 2 days ago

    That's a cute story; I certainly like its tone of mystery.

    However, the premise seems a bit wrong (or at least the narrator is wrong). If your brain actually degenerates from wearing the earring (and is no longer used in daily life, acting only reflexively), the premise that you are happiest from following it might be flat out wrong. I think happiness (I tend to think in terms of well-being, which let's say ranks every good thing you can feel, by definition -- and assume the "good" is something philosophically infinitely wise) is probably something like a whole-brain or at least a-lot-of-brain phenomenon. It's not just a result of what you see or what you have in life. In fact I'm sure two people can have very similar external conditions and wildly different internal lives (for an obvious example, compare the bed-ridden man who spends his day in beautiful dreams with the one who is depressed or in despair).

    What the earring seems to do is put you in situations where you would be the happiest, if only you were not wearing it.

    The earring that actually guides you toward a better inner life perhaps offers only very minimal and strategic advice. Perhaps that's what the 'Lotus octohedral earring' does :)

    • jerf 2 days ago

      Suppose you had a concrete definition of "happiness", as some seek a concrete definition of "consciousness".

      That implies that it could be maximized. Perhaps along a particular subdimension of a multidimensional concept, but still, some aspect of it could be maximized.

      What would such a maximization look like in the extrema?

      Could a benzene ring be the happiest thing in the universe? It's really easy to imagine some degenerate case like that coming to pass.

      You can generally play this game along any such dimension; "consciousness", "agony", "love"... if you have a definition of it, in principle you can minmax it.

      • gnramires a day ago

        Also, more to the point of your observation: we should indeed be very careful about any extreme and any maximization, because I presume that when we maximize hard we tend to bump into the limitations of the metric or theories employed. So we should only maximize up to a region of fairly high philosophical confidence, and this is why we need progress in philosophy, psychology, philosophy of art, philosophy of culture, neurophilosophy, etc., in lockstep with technological progress: technology tends to allow very easy maximization of simplified models of meaning, which may rapidly break down.

        I think one example might be that in medieval times, maximizing joy and comfort could be a pretty good heuristic in a harsh life of labor. These days we perhaps have to actually seek out some discomfort now and then, otherwise we'd be locked in our homes or bed-ridden, given all the affordances some of us have; we have to force ourselves to exercise and not eat comfort food all the time, etc. I think some hard drugs are a good example as well: a kind of technology that allows maximizing desire/pleasure in a way that is clearly void and does not seem associated with good overall experiences long term. An important fact is that our desires do not necessarily follow what is good; our desires are not omniscient/omnibenevolent oracles (they're simply a limited part of our minds).

        We need to put thought/effort into discovering and then enacting what is good in robust, wise, careful (but not too careful), etc. ways. Let's build an awesome society and awesome life for all beings :)

      • gnramires a day ago

        Good question, and I've spent a few years investigating this sort of thing :)

        It led me to investigate formalizing ethics, and whether that would even be possible (so we don't fall into traps like the ones you've mentioned).

        I think I've gotten pretty good results which I've sketched here: https://old.reddit.com/r/slatestarcodex/comments/1iv1x1m/the...

        More to come. In summary, I became confident that we can, if we're careful, know those things and do something like 'maximize happiness' (as I said, I prefer more general terms than happiness! There's a whole universe of inner experiences beyond just the stereotypical smiley person, I think -- I tend to think of 'maximizing meaning' or 'maximizing well-being').

        The basic fact that allows this process to make sense, I guess, is that our inner experiences are real in some sense, and there are tools to study them. Not only are our inner lives real, they are part of the world and its causal structure. We can understand (in principle almost exactly, if we could precisely map our minds and brains) what causes what feelings, what is good and what isn't (by generalizing and philosophically querying/testing/red-teaming/etc.), and so on for every facet of our inner worlds.

        In fact, this (again in principle) would allow us to make definite progress on what matters, which is our inner lives[1]. I think Susanne Langer put it incredibly well, on the primacy of experiences as the source of meaning:

        "If nothing is felt, nothing matters" (Susanne K. Langer)

        This is an experimental fact: we as conscious beings experimentally see that it is true. So in a way the mind/brain is kind of like a tool which allows us to perceive reality (with some unreliability and limitations that can be worked with), in particular inner reality.

        We can actually understand (with some practical limitations) the world of feelings and what matters. To do that, we simply experiment, collect evidence and properties about feelings and inner lives, try to build theories that are consistent and robust to philosophical (that is, logical in a high-level sense) objections; and then we simply do what is best, or if necessary try out a bunch of ways of life and live out the best way according to our best theories.

        ---

        Addendum: Let's take the benzene ring as an illustrative example of our procedure. Someone claims: "a benzene ring is the happiest thing in the Universe, and we therefore must turn everything into a sea of benzene rings. Destroy everything else." Is that claim actually true? Let's explore.

        It isn't, I claim, if "nothing is felt, nothing matters". When you are asleep (and not dreaming or thinking at all, let's suppose) or dead, you don't feel anything. Now, thoughts are, and must be, associated with activity in our brain. No information flowing and no brain activity, no thoughts. No thoughts, no inner life. Moreover, thoughts require a neural (and logical) infrastructure to arise. It's logically consistent with how we don't observe ourselves as rocks, gas clouds, mountains, benzene molecules, or anything else: we observe ourselves as mammals with actually large brains. There are immensely more rocks, gas particles, benzene molecules, etc. than there are mammals in the universe. Yet we experience ourselves as mammals. Benzene molecules, rocks and gas clouds just don't have enough structure to support minds and experience happiness.

    • indoordin0saur 2 days ago

      The tone of mystery is very much Jorge Luis Borges' writing style. My take is that it is probably a kitschy and playful riff on Borges' style, at the least.

  • f00lsg0ld 2 days ago

    > It is well that we are so foolish, or what little freedom we have would be wasted on us.

    The fool has the possibility for success because he is willing to try. If he knew better he would not even make the attempt.

  • throwanem 2 days ago

    He warned himself?

  • nullc a day ago

    Close but no cigar. It turns out the earring doesn't have to be right, or better than a person would do on their own, to snare people and atrophy their brains... people are earring-ing themselves today with fairly garbage LLM models, because it's easier and because when the model screws up, many people don't suffer any ego loss.

  • JohnKemeny 2 days ago

    [flagged]

    • rwnspace 2 days ago

      He has some great essays and research pieces and has fostered a generally nice community of people who grew out of LessWrong. There aren't many places online to talk about those things in a certain way without it devolving rapidly.

      • throwanem 2 days ago

        Strangely, a popular formulation of utilitarian ethics attracts utilitarian ethicists, some of whom were eugenicists and the like already by inclination, the rest merely having become so under the suasion of this fringe theory's false axioms.

        When your scheme of rules for how human societies should run does nothing to exclude or even discourage the worst atrocities of human history - when that scheme is fairly evaluable on its own terms as declaring those atrocities insufficient! - you've already made a catastrophically terrible mistake. To advocate it thereafter through persuasion rhetorical and otherwise is contemptible, but unsurprising.

        • klipt 2 days ago

          Anything that causes huge suffering, like forcing eugenics on people, is obviously not utilitarian because suffering has negative utility.

          Unless you mean things like parents choosing to screen for Down syndrome, which is not what most people call "eugenics" since it's completely voluntary.

          • throwanem 2 days ago

            That's just, like, your utility function, maaaaaaan.

            I joke, but that actually is the problem. I mean, look at you! Even trying to disclaim eugenics you can't manage not to espouse it, just in the "positive" or "voluntary" or "new eugenics" or "liberal eugenics" variety that bothers people less than all the others.

            I mean, I get why a programmable system of ethics appeals to programmers, just as a mathematical one to mathematicians, and for like cause in both cases. But you are required to acknowledge reality has the permanent right of veto, not merely pay lip service to the concept of it possibly for the moment holding that privilege.

            • klipt a day ago

              So you're saying it should be illegal to test for Down syndrome in utero?

              Because that's a common practice in most developed countries. And it hasn't led to Nazi Germany in those countries.

              • throwanem a day ago

                I'm saying you should own yourself a eugenicist and at least be honest about that, rather than strive to advance a heterodox and unappealing, actually rather amoral and inhuman, ideology through instrumental deceit. You certainly should not do so on the backs of parents facing what I understand can be one of a lifetime's more difficult decisions.

                You know as well as I do suffering under utilitarianism has exactly the value the advocate of the moment cares to give it at the moment, whether that be negative, neutral, or vastly to the greater good. Why even attempt such a trivially obvious lie?

                • klipt a day ago

                  > parents facing what I understand can be one of a lifetime's more difficult decisions.

                  Do you believe that parents who choose to terminate a downs fetus are evil/utilitarian?

                  Or are the doctors who perform the testing/abortion the evil/utilitarian ones?

                  Or the lawmakers who allow such testing/abortion to be legal?

                  Where exactly is the responsibility for being evil/utilitarian in your mind?

                  • throwanem a day ago

                    What? How does any of this follow from anything I've said? What's utilitarian about demanding a scapegoat for something that isn't even indictable?

                    I'm not criticizing parents' decisions, but yours. This specifically includes your using the sorrow of others, in this case parents faced with a harrowing dilemma, as an excuse for your own behavior, rather than demonstrate anything resembling the courage of your supposed convictions.

                    • klipt a day ago

                      > I'm not criticizing parents' decisions, but yours.

                      I don't know man, if you think it's fine for parents to abort downs babies, I think that means you're the utilitarian.

                      You can try to deny it, but in your heart of hearts you think it's fine to value avoiding the inconvenience of a downs baby higher than the value of a fetus's life.

                      That's pretty damn utilitarian.

                      • throwanem a day ago

                        > I don't know man, if you think it's fine for parents to abort downs babies, I think that means you're the utilitarian.

                        > You can try deny it, but in your heart of hearts you think it's fine to value avoiding the inconvenience of a downs baby higher than the value of a fetus's life.

                        > That's pretty damn utilitarian.

                        Thank you for conceding that utilitarianism trivially entails arrogating unto oneself the right to decide universally who lives and who dies, in quite literally every imaginable case - this being obviously true to so reflexive and unreflected-upon an extent that you can only conceive of even an overtly hostile and disdainful interlocutor arguing he should instead be given that power, rather than that no one should.

                        Is there anything you'd care to add to that, or are you content with having revealed your vicious ideology in all its bare-fanged, blood-soaked glory?

                        • klipt a day ago

                          > be given that power, rather than that no one should

                          So you believe that red states which disallow aborting downs babies are more moral than blue states that allow aborting downs babies?

                          Since the red states are the only ones that try to ensure that "no one has that power" (to abort downs babies)

                          • throwanem a day ago

                            Would you like at any point to argue claims of your own, rather than inventing ones to falsely attribute to me? Not that you'll disembarrass yourself at this point, but one would hope to see you show the sense at least to stop digging.

                            • klipt a day ago

                              You're not making any coherent points. You vaguely refer to aborting downs fetuses as evil and "full of bloodshed" but then you're unwilling to support laws that would prevent that.

                              Do you actually believe in anything except misunderstanding what utilitarian means?

                              • throwanem a day ago

                                I'm not going to answer arguments you make up and assign me as though they were my own. Do people usually tolerate that in your experience?

                                • klipt a day ago

                                  Look in the mirror pal. You've been nothing but rude and assuming this whole thread

                                  • throwanem a day ago

                                    Well, I haven't been willing to take you at your word when you showed up to tell me how wrong I am based on nothing but your say-so. Sure, I'll give you that.

                                    Were you not expecting to have to convince anyone? If that really is so, then again I have to ask, do you imagine this sort of thing normal? Are you in the habit of letting harangue stand in for conversation in ordinary life also, or is this a special occasion?

                                    • klipt a day ago

                                      Okay, tell me what your moral framework is, and I'll tell you how it's just a variant of utilitarianism.

                                      • throwanem a day ago

                                        For the sake of this argument, let's assume my moral framework is identical with that of a priest of Tezcatlipoca, at the height of the Aztec Empire. Astonish me.

                                        • klipt 11 hours ago

                                          > For the sake of this argument, let's assume my moral framework is identical with that of a priest of Tezcatlipoca, at the height of the Aztec Empire. Astonish me.

                                          Easy, your utility function is an indicator on WWPTD (What Would a Priest of Tezcatlipoca Do)?

                                          +1000 when your actions are in accordance with a priest of Tezcatlipoca, and -1000 when they are not.

                                          Like string theory, you can make utilitarianism fit anything. So on its own, it is neither good nor evil.

                                          It entirely depends on which utility function.

                                        • JohnKemeny a day ago

                                          What began as a complaint about HN’s interest in Scott Alexander devolved into a prolonged, hostile, and circular argument about whether certain reproductive choices are a form of eugenics, whether that’s compatible with utilitarianism, and whether utilitarianism itself is morally bankrupt.

                                          Neither side persuades the other, and the thread becomes more about rhetorical sparring than the original topic.

                                          • throwanem 19 hours ago

                                            > There aren't many places online to talk about those things in a certain way without it devolving rapidly.

                                            Now we have an illustration of precisely what that means, more or less entirely in spite of those irritated into furnishing it.

                                            Oh, I understand why rhetoric gets a bad name. No fool ever likes being made to look foolish. That's worth doing, in public, as often as possible, with the kind of person it takes to look utilitarianism full in the face, 'repugnant conclusion' and all, and still embrace it.

        • rwnspace a day ago

          As it happens I'm neither an EA nor much of a utilitarian, in the traditional sense, probably closer to a Christian "post-rat". I'd be hard pressed to say that it's worth killing a bunch of people to say, save the universe. I still have had a good time reading Scott and occasionally engaging with other people in the community.

          • throwanem 19 hours ago

            > I'd be hard pressed to say that it's worth killing a bunch of people to say, save the universe.

            Well, thank God for that, at least. Do you honestly not realize how you sound? That's a genuine question, if not an especially friendly one. You can answer it out loud if you like, but no one really needs you to.

            I strongly suspect we agree on nearly nothing, including the moral value of your desiderata. But it makes some sense out of how people in this community go dangerous, if that's the sort of "architecture astronaut" philosophy you people are pairing with the assumption you can think yourselves out of anything and everything including human imperfection and emotion. The lack of oxygen up there has gone to your head.

            • rwnspace 19 hours ago

              While I'm detecting you feel strongly about trolley problems, I think it's fine to discuss them, and I really don't see why you need to try and shame me for providing an example of where I tend to sit on them.

            • rwnspace 17 hours ago

              Right, it's not like such "do this awful thing or do that awful thing" scenarios are constantly relevant in medical, governmental, or, dare I say, military decisions.

              You need to start actually having conversations with real individuals. I was giving you a chance, but I'm not bothering with it any more, except to say this: I am not "you people", I am not whatever you are projecting me to be, at all, least of all someone who lacks moral disgust. Being capable of engaging in deliberate thought about awful things does not make me awful too, and I have no idea why you think that Scott Alexander is somehow the boogieman in the current ethico-political landscape; go look at Curtis Yarvin.

              • throwanem 17 hours ago

                > medical, the government, or dare I say, military decisions

                Which of these are you making?

                > Because I am capable of engaging in deliberate thought about awful things does not make me awful too

                Are you sure? Being able to entertain ideas you don't agree with is necessary, but there is a sting in the tail. There was an old warning about that, something about how it looks back. I don't believe people nearly often enough understand what was meant. I don't believe you do.

                > I have no idea why you think that Scott Alexander is somehow the boogieman in the current ethico-political landscape, go look at Curtis Yarvin.

                Yarvin's never been anything but a jackoff wannabe who dresses up in big bro's old rocker drag to indulge his humiliation kink on stage, and reliably struggles to achieve even that much.

                I'm more interested in people capable of being taken seriously, and "Alexander's" whole schtick is making the implausible, nonsensical, and unconscionable go down easy. Eugenics is one example. I don't hold against him that he's justified it once long ago; anyone in their 20s or early 30s is likely to say dumb shit occasionally. It's just that I see no compelling evidence he has in the interim developed an understanding of what was wrong with his prior thinking, only learned in the manner of an incompletely socialized child what makes people angry to hear talked about. Unfortunately, he is no child; like anyone his actions have consequences in the world, all the more in his case for his outsized profile and persuasive qualities. That makes him a somewhat dangerous person in his own right, and as anyone else's 'useful idiot' potentially a good bit more so. As such he merits interest, which as a public figure he also recruits. And, of course, it is always interesting to observe who lately seems to be recruiting him, which appears pretty simple to do.

                What a shame. As an author of fiction the man has a genuine gift, I think, yet he will always in the end be "better known for other work."

                • rwnspace 15 hours ago

                  Again you speak completely arrogantly, you have no idea what decisions I've taken in my life, no, none of the list, and yes, ones that involve significant threat to life. I specifically did not use the word "entertain" for those ideas because I have firsthand experience of the aforementioned sting. There's a reason I described myself as "post-rat". Yarvin was taken seriously for quite a time and is far more well-aligned with the sorts of people and views you're alluding to. Scott's Jewish, for god's sake.

                  I can only conclude you're a troll with far too much time on your hands. Goodbye.

                  • throwanem 15 hours ago

                    You ask me explicitly to address myself to the individual, and then object when I do so. I was trying to do you a favor with "don't agree with," and I have no idea what anyone's ethnicity is meant to do with anything or why you bring it up, but according to you I'm arrogant and unreasonable. You want credit for not being a "rationalist" or an "EA" any more, now that the other Scott has officially declared those words smell funny, while changing only your taste in axioms and leaving the ubiquitously and shamelessly tendentious method of ratiocination wholly intact. You call yourself a Christian and know nothing of humility. Sure. So long.

    • mock-possum 2 days ago

      Guess we’ll have to just take your word for it - I found this one to be a nice little read, reminds me a bit of Borges.

      • JohnKemeny 2 days ago

        Hey, don't take my word for it. Do you own research into worse ideologies you'd like to adapt.

        I'm just planting a seed.

        • JohnKemeny a day ago

          Don't take my word for it. Do your own research into whose ideologies you'd like to adapt.(*)

    • dafelst 2 days ago

      What is EA in this context?

      • y-curious 2 days ago

        I had to ask AI (ironically); it means Effective Altruism in this context. I'm not really sure where the parent's hate for EA comes from, but I don't hang out in those circles.

        • erikerikson 2 days ago

          You are correct about what EA references. There was corruption in the EA community (e.g. Sam Bankman-Fried) and some extremist factions as well. It was fashionable for a bit to point to the worst sections and dismiss the whole movement, which has largely centered on being financially efficient in improving the world: for example, supporting malaria nets, direct cash transfers, and other basically non-wasteful altruism.

        • throwanem 2 days ago

          The social scene has an extremely vindictive vibe, very much suited to such soi-disant prophets. If you need anything from these people, my sense has always been, it isn't really safe to even let them think you might someday harbor the desire to cross them.

          Of course butter wouldn't melt in their mouths to hear them tell it, but when ever in a case like this isn't that true?

        • AnimalMuppet 2 days ago

          The hate for EA comes from a bug in it.

          If you put your time horizon out far enough, you can calculate an almost infinite positive value as the end result of your actions today. From an almost infinite positive value, you can justify doing almost infinite amounts of damage now in order to reach that golden future.

          But like most "ends justify the means" philosophies, we never seem to actually get the claimed ends, but we do get the damage from the means. If you do terrible things to create a wonderful future, your theoretical wonderful future never seems to show up, but you still do the terrible things.

          I'm not the one you were asking, but I think this summarizes most peoples' objection to EA.

          The problem is that EA isn't true to its label. If you really wanted to do effective altruism, you'd make sure that it actually works. I could get behind that.

  • tempodox 2 days ago

    I would recommend Steely Dan’s “Green Earrings” instead. No whispering required!

    https://www.youtube.com/watch?v=3wvH1UzhiKk

    And the original is fully analog.