A new book about the origins of Effective Altruism

(newrepublic.com)

91 points | by Thevet 17 hours ago

121 comments

  • keiferski 16 hours ago

    The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause." There is really no element of personal virtue in the way that virtue ethics has; it's just pure calculation.

    It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.

    That's not to say that all the work they're doing/have done is bad, but it's not really surprising that bad actors attached themselves to the movement.

    • internet_points 2 hours ago

      "I Work For an Evil Company, but Outside Work, I’m Actually a Really Good Person"

      https://www.mcsweeneys.net/articles/i-work-for-an-evil-compa...

    • nonethewiser 16 hours ago

      >The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause."

      I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e. ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing you being a bad person, and the donation target is definitely NOT unimportant.

      • some_guy_nobel 15 hours ago

        You claim OP's interpretation is inaccurate, while it tracks perfectly with many of EA's most notorious supporters.

        Given that contrast, I'd ask: what evidence do you have that OP's interpretation is incorrect, and what evidence do you have that yours is correct?

        • RobinL 15 hours ago

          > many of EA's most notorious supporters.

          The fact they're notorious makes them a biased sample.

          My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:

          - For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's

          - We should carefully decide which charities to give to, because some are far more effective than others.

          That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.

          I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere

          • mcv 4 hours ago

            I agree. I think the criticism of EA's most notorious supporters is warranted, but it's criticism of those notorious supporters and the people around them, not the core concept of EA itself.

            The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you of the bad things you do with the rest, or the bad things you did to get rich in the first place.

            Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike breaking, and deaths.

            Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.

          • Eddy_Viscosity2 11 hours ago

            I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", in that that's the vibe it gives off. All truly altruistic acts would aim to be effective, otherwise it wouldn't be altruism - it would just be waste. Not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else who just gives money to charity A, when they could have been 13% more effective by sending the money directly to this particular school in country B with the condition they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.

            • estearum 7 hours ago

              A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.

              > they could have been 13% more effective

              If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.

              The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).

              The sheer scale of the spread is the impetus behind the entire train of thought.

              • mcv 3 hours ago

                It's absolutely worth looking at how effective the charities you donate to really are. Some charities spend a lot of money on fundraising to raise more funds, and then reward their management for raising so much, with only a small amount being spent on actual help. Others are primarily known for their help.

                Rich people's vanity foundations, especially, are mostly a channel for dodging taxes and channeling corruption.

                I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.

                And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?

                But ultimately, any charity that actually gets done is going to be more effective than holding off because you're not sure how optimal it is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.

        • btilly 8 hours ago

          The OP's interpretation is an inaccurate summary of the philosophy. But it is an excellent summary of the trap that people who try to follow EA can easily fall into. Any attempt to rationally evaluate charity work can instead wind up rationalizing what you already wanted to do, settling for the convenient and self-aggrandizing "analysis" rather than a rigorous one.

          An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.

          But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.

          EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example look at how influential someone like Samuel Bankman-Fried was able to be, before his fraud imploded.

        • socalgal2 5 hours ago

          this feels like “the most notorious atheists/jews/blacks/whites/christians/muslims are bad, therefore all atheists/jews/blacks/whites/christians/muslims are bad”

        • cortesoft 6 hours ago

          Well, in order to be a notorious supporter of EA, you have to have enough money for your charity to be noticed, which means you are very rich. If you are very rich, it means you have to have made money from a capitalistic venture, and those are inherently exploitive.

          So basically everyone who has a lot of money to donate has questionable morals already.

          The question is, are the large donators to EA groups more or less 'morally suspect' than large donors to other charity types?

          In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.

          • nl 6 hours ago

            > you have to have made money from a capitalistic venture, and those are inherently exploitive.

            You say this like it's fact beyond dispute, but I for one strongly disagree.

            Not a fan of EA at all though!

        • stickfigure 4 hours ago

          > tracks perfectly with many of EA's most notorious supporters

          Just wait until you find out about vegetarianism's most notorious supporter.

        • jandrese 15 hours ago

          It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.

          • glenstein 15 hours ago

            The problem is that that creates a kind of epistemic closure around yourself, where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.

            I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.

            • Eisenstein 15 hours ago

              When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.

              • glenstein 14 hours ago

                I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with the subject matter, which can be understood independently.

                I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.

                • Eisenstein 14 hours ago

                  I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.

                  • mitthrowaway2 3 hours ago

                    Some very bad people believe that the sky is blue. Does that incline you towards believing instead that it's green?

                    • Eisenstein 2 hours ago

                      My claim is not that people abandon beliefs but that they abandon labels when the label takes on connotations they do not want to be associated with.

                  • glenstein 14 hours ago

                    You risk catastrophe if you let social dynamics stand in for truth.

                    • Eisenstein 13 hours ago

                      You risk catastrophe if you ignore social indicators as a valid heuristic.

                  • int_19h an hour ago

                    If people really believe in something, it stands to reason that they aren't willing to just give up on the associated symbolism because someone basically hijacked it.

                    Coincidentally, libertarian socialism is also a thing.

      • WhyOhWhyQ 4 hours ago

        The OP and your reply are basically guaranteed text on the page whenever EA comes up (not that your reply is unwarranted, or the OP's message either, but it is interesting that these are guaranteed comments).

      • glenstein 15 hours ago

        I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.

        I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.

      • klustregrif 15 hours ago

        > EA is about donating your money effectively

        For most, it seems EA is an argument that despite no charitable donations being made at all, and despite gaining wealth through questionable means, it’s still all ethical because it’s theoretically “just more effective” if the person keeps claiming they will, at some point in the far future, put money toward these hypothetical “very effective” charitable causes, which never seem to have materialized yet, and which of course shouldn’t be pursued “until you’ve built your fortune”.

        • Aunche 15 hours ago

          If you're going to assign a discount rate for cash, you also need to assign a similar "discount rate" for future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as large a positive compounding return.
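
          A minimal sketch of that comparison, using made-up amounts and rates (illustrative only, not EA or GiveWell figures): if the benefit of giving now compounds at least as fast as invested cash, "invest first, give later" gains nothing.

              def future_value(amount, annual_rate, years):
                  # Compound `amount` at `annual_rate` for `years` periods.
                  return amount * (1 + annual_rate) ** years

              donation = 10_000        # hypothetical dollars available today
              market_return = 0.05     # assumed return if you invest and donate later
              benefit_growth = 0.05    # assumed compounding of benefits from giving now
              years = 20

              give_later = future_value(donation, market_return, years)
              give_now = future_value(donation, benefit_growth, years)

              print(f"give later: ${give_later:,.0f} donated in year {years}")
              print(f"give now:   ${give_now:,.0f} of benefit-equivalent value by year {years}")
              # With equal rates the two strategies tie; if benefits compound faster
              # than the market, delaying the donation is strictly worse in this model.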

      • ghurtado 15 hours ago

        I don't see anything in your comment that directly disagrees with the one that you've replied to.

        Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath and as long as the 10% goes to charity (effectively) they are considered morally righteous.

        It's the 21st century version of Papal indulgences.

    • alsetmusic 3 hours ago

      That guy who went to jail believed in it, so it has to be good.

      I hope SBF doesn’t buy a pardon from our corrupt president, but I hope for a lot of things that don’t turn out the way I’d like. Apologies for USA-centric framing. I’m tired.

    • phantasmish 15 hours ago

      I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.

      Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it’s so consistent that anyone thinking they’ve figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I’d say.

      • glenstein 15 hours ago

        I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.

        I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.

        You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.

      • jrochkind1 15 hours ago

        What does "virtue ethics" mean?

        • TimorousBestie 15 hours ago

          The best statement of virtue ethics is contained in Alasdair MacIntyre’s _After Virtue_. It’s a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what the unitary “good” is, and that ancient notions of “virtues” (some of which have filtered down to the present day) can capture facets of that good better.

          The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.

          • glenstein 15 hours ago

            Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.

            And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.

            • TimorousBestie 14 hours ago

              MacIntyre doesn’t really involve himself with the evolutionary parts. He tends to be oriented towards historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.

              > Utility has the advantage of sustaining moral care toward people far away from you

              Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.

        • keiferski 15 hours ago

          One of the three traditional European philosophy approaches to ethics:

          https://en.wikipedia.org/wiki/Virtue_ethics

          EA being a prime example of consequentialism.

          • phantasmish 15 hours ago

            … and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)

            Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.

    • Sporktacular 27 minutes ago

      That's not what it's about. Exploiting people to make money is not fine. Causing harm while mitigating it elsewhere defeats the point. Giving is already about the kind of person you are.

    • anonymousiam 7 hours ago

      EA should be bound by some ethical constraints.

      Sam Bankman-Fried was all in with EA, but instead of putting his own money in, he put everybody else's in.

      Also his choice of "good causes" was somewhat myopic.

      • jahnu 3 hours ago

        Some might suggest that he wasn't an EA at all but just used it for cover.

    • Aunche 15 hours ago

      > It's the perfect philosophy for morally questionable people with a lot of money.

      The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.

    • 1vuio0pswjnm7 9 hours ago

      There's the implication that some altruism may not be "effective"

      • btilly 8 hours ago

        What makes it absurd?

        If I want to give $100 to charity, some of the places that I can donate it to will do less good for the world. For example Make a Wish and Kids Wish Foundation sound very similar. But a significantly higher portion of money donated to the former goes to kids, than does money donated to the latter.

        If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.

        Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.

        • keiferski 4 hours ago

          I don’t think anyone has an issue with being efficient with donation money. But it isn’t called Effective Giving.

          The conclusion that many EA people seemed to reach is that keeping your high-paying job and hiring 10 people to do good deeds is more ethically laudable than doing the thing yourself, even though it may be inefficient. Which really rubs a lot of people the wrong way, as it should.

      • 1vuio0pswjnm7 6 hours ago

        https://www.sierraclub.org/sierra/trouble-algorithmic-ethics...

        "But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."

    • downrightmike 15 hours ago

      It's basically the same thing as the church selling indulgences. It didn't matter if you stole the money: pay the church and go to heaven.

    • nxor 15 hours ago

      SBF has entered the chat

      • AgentME 15 hours ago

        I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.

    • ChadNauseam 4 hours ago

      You'll never find a single prominent EA saying that because it's 100% made up. Maybe they'll remark that from an academic perspective it's a consequence of some interpretations of utilitarianism, a topic some EAs are interested in, but no prominent EA has ever actually endorsed or implied the view you put forward.

      To an EA, what you said is as laughable of a strawman as if someone summarized your beliefs as "it makes no difference if you donate to starving children in Africa or if you do nothing, because it's your decision and neither is immoral".

      The popularity of EA is even more obvious than what you described. Here's why it's popular. A lot of people are interested in doing good, but have limited resources. EAs tried to figure out how to do a lot of good given limited resources.

      You might think this sounds too obvious to be true, but no one before EAs was doing this. The closest thing was charity rankings that just measured what percent of the money was spent on administration. (A charity that spends 100% of its donations on back massages for baby seals would be the #1 charity on that ranking.) Finding ways to do a lot of good given your budget is a pretty intuitively attractive idea.

      And they're really all about this too. Go read the EA forum. They're not talking about how their hands are clean now because they donated. They're talking about how to do good. They're arguing about whether malaria nets or malaria chemotreatments are more effective at stopping the spread of the disease. They're arguing about how to best mitigate the suffering of factory farmed animals (or how to convince people to go vegan). And so on. EA is just people trying to do good. Yeah, SBF was a bad actor, but how were EA charities supposed to know that when the investors that gave him millions couldn't even do that?

  • skybrian 5 hours ago

    The book is titled "Death in a Shallow Pond" and seems to be all about Peter Singer. (I don't see a table of contents online.)

    The way I first heard of Effective Altruism, I think before it was called that, took a rather different approach. It was from a talk given by the founders of GiveWell at Google. (This is going off of memory so this is approximate.)

    They were people working at a hedge fund who were interested in charity. They had formed a committee to decide where best to donate their money.

    The way they explained it was that there are lots of rigorous approaches to finding and evaluating for-profit investments. At least in hindsight, you can say which investments earned the most. But there's very little for charities, so they wanted to figure out a rigorous way to evaluate charities so they could pick the best ones to donate to. And unlike what most charitable foundations do, they wanted to publish their recommendations and reasoning.

    There are philosophical issues involved, but they are inherent in the problem. You have some money and you want to donate it, but don't know which charity to give it to. What do you mean by the best charity? What's a good metric for that?

    "Lives saved" is a pretty crude metric, but it's better than nothing. "Quality-adjusted life years" is another common one.

    Unfortunately, when you make a spreadsheet to try to determine these things, there are a lot of uncertain inputs, so doing numeric calculations only provides rough estimates. GiveWell readily admits that, but they still do a lot of research along these lines to determine which charities are the best.
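
    To make the "rough estimates" point concrete, here's a minimal, made-up sketch of that kind of spreadsheet math (the inputs are invented for illustration and are not GiveWell's actual model or numbers):

        # Each uncertain input is a (low, high) range; the output inherits the spread.
        cost_per_net = (4.0, 6.0)              # dollars per bednet delivered (hypothetical)
        nets_per_death_averted = (500, 1500)   # nets needed to avert one death (hypothetical)

        low = cost_per_net[0] * nets_per_death_averted[0]
        high = cost_per_net[1] * nets_per_death_averted[1]

        print(f"estimated cost per life saved: ${low:,.0f} to ${high:,.0f}")
        # Even this toy version shows why the result is a wide range rather than a
        # precise figure: the uncertainty in every input multiplies through.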

    There's been a lot of philosophical nonsense associated with Effective Altruism since then, but I think the basic approach still makes sense. Deciding where to donate money is a decision many people face! It doesn't require much in the way of philosophical commitments to decide that it's helpful to do what you can to optimize it. Why wouldn't you want to do a better job of it?

    GiveWell's approach has evolved quite a bit since then, but it's still about optimizing charitable donations. Here's a recent blog post that goes into their decision-making:

    https://blog.givewell.org/2025/07/17/apples-oranges-and-outc...

  • m463 7 hours ago

    You know, I wonder if this is an idea that has been twisted a bit by people who "took over" the idea, like Sam Bankman-Fried.

    I remember reading that the original founder of MADD (Mothers Against Drunk Driving) left because of this kind of thing.

    Lightner stated that MADD "has become far more neo-prohibitionist than I had ever wanted or envisioned … I didn't start MADD to deal with alcohol. I started MADD to deal with the issue of drunk driving".

    https://en.wikipedia.org/wiki/Mothers_Against_Drunk_Driving#...

  • hexator 16 hours ago

    I find it to be a dangerous ideology since it can effectively be used to justify anything. I joined an EA group online (from a popular YouTube channel) and the first conversation I saw was a thread by someone advocating for eugenics. And it only got worse from there.

    > A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.

    Yes, this just about sums it up. As a movement they seem to be attracting some listless contrarians who are entirely too willing to dig up old demons of the past.

    • mikkupikku 15 hours ago

      Agreed. It's firmly an "ends justify the means" ideology, reliant on accurately predicting future outcomes to justify present actions. This sort of thing gives free license to any sociopath with enough creativity to spin some yarn with handwavy math about the bad outcome their malicious actions are meant to be preventing.

    • nullc 16 hours ago

      > through rationalism,

      When they write "rationalism" you should read "rationalization".

      • chrisweekly 15 hours ago

        Yes! It's a crucial distinction. Rationalism is about being rational / logical -- moving closer to neutrality and "truth". Whereas to rationalize something is often about masking selfish motives, making excuses, or (self-)deception -- moving away from "truth".

      • XorNot 15 hours ago

        It's a variant of how you instantly know what a government will be like depending on how much "democracy" it puts in its name.

  • nonethewiser 15 hours ago

    Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:

    >Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence, careful reasoning, and personal values.

    What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.

    • ngruhn 15 hours ago

      > I think the initial assumption that that definition is good and harmless is just wrong.

      Why? The alternative is to donate to sexy causes that make you feel good:

      - disaster relief, and then forget about it once it's not in the news anymore

      - school uniforms for children when they can't even do their homework because they can't afford lighting at home

      - a literal team of full-time bodyguards for the last member of some species

      • chemotaxis 15 hours ago

        That's a strawman alternative.

        The problem with "helping the most people in the most effective way" is these two goals are often at odds with each other.

        If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

        The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.

        And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.

        • zozbot234 2 hours ago

          There are EA initiatives that focus on helping locally, such as Open Philanthropy Project's US initiatives and GiveDirectly's cash aid in the US. Overall they're not nearly as good in terms of raw impact as giving overseas, but still a lot more effective than your average run-of-the-mill charity.

        • glenstein 14 hours ago

          >it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

          I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking, I think the real-world dynamics you mentioned don't map onto that. What's abstract in your examples is our grasp of the meaning of the impact on the people being helped. But it doesn't follow that the causes are fractional changes to large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look it up to recall what UNICEF even does), but it is still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who get them.

          So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that their stock recommendations are not that different from Carter Foundation or UN style initiatives.

          The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map on to your criticism.

    • Lammy 15 hours ago

      It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means because the entire world will somehow be a better place if I'm allowed to become Very Rich.

      Anyone who has to call themselves altruistic simply isn't lol

    • pfortuny 15 hours ago

      On one hand, it is an example of the total-order mentality which permeates society, and businesses in general: “there exists a single optimum”. That is wrong on so many levels, especially with regard to charities. ETA: the real world has local optima, not a single optimum.

      Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.

      ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.

      Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… “There is an optimum way, hence if you are not doing it, you must be corrected”.

  • protocolture 6 hours ago

    >here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems.

    TBH I am not like, 100% involved, but my first exposure to EA was a blog post from a notorious rich person, describing how he chose to drop a big chunk of his wealth on a particular charity because it could realistically claim to save more lives per dollar than any other.

    Now, that might seem like a perfect ahole excuse. But having done time in the NFP/Charity trenches, it immediately made a heap of sense to me. I worked for one that saved 0 lives per dollar, refused to agitate for political change that might save people time and money, and spent an inordinate amount of money on lavish gifts for its own board members.

    While EA might stink of capitalism, to me, it always seemed obvious. Charities that waste money should be overlooked in favor of ones that help the most people. It seems to me that EA has a bad rap because of the people who champion it, but criticism of EA as a whole seems like cover for extremely shitty charities that should absolutely be starved of money.

    YMMV

  • throw4847285 15 hours ago

    The fundamental problem is that Effective Altruism is a political movement that spun out of a philosophical one. If you want to talk about the relative strengths and weaknesses of consequentialism, go right ahead. If you want to assume consequentialism is true and discuss specific ethical questions via that framing, power to you.

    If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.

  • philipallstar 16 hours ago

    > In the past, there was nothing we could do about people in another country. Peter Singer says that’s just an evolutionary hangover, a moral error.

    This is sadly still true, given the percentage of money that goes to getting someone some help vs the amount dedicated to actually helping.

    • weepinbell 16 hours ago

      Certainly charities exist that are ineffective, but there is very strong evidence that there exist charities that do enormous amounts of direct, targeted good.

      givewell.org is probably the most prominent org recommended by many EAs that does and aggregates research on charitable interventions and shows with strong RCT evidence that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money to good charities like the ones GiveWell recommends massively improves people's lives.

      GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).

      It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".

      • gopher_space 15 hours ago

        > giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong

        When you see the return on money spent this way other forms of aid start looking like gatekeeping and rent-seeking.

        • weepinbell 14 hours ago

          GiveWell actually benchmarks their charity recommendations against direct cash transfers and will generally only recommend charities whose benefits are Nx cash for some N that I don't remember off the top of my head. I buy that lots of charities aren't effective, but some are!

          That said I also think that longer term research and investment in things like infrastructure matters too and can't easily be measured as an RCT. GiveWell style giving is great and it's awesome that the evidence is so strong (and it's most of my charitable giving), but that doesn't mean other charities with less easily researched goals are bad necessarily.

          • zozbot234 2 hours ago

            The Open Philanthropy Project is one major actor in EA that focuses mostly on "less easily researched goals" and riskier giving (but potentially higher-impact on average) than GiveWell.

    • cm2012 16 hours ago

      You can pretty reliably save a life in a 3rd world country for about $5k each right now.

      • tavavex 15 hours ago

        How? I'm curious because the numbers are so specific ($5000 = 1 human life), unclouded by the usual variances of getting the money to people at a macro scale and having it go through many hands and across borders. Is it related to treating a specific illness that just objectively costs that much to treat?

        • cm2012 15 hours ago

          Here is a detailed methodology: https://www.givewell.org/impact-estimates. It convinced me that $5k is a reasonable estimate.

          • bombcar 7 hours ago

            A weird corollary to this is that if you work for one of these charities, you’re paid in human lives (say you make $50k, that’s ten people who could have been saved).

            • lmm 6 hours ago

              That's an extremely weird way to think about it. The same logic applies to anyone doing any job - whatever money you spend on yourself could be spent saving lives instead, if you really want to think about it that way. There's no reason that people working for an effective charity should feel more guilty about their salaries than people working for any other job - if anything it's the opposite, since salaries usually do not reflect the full value of a person's work.

              • philipallstar an hour ago

                > That's an extremely weird way to think about it

                Perhaps, but it's exactly the type of thinking the article is describing.

    • jimbokun 15 hours ago

      Peter Singer is the LAST person I would go to for advice on morality or ethics.

  • matt3D 16 hours ago

    Is there a term for what I had previously understood Effective Altruism to be, since I don’t want to reference EA in a conversation and have the other person think I’m associated with these sorts of people?

    I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50USD/hour, then rather than doing a volunteer job that I could pay someone 25USD/hour to do, I simply do my job and pay for two people to volunteer.

    • throw4847285 15 hours ago

      That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework. Not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.

      Effective altruism is a political movement, with all the baggage implicit in that.

      • Vinnl 15 hours ago

        Is there a term for looking at the impact of your donations, rather than process (like percentage spent on "overhead")? I like discussing that, but have the same problem as GP.

        • edent 6 hours ago

          "Overhead" is part of the work. It's like saying you want to look at the impact of your coding, rather than the overhead spent on documentation.

          An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.

          • lmm 6 hours ago

            > An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.

            Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.

            • edent 5 hours ago

              > costs of fundraising

              How does a charity spend money unless people give it money?

              They need to fund raise. There's only so far you can get with volunteers shaking tins on streets.

              If a TV adverts costs £X but raises 2X, is that a sensible cost?

              Here's a random UK charity which spent £15m on fund raising.

              https://register-of-charities.charitycommission.gov.uk/en/ch...

              That allowed them to raise 3X the amount they spent. Tell me if you think that was unnecessary?

              Sure, buying the CEO a jet should start ringing alarm bells, but most charities have costs. If you want a charity to be well managed, it needs to pay for staff, audits, training, etc.

              • lmm 5 hours ago

                > If a TV adverts costs £X but raises 2X, is that a sensible cost?

                Maybe, but quite possibly not, because that 2X didn't magically appear, it came out of other people's pockets, and you've got to properly account for that as a negative impact you're having.

            • throw4847285 6 hours ago

              That's what an organization like Charity Navigator is for. Like a BBB for charities. I'm sure their methodology is flawed in some way and that there is an EA critique. But if I recall, early EA advocates used Charity Navigator as one of their inputs.

              • lmm 5 hours ago

                The "Program Expense Ratio" is pretty prominent in Charity Navigator's reports, and that's almost exactly a measure of "overhead".

  • Sporktacular 11 minutes ago

    The origins of EA were never in question, nothing new there. It was Peter Singer's work on maximising value for charitable outcomes. Comment section seems to be about something else altogether.

    Maybe a book clarifying what it really is is a good idea.

  • jimbokun 15 hours ago

    > Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity.

    Congratulations you rediscovered tithing.

    • skybrian 3 hours ago

      They deliberately copied tithing.

  • libraryofbabel 16 hours ago

    I expect the book itself (Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need, by David Edmonds) is good, as the author has written a lot of other solid books making philosophy accessible. The title of the article though, is rather clickbaity: it’s hardly “recovering” the origins of EA to say that it owes a huge debt to Peter Singer, who is only the most famous utilitarian philosopher of the late 20th century!

    (Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)

    • dang 16 hours ago

      Ok, we've de-recovered the origins in the title above.

  • metalcrow 7 hours ago

    A lot of these comments seem to be working from their own imagined definition of EA. It really sounds a lot like people judging Judaism because of what Bernie Madoff did.

    • Sporktacular 22 minutes ago

      Yes! Commenters seem to have jumped onto The Guardian's vibes about it rather than Singer's entirely reasonable logic.

  • cassepipe 7 hours ago

    I never expected EA to get so much flak in this comment section.

    Most comments read like a version of "Who do you think you are?". Apparently it is very bad to try to think rationally about how and where to give out your money.

    I mean, if rich people want to give their money away for good and are actually trying to do the work of researching whether it has an impact, instead of just enjoying the high-status feeling of the optics of giving to a good cause (see The Anonymous Donor episode of Curb Your Enthusiasm), what is it to you all?

    It feels to me like some parents wanting to plan the birth of their children, and all the people around are like "Nooo, you have to let Nature decide, don't try to calculate where you are in your cycle!!!"

    Apparently this is "authoritarian" and "can be used to justify anything" like eugenics, but also will end up "similar to communism", but also leads to "hyperindividualism"?

    The only way I can explain it is that no one wants to give even 1% of their money away, and they hate the people who make them feel guilty by doing so and saying it would be a good thing, so everyone is lashing out.

    • Sporktacular 18 minutes ago

      I think it's a case of judging a band by its fans. Enough dodgy billionaires have jumped on to create a poor image. Singer never said donating buys you a license to be evil.

  • mvkel 8 hours ago

    The ends do not justify the means

  • renewiltord 4 hours ago

    People get wrapped up in a lot of emotion about this but the idea seemed sound: you want to make some change in the world? It makes sense to spend your money to maximize the change you desire.

    The GiveWell objective is lives saved or QALYs or whatever. Others have qualia maximized or whatever. But the idea is entirely logical.

    I think part of the problem with popularization is that many people have complex objective functions, not all of which are socially acceptable to say. As an example, I want to be charitable in a way that grants me status in my social circle, where spending on guinea worm is less impressive than, say, buying ingredients for cookies, baking them, and giving the cookies to the poor.

    Personally I think that’s fine too. I know that some aspect of the charity I do (which is not effective, I must admit) has a desire for recognition and I think it’s good to encourage this because it leads to more charity.

    But for many people, encouraging stating one’s objective function is seen as a way to “unearth the objective functions of the ones with lesser motives” and some number of EA people do that.

    To say nothing of the fact that lots of people get very upset about the idea that “you think you’re so much better than me?” and so on. It’s an uphill climb, and I wouldn’t do it, but I do enjoy watching them do it because I get the appeal.

  • CactusBlue 16 hours ago

    > I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals.

    I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.

  • chaseadam17 16 hours ago

    Man, EA is so close to getting it. They are right that we have a moral obligation to help those in need but they are wrong about how to do it.

    Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.

    The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.

    Just try to help the people around you a bit more. If everyone did that, we'd be good.

    • mk12 15 hours ago

      If everyone did that, lots of people would still die of preventable causes in poor countries. I think GiveWell does a good job of identifying areas of greatest need in public health around the world. I would stop trusting them if they turned out to be corrupt or started misdirecting funds to pet projects. I don’t think everyone has to donate this way as it’s a very personal decision, nor does it automatically make someone a good person or justify immoral ways of earning money, but I think it’s a good thing to help the less fortunate who are far away and speak a different language.

    • PaulDavisThe1st 15 hours ago

      > Just try to help the people around you a bit more. If everyone did that, we'd be good.

      This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.

      It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.

      Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.

      • chaseadam17 15 hours ago

        If you live in a wealthy society it's possible to travel or move or get to know people in a different society and offer to help them.

        • skybrian 3 hours ago

          There might be a bit of a language barrier, so you’ll need a translator. Also a place to stay, people to cook for you, and transportation. The tourist infrastructure isn’t all that developed in the poorest areas.

          Tourism does redistribute money, but a lot of resources go to taking care of the tourists.

        • PaulDavisThe1st 13 hours ago

          The GP said:

          > Just try to help the people around you a bit more. If everyone did that, we'd be good.

          That's what I was replying to. Obviously, if you are willing to "do more", then you can potentially get more done.

    • keiferski 16 hours ago

      That's the thing though, if EA had said: find 10 people in your life and help them directly, it wouldn't have appealed to the well-off white collar workers that want to spend money, but not actually do anything. The movement became popular because it didn't require one to do anything other than spend money in order to be lauded.

      • phantasmish 15 hours ago

        Better, it’s a small step to “being a small part of something that’s doing a little evil to a shitload of people (say, working on Google ~scams targeting the vulnerable and spying on everybody~ Ads) is not just OK, but good, as long as I spend a few grand a year buying mosquito nets to prevent malaria, saving a bunch of lives!”

        Which obviously has great appeal.

    • jimbokun 15 hours ago

      What studies can you point to demonstrating your approach is more effective than donating to a GiveWell recommended non profit?

  • TimorousBestie 16 hours ago

    > . . . but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program.

    The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.

    Alas.

  • jmount 16 hours ago

    Effective Altruism and Utilitarianism are just a couple of the presentations authoritarians sometimes make for convenience. To me they decode simply as "if I had everything now, that would eventually be good for everybody."

    The arguments always feel to me too similar to "it is good Carnegie called in the Pinkerton's to suppress labor, as it allowed him to build libraries." Yes, it is good what Carnegie did later, but it doesn't completely paper over what he did earlier.

    • lesuorac 16 hours ago

      > The arguments always feel to me too similar "it is good Carnegie called in the Pinkerton's to suppress labor

      Is that an actual EA argument?

      The value is all at the margins. Like Carnegie had legitimate functional businesses that would be profitable without Pinkerton's. So without Pinkerton's he'd still be able to afford probably every philanthropic thing he did so it doesn't justify it.

      I don't really follow the EA space, but the actual arguments I've heard are largely about working in FANG to make 3x the money they'd make outside of FANG, which allows them to donate 1x~1.5x the money. Which to me is very justifiable.

      But to stick with the article. I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.

      • 8note 14 hours ago

        > I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.

        it could be though, if by first centralizing those billions, you could donate more effectively than the previous holders of that money could. the fraud victims may have never donated in the first place, or have donated to the wrong thing, or not enough to make the right difference.

        • JohnFen 14 hours ago

          "The ends justify the means" is a terrible, and terribly dangerous, argument.

          • Sporktacular 15 minutes ago

            But if it's a net positive, the point is made.

          • jmount 12 hours ago

            That is the point. Much clearer than I was. Thank you.

      • hobs 16 hours ago

        When you work for something that directly contradicts peaceful civil society, you are basically saying the mass murder of today is OK because it allows you to assuage your guilt by giving to your local charity - it's only effective if altruism is not your goal.

        • lesuorac 15 hours ago

          It still depends on the marginal contribution.

          A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting the peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net-negative for society because the janitor's marginal contribution towards that objective is 0.

          It might not be the best thing the janitor could do for society (as compared to running a soup kitchen).

      • Eisenstein 16 hours ago

        > Is that an actual EA argument?

        you missed this part: "The arguments always feel to me too similar"

        > The value is all at the margins. Like Carnegie had legitimate functional businesses that would be profitable without Pinkerton's. So without Pinkerton's he'd still be able to afford probably every philanthropic thing he did so it doesn't justify it.

        That isn't what OP was engaging with though, they aren't asking for you to answer the question 'what could Carnegie have done better' they are saying 'the philosophy seems to be arguing this particular thing'.

  • jmyeet 16 hours ago

    I'm leery of any philosophy that is popular in tech circles because they all seem to lead to eugenics, hyperindividualism, ignoring systemic issues, deregulation and whatever the latest incarnation of prosperity gospel is.

    Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.

    "What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.

    I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.

    • omnimus 15 hours ago

      Effective altruism and transhumanism are kinda the same thing, along with other stuff like longtermism. There is even a name for the whole thing: TESCREAL. Very slightly different positions invented, I guess, for branding.