By Adrian Liu
Effective altruism has many enemies, and while there are certainly philosophical arguments against it, much of the opposition is not intellectual but visceral.
At its core, effective altruism suggests we should seek to maximize the good we can do in the world. Notably, some of its proponents espouse “earning to give”: the influential idea that one of the best things to do with one’s life is to earn as much money as possible and donate it to the most effective causes.
But effective altruism is premised upon a utilitarian ethical theory that many find intuitively unpalatable. Why is this? I will suggest that the utilitarian arguments of effective altruism stretch certain ethical intuitions until they become unintuitive, and that — regardless of its philosophical strengths — this is where the theory will lose most of the people it tries to persuade.
According to the strictest forms of effective altruism, we should stop giving only when giving more would hurt us more than it would help others. Such an argument challenges the intuitive distinction between what is morally required and what is praiseworthy but not obligatory. It dramatically expands the circle of morally required actions, and so we might chafe under the glare of moral condemnation that we feel from the effective altruist.
For instance, effective altruism suggests that not donating to disaster relief is morally just as bad as walking past a drowning child in a pond without saving them. We might bristle at this claim. A theory that implies I am morally bad — as bad as the heartless person who lets a child drown — clearly such a theory is false, myopic, ludicrous even. So we tell ourselves.
An outraged reaction to effective altruism differs, however, from the reaction we would have to a theory that claims lollipop production is morally good and everything else is morally bad. Effective lollipopism is clearly false — we laugh it off and go about our day. But we can’t laugh off the effective altruist’s argument, because he has touched a nerve. He starts with intuitions we accept, and reasoning we accept, and ends up with a conclusion we don’t like. “According to your own intuitions,” the effective altruist tells us, “you are morally lacking.”
Our intuition is that we morally ought to save the baby in the pond, that there would be something morally wrong about walking past. The effective altruist derives from this intuition the principle that if we can prevent something bad from happening without doing something comparably bad, we are obligated to prevent the bad thing from happening.
However, if we try to flesh out exactly what principles our intuition implies, it grows messy. Indeed, our intuition about the baby seems to imply a principle that there is something inherently valuable about life, and that putting a price on it is unseemly. By this logic, the fact that I would ruin my expensive suit or miss my job interview isn’t a reason not to save the baby. Similarly, the fact that the Coast Guard would have to spend thousands of dollars to save a stranded family is not a valid excuse not to save them. Human life cannot be compared to money or other gains.
On the other hand, human life can be compared to human life. Suppose there are two ponds: in the one on the left, one baby is drowning; in the one on the right, two babies are drowning. You can save either the left baby or the right babies. Intuition tells us that saving the two babies is preferable because more lives are saved, and this suggests human lives can be compared. At the same time, it’s unclear that we’ve done a better deed by saving two babies and a worse deed by saving one. And it’s certainly unintuitive to say we’ve done something morally bad by saving one baby when we could have saved two.
What has gone wrong here? We have intuitions that don’t agree. Something has to give. That something is the thought experiment. Our intuitions are simply not designed for a thought experiment wherein we trade off the lives of a certain number of indistinguishable babies in identical ponds. We haven’t had reason to develop intuitions to deal with this: In real life we are not in fact faced with situations like the one in the thought experiment.
The effective altruist stops me here. “You are mistaken,” he says. “In fact, we are facing such situations all the time! In the morning, we buy a $4 cup of coffee, when $5 could save the life of a child in a developing country. In the afternoon, we donate $200 to save the smile of one child with a cleft palate, when the same sum could save many more lives if we spent it on malaria nets.
“The two ponds are not in front of us, but why should that matter? You are faced with children dying, and you are not choosing to save one of them, or two of them — you are choosing to save none of them.”
If we accept that physical distance doesn’t morally matter, that we have an obligation to save a child regardless of proximity, then I have no reply. As I concede the argument to the effective altruist, however, I gaze with doleful wonder at his optimism. Rationally, he may be correct to say that distance should not matter: The child across the world is just as worthy of our help as the child in our inner city. And would that we could internalize this idea.
But convincing the mind isn’t convincing the intuitions. And, to our intuitions, it does in fact matter how far the child is from us, how similar the child is to us, whether we think others would help if we didn’t, and whether we think others would censure us for not helping.
The effective altruist, in a feat of optimism, hopes to take our intuitions from a handful of concrete cases — a few babies in a pond, some people tied up on trolley tracks — and rationally extend the judgments we make, so we learn to care about things we didn’t intuitively care about before. But if he argues from intuitions, his sphere of converts will remain limited. Our intuitions are fuzzy, temperamental, inconsistent, unoptimized for a globalized world. Can we really hope to obtain a rational basis for ethics from them?
This last question is not rhetorical. In fact, much of ethics relies on the answer being “yes.” But how could this work? I begin to explore these questions next week in “The Bent.”
Contact Adrian Liu at adliu ‘at’ stanford.edu.