By Adrian Liu
Most of us have an intuitive inclination to react in ways that we want to characterize as “ethical.” If I see someone being beaten up, I’m inclined to think that this ought not happen. I am also inclined to think that by using “ought” I’m not merely saying “I would prefer that this not happen” (as with my favorite TV show being cancelled), or “it is a bad thing that this happened” (as with some natural disaster). My ought, I want intuitively to say, is specifically ethical.
Ethics as a philosophical field tries to figure out which ethical theories are rigorously correct, defensible or logically coherent. Often I wonder how useful this is for the real person living daily life. Perhaps for those of us who are not professional philosophers, a more important endeavor is to understand, just a little more, the implications of our intuitions. We don’t explicitly act in accordance with complex theories, and it’s doubtful that if some philosopher were to discover the one truly correct, indefeasible ethical theory, we would all then realize its truth and jump aboard.
One of the vices of ethics as a philosophical field is that it often begins with intuitions and then quickly moves very, very far from them. It creates theories that have none of the instinctive appeal of a theory we would want to guide our lives, or it engages in a primarily destructive enterprise that leaves us with many theories that don’t work — and none that do.
For the person who studies no ethical theory, intuitions are often given free rein, and the paradigm one adopts is whatever seems to make sense in the moment — a “treat others how you want to be treated” golden-rule approach at times, a “what is the greatest good” calculation at others, a relativistic “who am I to judge?” attitude when convenient. This person has an ethics that is only locally coherent, but globally doesn’t make a whole lot of sense.
On the other hand, the student who studies too much ethics to be innocent of ethical theory but too little to be able to pick a theory and defend it adequately is apt to develop a naive ethical anti-realism. Excited at the beginning of class about finally learning the Correct Ethical Theory, they are convinced by the end of the term that no such theory exists. What do they do then? Perhaps some simply give up. But I suspect the more common response is to do exactly nothing: to continue giving one’s intuitions free rein, adopting one sort of intuition here and another sort there. The difference is merely that the person who has studied introductory ethical theory knows better than to claim that they can fully justify their ethical intuitions.
How would we engage in ethical investigation that doesn’t need the tangled abstractions of theoretical reasoning, but is somehow still useful? I think there are some strategies — we might fill out barebones thought experiments like the trolley problem or the baby in the well with contexts that reveal the myriad conflicting intuitions in play. We might find concrete and plausible situations that call specific intuitions into question.
For instance, what actual intuitions do we have that suggest the view that moral values are relative to cultures, what other intuitions challenge such a view, and what sort of concrete situations might complicate it? Or, if we’re inclined to say that whatever causes the greatest good is the best thing to do, in what domains are we most likely to apply this intuition? When would it commit us to censuring things we don’t want to censure? Are there situations where other intuitions seem stronger than a utilitarian one?
This is the first piece of “The Bent”: a series of explorations of our ethical intuitions and how they interact with ethical theory. What I want to do is investigate ethical ideas that you probably already find plausible, and that you probably already use in your actual life. Intuitions like the Golden Rule, the Platinum Rule and “never lie.” Intuitions like “we ought not judge other cultures” or “it is okay to lie in certain situations.”
Next time, I look at effective altruism, the philosophy that focuses on doing “the most good you can do” given your resources and your potential for obtaining more resources. A popular argument for effective altruism begins with the thought experiment that, if one sees a baby in a well, one morally ought to jump in and save the baby. Starting from this reasonable intuition, however, effective altruism derives a host of “oughts” that seem considerably less intuitive. So have our intuitions gone wrong, or is there perhaps a problem with the thought experiment? I will argue the latter.
Some reflection upon our intuitions is necessary, but — especially for non-philosophers — this will not be the kind of reflection that threatens our ability to take any intuitions seriously, or that quickly leaves intuitions for abstract theory. I want to aim instead for the kind of reflection that allows us to understand the import of our intuitions more thoroughly, to investigate the role they play in our lives and to recognize their power and their limitations. What does this sort of reflection look like? Let’s find out.
Contact Adrian Liu at adliu ‘at’ stanford.edu