Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

Classical utilitarianism has many advantages as an ethical theory, but it also has many problems. A few of the most important:
  • The idea of reducing all human values to a single metric is counterintuitive. Most people care about a range of things, including both their conscious experiences and outcomes in the world. I haven’t yet seen a utilitarian conception of welfare which describes what I’d like my own life to be like.
  • Concepts derived from our limited human experiences will lead to strange results when they’re taken to extremes (as utilitarianism does). Even for things which seem robustly good, trying to maximise them will likely give rise to divergence at the tails between our intuitions and our theories, as in the repugnant conclusion.
  • Utilitarianism doesn’t pay any attention to personal identity (except by taking a person-affecting view, which leads to worse problems). At an extreme, it endorses the world destruction argument: that, if given the opportunity to kill everyone who currently exists and replace them with beings with greater welfare, we should do so.
  • Utilitarianism is post-hoc on small scales; that is, although you can technically argue that standard moral norms are justified on a utilitarian basis, it’s very hard to explain why these moral norms are better than others. In particular, it seems hard to make utilitarianism consistent with caring much more about people close to us than strangers.
I (and probably many others) think that these objections are compelling, but none of them defeat the core intuition which makes utilitarianism appealing: that some things are good, and some things are bad, and we should continue to want more good things and fewer bad things even beyond the parochial scales of our own everyday lives. Instead, the problems seem like side effects of trying to pin down a version of utilitarianism which provides a precise, complete guide for how to act. Yet I’m not convinced that this is very useful, or even possible. So I’d prefer that people defend the core intuition directly, at the cost of being a bit vaguer, rather than defending more specific utilitarian formalisations which have all sorts of unintended problems. Until now I’ve been pointing to this concept by saying things like “utilitarian-ish” or “90 percent utilitarian”. But it seems useful for coordination purposes to put a label on the property which I consider to be the most important part of utilitarianism; I’ll call it “scope-sensitivity”.

My tentative definition is that scope-sensitive ethics consists of:
  • Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness or preference-satisfaction), or fewer intuitively disvaluable aspects of individual lives (e.g. suffering or betrayal).
  • A tendency to endorse actions much more strongly when those actions increase (or decrease, respectively) those things much more.
I hope that describing myself as caring about scope-sensitivity conveys the most important part of my ethical worldview, without implying that I have a precise definition of welfare, or that I want to convert the universe into hedonium, or that I'm fine with replacing humans with happy aliens. You could then ask me which specific scope-sensitive moral theory I subscribe to. But I think that this would defeat the purpose: as soon as we start trying to be very precise and complete, we'll likely run into many of the same problems as utilitarianism. Instead, I hope that this term can be used in a way which conveys a significant level of uncertainty or vagueness, while also being a strong enough position that, if you accept scope-sensitivity, you don't need to resolve much of that uncertainty or vagueness in order to figure out what to do. (I say "uncertainty or vagueness" because moral realists are often particularly uncomfortable with the idea of morality being intrinsically vague, and this phrasing allows them to focus on the uncertainty part: the idea that some precise scope-sensitive theory is true, but we don't yet know which one. My own position, by contrast, is that it's fine and indeed necessary for morality to be intrinsically imprecise, which makes it hard to draw the line between questions we're temporarily uncertain about and questions which don't have well-defined answers. From this perspective, we can also think of scope-sensitive ethics as a single vague theory in its own right.)

How does the definition I've given address the problems I described above? Firstly, it's pluralist (within the restrictions of common sense) about what contributes to the welfare of individuals. The three most common utilitarian conceptions of welfare are hedonic theories, desire theories, and objective-list theories. Each of these captures something which I care about, and I don't think we know nearly enough about human minds (let alone non-human minds) to justify taking a strong position on which combination of them constitutes a good life. Scope-sensitivity also leaves room for even wider conceptions of welfare: for example, people who think that achieving virtue is the most valuable aspect of life can be scope-sensitive if they try to promote virtue widely.

Secondly, it's consistent with pluralism about value more generally. Scope-sensitivity doesn't require you to care only about welfare; you can value other things, as long as they don't override the overall tendency to prioritise actions with bigger effects. In particular, unlike utilitarianism, scope-sensitivity is consistent with using non-consequentialist or non-impartial reasoning about most small-scale actions we take (even when we can't justify why that reasoning leads to the best consequences by impartial standards). Furthermore, it doesn't require that you endorse welfare-increasing actions because they increase welfare. In addition to my moral preferences about sentient lives, I also have moral preferences about the trajectory of humanity as a whole: as long as humanity flourishing is correlated closely enough with humans flourishing, those motivations are consistent with scope-sensitivity.

Thirdly, scope-sensitivity isn’t rigid. It doesn’t require welfare-maximisation in all cases; instead, specifying a “tendency” rather than a “rule” of increasing welfare allows us to abide by other constraints as well. I think this reflects the fact that a lot of people do have qualms about extreme cases (for which there may not be any correct answers) even when their general ethical framework aims towards increasing good things and decreasing bad things.

I should make two further points about evaluating the scope-sensitivity of existing moral theories. Firstly, I think it's best interpreted as a matter of degree, rather than as a binary classification. Secondly, we can distinguish between “principled” scope-sensitivity (scope-sensitivity across a wide range of scenarios, including implausible thought experiments) and “practical” scope-sensitivity (scope-sensitivity given realistic scenarios and constraints).

I expect that almost all of the people who are most scope-sensitive in principle will be consequentialists. But in practice, non-consequentialists can also be highly scope-sensitive. For example, a deontologist who follows the rule “try to save the world, if it's in danger” may in practice be nearly as scope-sensitive as a classical utilitarian, even if they also obey other rules which infrequently conflict with it (e.g. not lying). Meanwhile, some variants of utilitarianism (such as average utilitarianism) also aren't scope-sensitive in principle, although they may be in practice.

One problem with the concept of scope-sensitivity is that it might induce motte-and-bailey fallacies - that is, we might defend our actions on the basis of scope-sensitivity when challenged, but then in practice act according to a particular version of utilitarianism which we haven't justified. But I actually think the opposite happens now: people are motivated by the intuition towards scope-sensitivity, and then defend their actions by appealing to utilitarianism. So I hope that introducing this concept improves our moral discourse, by pushing people to explicitly make the argument that scope-sensitivity is sufficient to motivate views like longtermism.

Another possibility is that scope-sensitivity is too weak a concept to motivate action - for example, if people claim to be scope-sensitive, but add a few constraints which mean they don’t ever need to act accordingly. But even if scope-sensitivity in principle is broad enough to include such views, hopefully the concept of practical scope-sensitivity identifies a natural cluster of moral views which, if people follow them, will actually make the world a much better place.

Comments

  1. It seems like there is some interesting overlap between what you describe as "scope-sensitive ethics" and what Richard Yetter-Chappell calls "beneficentrism": https://www.philosophyetc.net/2021/12/beneficentrism.html

