When is rationality useful?

In addition to my skepticism about the foundations of epistemic rationality, I’ve long had doubts about the effectiveness of instrumental rationality. In particular, I’m inclined to attribute the successes of highly competent people primarily to traits like intelligence, personality and work ethic, rather than specific habits of thought. But I’ve been unsure how to reconcile that with the fact that rationality techniques have proved useful to many people (including me).

Here’s one very simple (and very leaky) abstraction for doing so. We can model success as a combination of doing useful things and avoiding making mistakes. As a particular example, we can model intellectual success as a combination of coming up with good ideas and avoiding bad ideas. I claim that rationality helps us avoid mistakes and bad ideas, but doesn’t help much in generating good ideas and useful work.

Here I’m using a fairly intuitive and fuzzy notion of the seeking-good/avoiding-bad dichotomy. Obviously if you spend all your time thinking about bad ideas, you won’t have time to come up with good ones. But I think the mental motion of dismissing bad ideas is quite distinct from that of generating good ones. As another example, if you procrastinate all day, that’s a mistake, and rationality can help you avoid it. But if you aim to work productively for 12 hours a day, I think there’s very little rationality can do to help you manage that, compared with having a strong work ethic and a passion for the topic. More generally, a mistake is doing unusually badly at something, not merely failing to do unusually well at it.

This framework tells us when rationality is most and least useful. It’s least useful in domains where making mistakes is a more effective way to learn than reasoning things out in advance, so that there’s little advantage in avoiding them. This might be because mistakes are very cheap (as in learning how to play chess) or because you have to engage with many unpredictable complexities of the real world (as in being an entrepreneur). It’s also less useful in domains where success requires a lot of dedicated work, so that intrinsic motivation for that work is crucial. Being a musician is an extreme example; more relevantly, building deep expertise in a field often looks similar.

It’s most useful in domains where there’s very little feedback, either from other people or from reality, so you can’t tell whether you’re making a mistake except by analysing your own ideas. Philosophy is one of these - my recent post details how astronomy was thrown off track for millennia by a few bad philosophical assumptions. It’s also most useful in domains with high downside risk, where you want to avoid making any mistakes at all. You might think that a field like AI safety research falls into the latter category, but actually I think that in almost all research, the quality of your few best ideas is the crucial thing, and it doesn’t really matter how many other mistakes you make. This argument is less applicable to AI safety research to the extent that it relies on long chains of reasoning about extreme hypotheticals (i.e. to the extent that it’s philosophy), but I still think the claim is broadly true.

Another lens through which to think about when rationality is most useful is that it’s a (partial) substitute for belonging to a community. In a knowledge-seeking community, being forced to articulate our ideas makes it clearer what their weak spots are, and allows others to criticise them. We are generally much harsher on other people’s ideas than our own, due to biases like anchoring and confirmation bias (for more on this, see The Enigma of Reason). The main benefit I’ve gained from rationality has been the ability to internally replicate that process, by getting into the habit of noticing when I slip into dangerous patterns of thought. However, that usually doesn’t help me generate novel ideas, or expand them into useful work. In a working community (such as a company), there’s external pressure to be productive, and feedback loops to help keep people motivated. Productivity techniques can substitute for those when they’re not available.

Lastly, we should be careful to break down domains into their constituent requirements where possible. For example, the effective altruism movement is about doing the most good. Part of that requires philosophy - and EA is indeed very effective at identifying important cause areas. However, I don’t think this tells us very much about its ability to actually do useful things in those cause areas, or to organise itself and expand its influence. This may seem like an obvious distinction, but in cases like these I think it’s quite easy to let confidence about the philosophical step (deciding what to do) spill over into confidence about the practical step (actually doing it).
