The career and the community

This was originally posted on the Effective Altruism Forum, and is aimed at those already involved in the movement.

tl;dr: for the first few years of their careers, and potentially longer, most effective altruists should focus on building career capital (which isn’t just ‘skills’!) rather than doing good or working at EA organisations. However, there are social dynamics which push new grads towards working at EA orgs, which we should identify and counteract. Note that there are a lot of unsubstantiated claims in this post, and so I’d be grateful for pushback on anything I’m incorrect about (throughout the post I’ve highlighted assumptions which do a lot of work but which I haven’t thoroughly justified). This post and this post make similar points more concisely.

Contents: 1. Introduction. 2. Advantages of working at EA organisations. 3. Advantages of external career pathways. 4. Social dynamics and implicit recommendations. 5. Building a community around moonshots. 6. Long-term and short-term constraints.



What are the most important bottlenecks limiting the amount of good the effective altruism movement can do? The original message (or at least, the one which was received by the many people who went into earning to give) was that we were primarily funding-constrained. The next message was that direct work was the most valuable career pathway, and was ‘talent-constrained’. This turned out to be a very misleading phrase, and left a number of talented people, particularly recent graduates who hadn’t yet built up on-the-job skills, unable to find EA jobs and disillusioned with the movement. And so 80,000 Hours recently published a blog post which tries to correct that misconception by arguing that instead of being bottlenecked on general talent, EA lacks people with specific skills which can help with our favoured cause areas, for example the skill of doing great AI safety research.

I have both a specific and a general disagreement with this line of thinking; I’ll discuss the specific one first. I worry that ‘skills-constrained’ is open to misinterpretation in much the same way that ‘talent-constrained’ was. It’s true that there are a lot of skills which are very important for EA cause areas. However, people are often influential not primarily because of their skills, but because of their career capital more broadly. (Let me flag this more explicitly as assumption 1.) For example, I’m excited about CSET largely because I think that Jason Matheny and his team have excellent networks and credentials, specific familiarity with how US politics works, and generally high competence. These things seem just as important to me as their ‘skills’.

Similarly, I think a large part of the value of getting into YCombinator or Harvard or the Thiel Fellowship comes from signalling + access to networks + access to money. But the more we rely on explicit arguments about what it takes to do the most good, the more likely we are to underrate these comparatively nebulous advantages. And while 80,000 Hours does talk about general career capital being valuable, we’ve already seen that the specific headline phrases they use can have a disproportionate impact. It seems plausible to me that EAs who hear about the ‘skill gap’ will prioritise developing skills over other forms of career capital, and thereby harm their long-term ability to do good compared with their default trajectory (especially since people are generally able to make the biggest difference later in their careers).

I don’t want to be too strong on this point. Credentials are often overrated, and many people fall into the trap of endlessly accumulating career capital without ever using it to pursue their true goals. In addition, becoming as skilled as possible is often the best way to both amass career capital and do good in the long term. However, this is just one example of my general disagreement with standard EA attitudes towards careers: I think that we tend to overrate new grads working at EA organisations or directly on EA causes, compared with entering other jobs which are less immediately relevant to EA but which may allow them to do more good later. I think that people underrating the value of career capital is one reason for this. Another reason, which I’ll return to later in this post, is the social dynamics of the EA community.

Advantages of working at EA organisations

A third reason is that the arguments in favour of the latter option (building career capital outside EA) aren’t made often enough. So let’s explore in more detail the cost/benefit analysis facing a new grad who’s trying to decide whether to join an EA org, versus working elsewhere for a few years (or analogously, deciding whether to do a PhD specifically on an EA research area, versus a different topic in the same field). The main advantages of doing the former:
  1. You’ll be able to focus on learning exactly the skills which are most valuable for doing good, rather than whichever skills a non-EA job happens to require. Also, if you’re mistaken about which skills actually matter, you’ll get feedback about that more quickly.
  2. You’ll build a stronger EA network. You may be more motivated by being surrounded by people with the same values as you, which may make your own values less likely to drift.
  3. You’ll be able to do important work while building skills and experience.

I think these are all important points, but I have reservations about each of them. On the first point, my sense is that the most important skills can be learned in a range of positions and are fairly transferable (assumption 2). For example, I think that experience leading most teams is transferable to leading most other teams. I also think that PhDs matter more for teaching people how to do good research in their field than for building expertise on any particular topic.

On point 2, while it’s important to meet other EAs, I’d say that the returns from doing so start diminishing significantly after the point where you know enough people to have second-degree connections with most other EAs. Connecting with people outside the EA bubble is much more counterfactually valuable since it increases the contacts available to the movement as a whole, especially if they’re open to EA ideas or interested in collaboration. This also makes our community less insular. I do think motivation and value drift are serious concerns, though.

On point 3, I claim that the work you do in the first few years of your career is generally much less valuable than what you can do later, and so this shouldn’t be a major priority (assumption 3). This may not have been true a few years ago, when there was much more low-hanging fruit, but EA has now funnelled hundreds of people into careers in favoured areas. And given that as a new grad you cost your organisation money and take up the time of whoever is supervising you, my (very uninformed) guess is that it typically takes anywhere from 6 months to 2 years in your first job to actually become net positive in value. This is not counting the risk of actively doing harm, which I’ll discuss shortly.

Advantages of external career pathways

By contrast, there are quite a few advantages of spending the first few years (or potentially even decades) of your career in non-EA jobs:
  1. Build better career capital (skills, networks, reputation, money).
    1. Almost all of the best mentorship for most skills exists outside the EA community, so you can learn fastest by accessing it. For research, the ideal would be getting a PhD under a professor who cares a lot about your research and is good at supervising it. For ops roles, the ideal might be working at a startup under a successful serial entrepreneur.
    2. You can take more risks, because success in your current job isn’t your main goal. Experimenting and pushing boundaries is generally a good way to learn. Taking this to one extreme, you could found a startup yourself. This is particularly useful for the sort of people who learn better from doing than from any sort of supervision.
    3. EA orgs like the Open Philanthropy Project are prestigious in certain circles, but any candidate who actually gets an offer (i.e. is in the top ~1% of a heavily pre-selected applicant pool) should also be able to access more conventionally prestigious opportunities if they aim for those.
    4. As discussed above, your networks will be less insular. Also, since EAs are generally quite young, you’ll meet more experienced and successful people in external jobs. These are generally the most valuable people to have in your network.
    5. Having exposure to a diverse range of perspectives and experiences is generally valuable. For example, if you’ve spent your entire career only interacting with EAs, you’re probably not very well-prepared for a public-facing role. Working elsewhere gives you that breadth.
  2. Better for EA orgs.
    1. Evaluating job candidates is just a very difficult process. One of the best ways to predict quality of work is to look at previous experience. If candidates for EA jobs had more experience, it would be easier to find the best ones.
    2. There’s less risk of new hires causing harm from things like producing bad external-facing work, not understanding how to interact professionally within organisations, or creating negative publicity. (I think people have quite a variety of opinions on what percentage of long-termist projects are net-negative, but some people put that number quite high).
    3. With more experienced new hires, the most impactful people currently at EA orgs will be able to spend less time on supervision and more time on their other work.
  3. Better for the EA community.
    1. I discuss this point in the Building a community around moonshots section below.
  4. Better for the world.
    1. Lots of good career development opportunities (e.g. working in consulting) are highly paid, and so new grads working in them can funnel a fair bit of money towards important causes, as well as saving enough to provide a safety net for career changes or entrepreneurship.
    2. Having people working in a range of industries allows them to spot opportunities that EA wouldn’t otherwise discover, and to build skills that the community has accidentally or mistakenly coordinated away from. (I discuss this more at the very end of the post.)
    3. Having senior people in a range of industries massively improves our ability to seize those opportunities. If you go out into an industry and find that it suits you very well, and that you have good career prospects in it, you can just continue in that with the goal of leveraging your future position to do good.

I think this last point in particular is crucial. It would be really great to have EAs who are, say, senior executives at the World Bank, because such people can affect the trajectories of whole governments towards prioritising important values. But it’s difficult to tell any given student “you should gamble your career on becoming one of those executives”. After spending a couple of years on that career trajectory, though, it should become much clearer whether there’s potential for you to achieve that, and whether anything in this space is valuable to continue pursuing. If not, you can try another field (80,000 Hours emphasises the importance of experimenting with different fields in finding your personal fit). And if none of those work, you can always apply for direct work with your new expertise. How much more valuable will you be in those roles, compared with yourself as a new grad? Based on assumption 3 as flagged above, I think plausibly over an order of magnitude in many cases, but I’m very uncertain about this and would welcome more discussion of it.

One worry is that even if people go out and build important skills, there won’t be EA jobs to come back to, because they’ll have been filled in the meantime. But I predict that the number of such jobs will continue to grow fairly fast, and that they’ll grow even faster if people can be hired without needing supervision. Another concern is that the jobs will be filled by people who are less qualified but have spent more time recently engaging in the EA community. But if a strong internal hiring bias exists, then it seems like an even better idea to diversify our bets by having people working all over the place.

Compared with those concerns, I worry more about the failure mode in which we take a bunch of people who would otherwise have had tremendously (conventionally) successful careers, and then make them veer off from those careers because we underrate how much you can do from a position of conventional success, and how transferable the skills you develop in reaching that point are. I also worry about the failure mode in which we just don’t have the breadth of real-world experience to identify extremely valuable opportunities. For example, it may be that in a given country the time is ripe for an EA-driven political campaign, but all the people who would otherwise have gone into politics have made career changes. (Yes, politics is currently a recommended path, but it wasn’t a few years ago - what else have we been overlooking?) And in any case, we can’t expect the community at large to only listen to explicit recommendations when there are also very strong implicit recommendations in play.

Social dynamics and implicit recommendations

Something that has happened quite a bit lately is that people accuse 80,000 Hours of being incorrect or misleading, and 80,000 Hours responds by pointing to a bunch of their work which they claim said the correct thing all along. (See here and here and here and here). I don’t have strong opinions on whether or not 80,000 Hours was in fact saying the correct thing. What I want to point out, though, is that career advice happens in a social context in which working at EA orgs is high status, because the people giving out money and advice are the people who do direct work at EA orgs, who are older and more experienced in EA and have all the social connections. And of course people are encouraging about careers at these orgs, because they believe they’re important, and also don’t want to badmouth people in their social circle. Young people like that social circle and want themselves and their friends to stay in it, and so overall there’s an incentive towards a default path which just seems like the best thing, based on the people you admire. If you then can’t get onto that path, that has tangible effects on your happiness and health and those of the community, as has been discussed in this post and its comments.

We also have to consider that, despite the considerations in favour of external career-building I listed above, it is probably also optimal for some people to go straight into working at EA orgs, if they’re clearly an excellent fit and can pick up skills very fast. And so we don’t just face the problem of reducing social pressure to work at EA orgs, but also the much harder problem of doing so while EA orgs are trying to hire the best people. Towards that goal, each org has an incentive to encourage as many people as possible to apply, the overall effect of which is to build up a community-wide impression that such jobs are clearly a good idea (an information cascade from the assumption that so many other applicants can’t be wrong), and to make those jobs even more selective and therefore prestigious. In such a situation, it’s easy to gloss over the sort of counterfactual analysis which we usually consider crucial when making big career decisions (how much worse is the 6th best applicant to the Open Philanthropy Project than the 5th, anyway? Is that difference bigger than the difference between the 5th best applicant spreading EA ideas at a top non-EA think tank, versus not doing so?).

Another way of putting this: saying only true things is not sufficient for causing good outcomes. Given that there’s always going to be a social bias towards working at EA orgs, as a community we need to be proactive in compensating for that. And we need to do so in a way that minimises the perception of exclusivity. How have we done this wrong? Here’s one example: earning to give turned out to be less useful than most people thought, and so it became uncool (even without any EA thought leaders explicitly disparaging it). Here’s another: whether or not it’s true that most of the value of student EA groups is in identifying and engaging “core EAs”, it’s harmful to our ability to retain community members who either aren’t in a position to do the things that are currently considered to be “core EA” activities, or else have different judgements about what’s most effective.

(As an aside, I really dislike portrayals of EA as “doing the UTMOST good”, as opposed to “doing LOTS of good”. Measuring down from perfection rather than up from the norm is basically the textbook way to make yourself unhappy, especially for a group of people selected for high scrupulosity. It also encourages a lack of interest in the people who aren’t doing the utmost good from your perspective.)

Building a community around moonshots

I like the idea of hits-based giving. It makes a lot of sense. The problem is that if you dedicate your career to something, and then it turns out to be a ‘miss’, that sucks for you. And it particularly sucks if your community assigns status based largely on how much good you do. That provides an additional bias towards working at an EA org, so that your social position is safe.

What we really want, though, is to allow people to feel comfortable taking risks (assumption 4). Maybe that risk involves founding a startup, or starting a PhD despite being unsure whether you’ll do well or burn out. Maybe it involves committing to a strategy which is endorsed by the community, despite the chance that it will later be considered a mistake; maybe it means sticking to a strategy which the community now thinks is a mistake. Maybe it just turns out to be the case that influencing the long-term future is so hard that only 50% or 5% or 1% of EAs can actually make a meaningful difference. I think that one of the key ways we can make people feel more comfortable is by being very explicit that they are still a welcome and valued part of this community even if whatever it is they’re trying to do doesn’t turn out to be very impactful.

To be clear, this is incredibly difficult in any community. I think, however, that the higher the percentage of the EA community that works at EA orgs, the more difficult it will be to have that welcoming and inclusive community. By contrast, if more of the EAs who were most committed at university end up in a diverse range of jobs and fields, it’ll be easier for others who aren’t on the current bandwagon to feel valued. More generally, the less binary the distinction between “committed” and “uncommitted” EAs, the healthier the community in the long term (assumption 5).

I particularly like the framing of this problem used in this post: we need to find a default “task Y” by which a range of people from different backgrounds and in different life circumstances can engage with EA. Fortunately, figuring out which interventions are effective to donate to, and then donating to them, is a pretty great task Y. The “figuring out” bit doesn’t have to be original research: I think there’s a pressing need for people to collate and distill the community’s existing knowledge.* And of course the “donating” bit isn’t new, but the problem is that we’ve stopped giving nearly as much positive social reinforcement to donors, because of the “EA isn’t funding-constrained” meme, and because now there’s a different in-group who are working directly at EA orgs. (I actually think that EA is more funding-constrained than it’s often made out to be, for reasons which I’ll try to explain in a later post; in the meantime see Peter Hurford’s EA forum comments.) Regardless of what cause area or constraints you think are most pressing, though, I think it’s important for the community that if people are willing to make the considerable sacrifice of donating 10% or more of their salary, we are excited and thankful about that.

Long-term and short-term constraints

It’s important to distinguish between current constraints and the ones we’ll face in 10-20 years.** If we expect to be constrained by something in 15 years’ time, then that suggests we’re also currently constrained by our ability to build pipelines to get more of that thing. If that thing is “people with certain career capital”, and there are many talented young EAs who are in theory capable of gaining that career capital over the next decade, then we’re bottlenecked by anything that will stop them gaining it in practice. From one perspective, that’s the lack of experienced mentors and supervisors at EA orgs. But from the perspective I’ve been espousing above, our internal culture and social dynamics may be bottlenecks in the medium term, because they stop people from finding the positions where they can best develop their careers.

An alternative view is that in 15 years’ time we’ll still be constrained by a career capital gap - not because young EAs have been developing their careers in the wrong way, but because the relevant skills and connections are just so difficult to obtain that most won’t manage to do so. If that is the case, we should try to be very transparent that the bar to contributing to long-termist causes is very high - but even more importantly, take steps (such as those discussed in the previous section) to ensure that our community can remain healthy even if most of the people in it aren’t doing the most prestigious thing, or tried to do it and failed. That seems like an achievable goal as long as people know what risks they’re taking - e.g. as long as they have accurate expectations of how likely they are to get a full-time EA job, or of how likely it is that their startup will receive money from EA funds.

(80,000 Hours is the easiest group to blame for people getting a misleading impression, because they’re in the business of giving career advice, but I think the responsibility is more widely distributed throughout the EA movement, since it seems unlikely that a dedicated EA who’s spent hundreds of hours discussing these topics would get a totally mistaken view of the career landscape just from a few articles. During my job search, I personally had a pretty strong view that getting into AI safety would be easy, and I don’t explicitly recall reading any 80,000 Hours articles which said that - it was more of a gestalt impression, mostly gained from the other students around me.)

Personally I don’t think that the bar to contributing to long-termism is so high that most EAs can’t have a significant positive impact. But I do think that personal fit plays a huge role, because if you’re inspired by your field, if you love it and live it and breathe it, you’ll do much better than if you only care about it for instrumental purposes. (In particular, I work in AI safety and think it’s very important, but I’d only encourage most people to pivot into the field if they are, or think they can become, fascinated by AI, AI safety, or hacking and building things.)

The opposite of choosing based on personal fit is overcoordination towards a small set of options. Effective altruism is a community, and communities are, in a sense, defined by their overcoordination towards a shared conception of what’s cool. That’s influenced by things like 80,000 Hours’ research, but the channel from careful arguments to community norms is a very lossy one, which suffers from all the biases I described above, and which is very difficult to tailor to individual circumstances. So herd behaviour is something we need to be very cautious of, particularly in light of epistemic modesty arguments, which I find compelling. EAs have been selected for being good at basically just one thing: taking philosophical arguments about morality seriously. So every time our career advice diverges from standard career advice, we should be wary that we’re missing things.

Of course, the ability to coordinate is also our greatest strength. For example, I think it’s great that altruistic EAs have pivoted away from being GPs in first-world countries due to arguments about replaceability effects. But to my mind the real problem with being a GP is that the good you can do is bounded by the number of patients you can see. Hans Rosling had the same background, but leveraged it to do much more good, both through his research and through his public advocacy. So if there’s one piece of career advice I’d like to spread in EA, it’s this: find the field which most fascinates you while also having high potential for leverage if you do very well in it, and strive towards that.


Thanks to Denise Melchin, Beth Barnes and Ago Lajko for commenting on drafts. All errors and inaccuracies are mine.

* As one example, there have been a whole bunch of posts about career trajectories recently. I think these are valuable (else I wouldn’t have written my own) but there’s just so much information in so disorganised a format that efforts to clarify and summarise the arguments that have been raised, and how they relate to each other, would probably be even more valuable.

** As I wrote this, I realised just how silly it is to limit my analysis to 10-20 years given that long-termism is such a major part of EA. But I don’t know how to think about social dynamics over longer timeframes, and I’m not aware of any work on this in the EA context (this is the closest I’ve seen). If there actually isn’t any such analysis, doing that seems like a very important priority.
