Plan for, don’t panic about, AI

The original text of this article is on the website of the Centre for Public Impact.

In 1964, a group of Nobel-prize-winning scientists, economists, and civil rights activists wrote an open letter to then-President Lyndon Johnson to warn of the impending “cybernation revolution”. They heralded an era of “almost unlimited productive capacity” brought about by “the combination of the computer and the autonomous self-regulating machine”.

They were wrong, at least in the short term. Their warning was fuelled by the excitement of the first period of rapid growth in artificial intelligence (AI) – at that time researchers were developing the world’s first machine learning algorithm and had created programmes that could solve the analogy problems on standard IQ tests.

Now, more than 50 years later, we are again faced with the prospect of a productivity revolution driven by machine learning. Unlike earlier predictions, which rested on the theoretical potential of machine learning, today's technologists are applying AI techniques to practical problems of economic significance and observing impressive results. Self-driving cars, automatic fraud detection, and AI trading algorithms all either exist as prototypes or are already deployed. History teaches us caution in predictions about the future, but at this point only the careless would fail to at least plan for radical changes driven by AI.

Moreover, technological progress can happen surprisingly quickly. Just a year ago, many university lecturers were still using the difficult East Asian board game Go as a default example of a problem AI was not expected to solve any time soon. Then in March 2016 a London AI firm, Google DeepMind, defeated the world champion Go player 4-1.

Ripple effect – some good, some bad

The implications of AI are likely to be broadly positive for the economy. AI offers ways to radically boost productivity – to do, without any human intervention at all, things that people must work on today. That will free those people for other work – potentially more interesting or fulfilling – or for leisure.

Despite that, a technological revolution from AI poses significant risks which society must manage carefully. We should expect difficult economic and social transitions, as well as broader risks if AI research progresses faster than expected.

There is considerable debate about whether modern automation will primarily complement human labour – making people more productive – or substitute for it – competing with it and driving down the price of human labour. If AI is able to substitute for human labour, we might expect that things could get grim for workers who might not own shares in any of the AI systems that replace them. It is important to make sure that a new economy would fairly distribute the returns from this innovation.

In either case, there is the potential for significant job losses, which will be unevenly distributed and could devastate some communities. Job losses themselves are significantly less bad if people can easily find new work. That is especially hard when a local economy depends heavily on a single employer, perhaps a call centre or an administration centre.

Some civil servants have commented on administrative centres which could probably be almost completely automated, but since entire local economies depend on employment from those centres, with little else for miles around, closing them could mean abandoning hundreds of families. Resolving this difficulty, and unleashing public sector productivity, requires heavy investment in retraining and in the infrastructure that lets people commute easily to jobs elsewhere.

Power up?

AI researchers are trying to create software that can solve very general problems intelligently – and possibly more intelligently than any human. As leading AI researcher Professor Stuart Russell puts it, we ought to at least consider the possibility that they might succeed in creating a powerful general intelligence.

Powerful AI systems might exceed human capacities in many or even most domains. This is especially likely if artificial intelligence proves effective at aiding research into more advanced forms of artificial intelligence. If these systems become extremely powerful then it becomes very important to make sure they behave in a way that benefits humanity. This is more difficult than it might seem. Although we can set such systems' goals, they often pursue them in subtly unpredictable ways.

That is fine when it means an AI making a chess move that no human expert would expect. But as the stakes rise, so does the risk that we have mis-specified the goals the AI is to achieve. As Professor Nick Bostrom discusses in his book Superintelligence, it is startlingly easy to think you were very clear about what you wanted your AI to do, yet accidentally get something disastrously wrong. If systems become extremely generally powerful, this technical challenge of setting the goals for AI systems could become existentially important for humanity.

Global Risks: The Wildfire in the Commons

Originally published in Angle Journal.

Technological developments can create new types of global risk, including risks from climate change, geo-engineering, and emerging biotechnology. These technologies have enormous potential to make people better off, but the benefits of innovation must be balanced against the risks they create. Risk reduction is a global public good, which we should expect to be undersupplied. This problem is especially urgent in situations where one group can unilaterally develop and implement a technology with global repercussions. The international community needs to continue to learn how to manage unilateral global risks like those from some forms of biotechnology and geo-engineering.

Global interconnectedness lets new discoveries help more people, faster, than ever before. However, this global reach creates a new type of global risk. Historically, disasters were mostly geographically confined. An earthquake is a tragedy, but it is a local tragedy. Regular trade across the Atlantic made a global pandemic possible for the first time in the 16th century, but the probability remained small. The efficiency of modern technologies, like cheap air travel, makes those risks bigger. Climate change is already a truly global risk – burning fossil fuels in one city causes climate change world-wide. In coming decades we may have to face new global technological risks, as geo-engineering or biotechnology become more powerful, for example. Innovation in these fields is very promising but needs to be approached carefully.

Global technological risks get less attention than they deserve. Countries are not properly rewarded for doing things that help everyone else too. The risks often affect the poor or future generations who do not get a say in decisions. They seem unlikely because many are unprecedented. It is politically hard to spend money on the risk of something that has never happened before.

Risks from new technologies come primarily in two types. The first, of which climate change is an example, is when almost everyone must do something together for a collective reward. The second, even more difficult to manage, is a ‘wildfire’ scenario – where it only takes one person or group to make a move, to drop the match, as it were, for everyone to suffer the consequences. Below, I describe some of the risks, explain why they are systematically neglected, and suggest some ways to address ‘wildfire’ risks.

What are the risks?

Climate change, geo-engineering, and emerging biotechnology are three important potential sources of global risk. Climate change is the best understood of them. Unless we work out how to remove carbon dioxide from the atmosphere, the world needs to cut emissions significantly within the next few decades. If we do not, there will be long-term losses.1 Of course, significantly cutting emissions makes energy more expensive, which could affect all parts of the economy. One of the most worrying ways climate change could hurt us is if panicked political leaders make a rushed and ultimately botched attempt at geo-engineering.

Geo-engineering was for many years closer to science fiction than reality. However, it is slowly starting to become potentially practicable. In 2008, the Royal Society devoted an issue of their journal to the mechanics and ethics of global geo-engineering to avert climate change.2 China spends hundreds of millions of dollars on weather manipulation.3 In principle, geo-engineering gives us ways to manage climate change without a radical restructuring of our energy economy.

A leading proposal is to increase the concentration of sulfates in the upper atmosphere. This would reduce the amount of heat energy from the sun entering our atmosphere. The sulfate process occurs naturally – volcanoes sometimes increase sulfate emissions substantially – which means scientists understand it better than most geo-engineering alternatives.

Although geo-engineering has potential, there are big risks. Knowing that releasing sulfates is an option could make countries think cutting carbon dioxide emissions is less important. This could lock us in to using more sulfates. Moreover, there could be effects we do not understand. No one has ever increased emissions of sulfates by 15-30 times their natural rate for extended periods before, which is what scientists think we might need to do in order to offset approximately a doubling of CO2 concentrations in the atmosphere.4 Volcanoes rarely increase global emissions of sulfates to more than twice their natural rate. This could cause irreversible shifts in weather patterns or ecosystems as well as harming human health directly. Although sulfate use looks safer than many examples of geo-engineering, researchers remain hesitant to recommend it.

We do not need global coordination for geo-engineering. Scientists estimate that initial sulfate programmes would cost a few billion dollars.5 That is expensive, but affordable for many countries. Hundreds of individuals are rich enough to fund a major programme single-handedly. If a single actor went ahead with such geo-engineering, the effects would be almost instantly global. Other, riskier, forms of geo-engineering can be even cheaper.

Emerging biotechnology is possibly even more promising but creates similar risks. Scientists in the Netherlands and the US stirred up controversy in 2011 by modifying the highly pathogenic H5N1 virus to be transmissible between mammals.6 They did this to learn about transmission mechanisms in viruses. It helped us understand how few mutations were needed to allow transmission between mammals and showed that there was more than one way these mutations could happen. This is valuable information for epidemiologists, influenza researchers, and organisations preparing for pandemics.

However, many thought that publishing the details of these experiments might make it too easy for rogue agents to make a deadly pathogen in the future. (In fact, it was later shown that the changes made were specific to the strain of virus used by that lab.) Much like geo-engineering, only one research lab and one journal need to publish results for the whole world to be able to use the information, even while everyone else abstains from publishing. Similarly, only one group needs to synthesise such a virus to seed a global pandemic. With current technology, it is unlikely that anyone outside a major national laboratory would be able to do this.

That might change in the coming decades. Biotech firms let people without specialised facilities order custom DNA. It is possible to buy custom sections of smallpox DNA, although firms are better at spotting this than they once were.7 In the future, a small team of PhD-level researchers might be able to assemble those sections without access to a big lab.

Risks grow as the underlying technologies become more widely distributed and powerful. Individuals will be more able to cause global calamities, in the same way that someone with a gun can kill more people than someone with a knife. As a result, we need to work out how to manage global risk before the stakes get too high.

Global risks are neglected

Climate change, geo-engineering and biosecurity risks are not completely new, and there is already a large amount of work that goes into preventing calamities and preparing to mitigate them. Despite this, there is probably less work going into the risks than there should be.8

First, reducing these risks is a global public good – everyone benefits, and no one can be excluded from the benefits even if they do not pay for risk reduction.9 Public goods are often undersupplied when there is no collective body that acts on behalf of the whole affected population. For global risks, there is in most cases no such body. The United Nations, for example, is not powerful enough to act that way.

Second, managing the risks costs a small group a lot and helps a much larger group a little. Fossil fuel companies and heavily industrial sectors care very strongly about emission regulation. However, most people do not care very strongly and future generations, who are also affected, do not get a say. The loud voices of a few are generally powerful relative to the diffuse concerns of many.

Third, many of these risks are unprecedented. People are better at reacting to things that have already happened once. If no calamity happens, money spent on prevention and mitigation looks like it was wasted, even though it might have been the reason no calamity happened. If the calamity does happen, the prevention efforts look unsuccessful and therefore wasted; the mitigation preparations look clever in retrospect, though. It is a hard situation to put decision-makers in, and we should expect it to lead to under-investment in mitigation and especially prevention.

The commons and the wildfire

As a species, we have experience managing the ‘tragedy of the commons’ – in which a shared public resource is good for everybody so long as it is not overused. In a ‘commons’ situation, it is not necessary that absolutely everyone plays along, just that most people (and the important ones) do. Climate change is a challenge of this type – it would not matter if New Zealand, say, refused to curb its carbon dioxide emissions so long as everyone else did.

In the ‘commons’ one group can ‘defect’ – they refuse to play along with the international community. While everyone else cuts emissions, they keep burning coal. This makes it harder for everyone else because they need to cut their emissions further, but saves the defector money. Reducing the number of defectors to an acceptable level is hard.

The situation that geo-engineering and emerging biotechnologies create, however, is much harder to manage. It only takes one 'defector' to start a wildfire – only one person needs to drop a match. Similarly, only one country needs to decide that the benefits of stratospheric sulfates outweigh the costs. Its choice will affect everyone. This situation is harder to manage, from an international perspective, because the only acceptable number of defectors is zero.

Other things make it more complicated. Different groups will find risky new technologies more or less valuable. A country that risks yearly hurricanes might want to stop global warming more urgently than any other country. It might be willing to pull the trigger on geo-engineering, for its own sake, even if the cost for humanity as a whole is high.

This is made worse by an effect similar to the ‘winner’s curse’. In auctions the person who wins probably paid too much – everyone else decided that the object at auction was not worth as much as that. The winner may simply have been wrong about the value, rather than benefiting the most from it. In this case, a country could be wrong about the risks and benefits of an unprecedented technology. The groups which overestimate the benefits and underestimate the risks are likely to start using the technology first.10

How can we manage the wildfire scenario? We can restructure the incentives by breaking the path towards the development of a risky technology into smaller steps. One type of treaty draws a bright red line – perhaps it prohibits unilaterally engaging in geo-engineering. This risks being brittle because the first defection starts a wildfire. If we define intermediate steps that progressively lead towards use of globally risky technology, then we create much more flexibility. Groups could be encouraged to progress in certain directions, and not others. As they come closer to being able to engage in unilateral action, they could face increased scrutiny.

Breaking apart the steps can be helpful because it makes it possible for groups to defect along the way without resulting in catastrophe. This makes it more like a ‘commons’ situation in which we can tolerate a few defectors. It forces defectors to be deliberate and fairly public in their choices, which makes it easier for the international community to step in to resolve whatever concerns underlie a group’s approach towards the technology. There needs to be a vigorous response to the little defections – otherwise you allow countries to ratchet up towards a wildfire with a series of small steps.

Breaking the steps down works in situations where there are few groups that need to be monitored and where the programmes of research and development are big enough that they are hard to hide. A big risk, therefore, lies in globally risky technology that becomes so cheap and easy to use that even very small groups are able to use it. At the moment, humanity does not have a way to handle that kind of risk. Until we uncover a solution to that problem, it may be best to focus on research and development that makes risky technologies effective at scale rather than cheap for individuals to use. That way, we can focus on the problem that is already hard enough – making sure that a single irresponsible country does not make use of a wildfire technology that poses risks for us all.

Although existing international systems are able to make some progress towards resolving collective action problems like climate change, we have a lot to learn about situations where a single group can have a significant global effect. As technology develops, humanity will encounter more such ‘wildfire’ technologies. The international community will need to do more work to prepare for these scenarios before technologies become mature, so that we are not left with a hasty and poorly considered response to emerging technologies.


1 IPCC (2014) Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, R.K. Pachauri and L.A. Meyer (eds.)]. IPCC, Geneva, Switzerland, pp 151
2 Launder, B and Thompson, JMT (2008) Theme Issue 'Geoscale engineering to avert dangerous climate change'. Philosophical Transactions of the Royal Society 366(1882)
3 Qian, W (2011) China pours billion into rainmaking. China Daily, 24th March
4 Rasch, PJ et al (2008) An overview of geoengineering of climate using stratospheric sulphate aerosols. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 366(1882): 4007-4037
5 McClellan, J, Keith, DW and Apt, J (2012) Cost analysis of stratospheric albedo modification delivery systems. Environmental Research Letters 7(3)
6 Fouchier, R et al. (2012) Airborne Transmission of Influenza
7 Randerson, J (2006) Revealed: the lax laws that could allow assembly of a deadly virus DNA. The Guardian, 14th June
8 Bostrom, N et al. (2014) Unprecedented Technological Risk. The Global Priorities Project
9 Kaul, I, Grunberg, I and Stern, M (eds.) (1999) Global public goods: international cooperation in the 21st century. Oxford University Press, NY
10 Bostrom, N, Sandberg, A and Douglas, T (2013) The Unilateralist’s Curse: The Case for a Principle of Conformity. Future of Humanity Institute Working Paper

Self-driving cars: a chance to get our relationship with future technology right

Originally posted by the Global Priorities Project and the Oxford Martin School.

Self-driving cars are coming. The Department for Transport has given the go-ahead for the cars to be tested legally on any road in Britain, so long as there is a human driver in the vehicle who can assume control and would take responsibility for any accident. Google's prototypes have been driving unaided in pre-mapped environments. The firm entered discussions last month with major car manufacturers about getting production lines rolling within the next two to five years.

Pundits gush about the potential benefits, but one effect gets neglected: driverless cars will force us to build a legislative framework for future automation technologies. Though often seen as a challenge, this may be a huge opportunity.

Many of the benefits of self-driving cars are well known. First, car crashes kill a lot of people: 1,700 people die in traffic accidents each year in Britain, and worldwide the total is 1.2 million1 – almost as many as die from HIV/AIDS. These deaths are overwhelmingly caused by human error, and it seems likely that the technology could soon become reliable enough to eliminate a large proportion of them.

Second, people spend a lot of time driving. Some of this is enjoyed, but much is merely endured. Freeing people to use their travel time in work, study or leisure would be a substantial boost to the economy and to wellbeing. Moreover, the effects on congestion, emissions and accessibility could be substantial.

But there is another big reason to push ahead with adopting driverless cars, one that gets neglected. We are living in a world of increasing automation. We need to adapt to that: technically, legally, and socially. Self-driving cars seem like a big step today, but we can expect much greater automation in our lifetimes, for example in medical diagnosis. A recent paper estimates that around half of current jobs in developed economies may be automated away in the coming decades.

We have a lot to learn. How can we produce systems which are robust to unknown errors in their code? Which are resistant to remote sabotage? How can we structure our laws so that liability is clear when things fail? How can we structure the incentives so that it is in everyone's interest to prevent them from failing? What new areas of employment can we create to take advantage of this liberation of human time and expertise?

All of these will take time, and will almost certainly not be done right from the beginning. The longer we wait to grapple with the practical issues that driverless cars present, the longer it will be before our society gets the opportunity to adapt to the next great force to shape humanity. That will leave us less prepared for future increases in automation, which may be much more dramatic and essential to get right. If this were the last time we had to face this challenge, it might make sense to take things slowly. But because there is so much yet to come, the societal knowledge we gain from experimentation is very valuable.

Slowing things down now might make it easier for us in the short term. It is likely, however, that it will make it harder for society to adapt to a more sudden and alarming change in the future, when we can no longer hold back the rising tide of technology.


1 The UK figure, from the Department for Transport, is from their 2013 annual report as 2014 data have not yet been finalised. The world-wide figure, provided by the World Health Organisation, is for the year 2010. It is likely that this represents a significant underestimate as car ownership in less economically developed nations, which often have poor road safety, has risen in that time.

Breaking down DALYs into YLLs and YLDs for intervention comparison

Originally posted at the Global Priorities Project. Sebastian Farquhar and Owen Cotton-Barratt


Global public health remains a top contender for the best way to improve welfare through aid. Within health interventions, it is natural to allocate marginal spending to avert the most expected DALYs (disability-adjusted life years) per dollar.1

However, not all DALYs are the same, and there are important differences between years of life lost (YLLs) and years lived with disability (YLDs). Not accounting for these categories separately may introduce a bias in decision-making, because DALYs do not address non-health outcomes for individuals or the effects of outcomes on others. These effects differ between situations which involve primarily YLLs and those which involve primarily YLDs. This analysis is of particular practical importance to effective altruists because two of the most promising interventions address different types of DALYs – deworming primarily averts YLDs and bed nets to prevent malaria primarily avert YLLs.2 This makes us more inclined towards deworming as a top intervention than a naive evaluation would suggest. More sophisticated analyses incorporate terms that address many of these effects, but may still undervalue deworming relative to malaria nets.

This document is primarily written for people who already place a high weight on DALYs in identifying top public health opportunities. It does not argue for the use of DALYs, or consequentialist reasoning in identifying public health opportunities.

Overview of YLLs and YLDs in DALY calculations

The burden of disease, as calculated by the World Health Organisation in its 2010 Global Burden of Disease (GBD) assessment, divides DALYs into two categories – YLLs and YLDs. YLLs accrue when an individual dies. The gap in life-span between their age at death and the maximum life expectancy3, shown in dark blue in the graph below, contributes to the YLLs. The gap, while the person is alive, between full health and the degree of disability attributed to the condition, shown in light blue, contributes to YLDs. Note that the conditions which are assigned a ‘disability weighting’ and create YLDs include but are not limited to conditions that are traditionally thought of as ‘disabilities’.4

YLLs in dark blue and YLDs in light blue

Graph adapted from Gold M., Stevenson D., and Fryback D. 2002
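The decomposition described above can be sketched in a few lines of code. This is an illustrative sketch only: the 86-year reference life expectancy is an assumption of roughly the order used in GBD 2010 rather than the exact standard, and the example ages and disability weighting are taken from elsewhere in this article or chosen for illustration.

```python
# Illustrative sketch of the DALY = YLL + YLD decomposition described above.
# The 86-year reference life expectancy is an assumption, not the exact
# GBD 2010 standard.

MAX_LIFE_EXPECTANCY = 86.0

def ylls(age_at_death):
    """Years of life lost: gap between age at death and the reference life expectancy."""
    return max(0.0, MAX_LIFE_EXPECTANCY - age_at_death)

def ylds(disability_weight, years_with_condition):
    """Years lived with disability: time with the condition, scaled by its disability weighting."""
    return disability_weight * years_with_condition

def dalys(age_at_death=None, disability_weight=0.0, years_with_condition=0.0):
    yll = ylls(age_at_death) if age_at_death is not None else 0.0
    return yll + ylds(disability_weight, years_with_condition)

# A child who dies at age 4 contributes almost entirely YLLs:
print(dalys(age_at_death=4))  # 82.0
# Thirty years lived with a 0.021-weight condition contributes only YLDs:
print(round(dalys(disability_weight=0.021, years_with_condition=30), 2))  # 0.63
```

The asymmetry in the two example outputs is the point of the sections that follow: a single child death contributes two orders of magnitude more DALYs than decades with a low-weight condition, even before any externalities are considered.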

YLDs and YLLs in malaria and schistosomiasis

Although malaria causes a great deal of morbidity, its largest contribution to the burden of disease comes from lives lost, primarily of children under five. Indeed, when evaluating the health impact of a charity distributing bednets, charity evaluator GiveWell explicitly only addresses under-5 mortality. The best estimate for YLLs caused by malaria is roughly 20 times greater than that for YLDs.

Parasites of the sort that deworming addresses, however, cause more harm by making people ill than by killing people. For schistosomiasis, a parasitic worm, the best estimate for YLDs caused is roughly 10 times greater than that for YLLs, although there is substantial uncertainty.

Malaria and schistosomiasis have different DALY profiles. Source: Global Burden of Disease 2010

DALYs are only about the individual

DALYs measure the impact of conditions on the health of individuals. However, if we are choosing between health interventions, and care about their benefits to broader society, we should also consider the effects of health interventions on others.

Many health conditions have significant externalities. More severe conditions can have large impacts on the lifestyles of families and friends. Transmissible conditions carry a very direct externality in the chance of spreading the condition to others. Death can be traumatic for those who knew the deceased. Even where one of these obvious mechanisms is not in place, a health condition which reduces the productivity of an individual will reduce the ability of a community to provide for its needs. Caring for individuals can also consume social resources that could have been spent elsewhere if those individuals were healthy. These effects are typically acknowledged as important, but are then ignored because they are hard to measure.

Without any good ways to measure these externalities, it might be reasonable to use the impact on the individual as a proxy for size of the total effect. However, it seems likely to us that the average size of the externality will depend on the type of direct health effect, and in particular may vary for YLLs and YLDs.

YLDs have larger associated externalities than YLLs

YLDs can have very large externalities. The cost of care and treatment can be very high even on a per person per year basis. In the developed world, lifetime costs associated with managing conditions can be enormous. The lifetime care costs related to the amputation of a leg in the US have been estimated at $509,275.5 The Global Burden of Disease 2010 assigns a weighting of 0.021 for amputation of a leg, long term, with treatment. At that weighting, it would take more than 47 years for an amputated leg to accumulate a single DALY – longer than the remaining life expectancy of the average amputee. Since the marginal cost of averting a DALY, even in rich healthcare systems, is far less than these care costs, considering only the health effects will substantially underestimate the full impact of the condition. In the developing world, although absolute costs of care are lower, the presence of conditions can dramatically reduce family incomes, both because individuals find it harder to work and because family members find their options for work restricted by the need to care for the individual.6 Because the costs of managing the conditions which create YLDs can be so high, the non-health benefits of averting a YLD can be high as well. This saving is not included in a cost-effectiveness analysis which only examines DALYs per dollar.
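The arithmetic in the amputation example can be checked directly. The care cost and disability weighting below come from the text; the 40-year post-amputation survival is a hypothetical figure chosen only to illustrate the scale of the implied cost per DALY.

```python
# Checking the amputation arithmetic above. The cost and weighting are from
# the text; the 40-year survival after amputation is a hypothetical example.

lifetime_care_cost = 509_275  # USD, estimated lifetime care cost, leg amputation (US)
disability_weight = 0.021     # GBD 2010: amputation of a leg, long term, with treatment

# Years needed to accumulate one full DALY at this weighting:
print(round(1 / disability_weight, 1))  # 47.6

# If an amputee lives 40 years after the amputation (hypothetical), the
# total health burden and the implied care cost per DALY are:
dalys_accrued = disability_weight * 40
print(round(dalys_accrued, 2))  # 0.84
print(round(lifetime_care_cost / dalys_accrued))  # 606280
```

An implied cost of over $600,000 per DALY dwarfs the marginal cost of averting a DALY in any healthcare system, which is the gap the paragraph above describes.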

It is very difficult to weigh up the magnitude of the impact of the trauma and loss associated with YLLs. It seems likely, however, that these are often lower than the externalities we need to consider for health conditions which cause the same number of YLDs. An intervention that prevents the need for an amputation, for example, would avert as many DALYs as extending a life by less than a year. It is unlikely that the typical externalities of a death occurring a few months earlier than otherwise are on the order of $500,000 in scale, even in rich countries. Although there is clearly variation, as a rule of thumb YLDs are likely to be associated with larger externalities. Therefore, all else equal, we should prefer interventions that reduce YLDs. Because deworming primarily averts YLDs, this argues for preferring deworming to bed-nets when the costs per DALY are similar.

The externalities depend on age differently for each category

Health economists used to age-weight DALYs such that both YLDs and YLLs for the very old or very young were treated as less significant than those in the middle of life. However, in the current GBD framework there is no age-weighting. This is because DALYs represent only intrinsic health losses, and there is no principled reason why health should be more important at different ages. While this may be the correct approach to take for DALYs themselves, once we consider the indirect effects of health on other things we value, it becomes clear that age is relevant in choosing programmes. However, the effects are quite different for YLDs and YLLs.

YLDs in youth often have larger externalities

Where health conditions are present in young people, the externalities are often larger. Young people are still developing rapidly, and ill-health may negatively impact this development.7 These conditions can affect both individual well-being and the process of accumulating human capital. Even where health conditions last for only a year in youth, they may have substantial effects throughout life which are not captured in health calculations. By contrast, a year spent with a similar condition late in life will typically have a much smaller effect on the flourishing of the individual over their whole lifetime. Therefore, where interventions avert YLDs, they are better when they avert them younger, all else equal, and particularly in childhood. Deworming mainly averts YLDs in young people. The evidence for developmental effects of deworming in children could be improved by further investigation – but there is no disagreement that any such effects are stronger for children than for adults.

YLLs in middle age often have larger externalities

The age-weighting used in previous versions of the Global Burden of Disease placed a high weight on DALYs created for young and middle-aged adults and a lower weighting on the very young and the very old. There are a variety of reasons for this sort of weighting, but many of these reduce to instrumental claims about the value of lost labour productivity, about the social disruption, or about the grief caused by a death.

Bereaved parents report stronger feelings of grief when losing older children than younger children, especially in resource-poor contexts. Similarly, bereaved widows report feeling more grief when losing younger spouses.8 And adults of working age are more likely to have dependents, who may have a hard time adapting after their death. So the impact of death on others can depend on age, which should be a consideration in prioritising interventions.

From an economic perspective as well, the externalities of death in youth and middle age are higher than those at either tail. This is because society has expended some resources in developing human capital, but individuals still have a substantial period of continued productivity in front of them. It is not appropriate to consider this as a health effect of the death. However, when deciding which interventions to pursue it can be important to consider productivity losses, especially in comparatively poor contexts where economic growth has comparatively large welfare benefits. In these environments, parents may also depend on their children to look after them in old age, which means the death of a grown child can be a pressing concern for the parents’ welfare.

As a result, where interventions mostly avert YLLs, one should prefer interventions that mostly avert deaths in young and middle-aged adults, all else equal. Malaria net distribution mainly averts YLLs due to infant and under-5 deaths. This makes us think it is a little less effective than the simple DALY calculation suggests.


An assessment of promising health interventions that cares about broader societal impact requires separate treatment of YLLs and YLDs. YLLs and YLDs cause different kinds of externality. This is relevant for a decision between two contenders for top interventions – deworming and malaria bed-nets – and may encourage us to prefer deworming. However, some of these effects are already modelled in intervention analyses which place less emphasis on DALYs averted.

1. For example, a simple reading of the work of the Disease Control Priorities Project 2nd edition, suggests such an approach. Some evaluators, such as GiveWell, place much less emphasis on averting DALYs.

2. You can examine the data at GBD Compare.

3. The maximum life expectancy used in GBD 2010 is based on the lowest mortality rate found world-wide for each age-bracket. It is therefore, at 86 years, higher than the current at birth life expectancy in most countries.

4. Indeed one of the complications in assigning the current DALY weightings is that, as people are instructed to consider only ‘health’ effects, conditions such as complete hearing loss, are assigned comparatively low weightings because they represent different ableness rather than ill health as such.

5. MacKenzie EJ, Jones AS, Bosse MJ, Castillo RC, Pollak AN, Webb LX, Swiontkowski MF, Kellam JF, Smith DG, Sanders RW, Jones AL, Starr AJ, McAndrew MP, Patterson BM, Burgess AR. Health-care costs associated with amputation or reconstruction of a limb-threatening injury. J Bone Joint Surg Am. 2007 Aug;89(8):1685-92. The costs reported in this study are slightly out of date and based on the USA. In general, treatment costs have not fallen over the period, although it is possible that shifts to lower cost interventions may have reduced the representative cost since publication.

6. For example see this survey by the World Bank.

7. Of particular relevance to deworming, for example, is the indication that increased deworming substantially increases future earnings and future health. Baird et al. 2011 found that between 2 and 3 additional years of deworming as a child increased earnings as a young adult by around a quarter, partly because individuals worked more hours and were sick less often and partly because they were able to move into higher earning paid labour.

8. Ball J. Widow’s Grief: The impact of age and mode of death. J. Death and Dying. 1976

Is there such a thing as a bad charity?

When we talk about giving to charities there’s a dirty secret we try really hard not to mention. Some charities are “bad”.

When people look at charities, all of which are doing decent things, they don’t want to point out that some do a lot more good than others.

It’s understandable. They’re all well-intentioned, good projects. People care about them. You don’t want to hurt anyone’s feelings. But the truth is that, in the world we live in, we can’t afford not to be picky about our giving.

If all charities were as good as each other, then giving to less effective charities wouldn’t be a big deal. But there are huge differences. A safe and effective cataract operation can completely cure someone of blindness in the developing world for about $20. The same person can be provided with a seeing eye dog for $50,000. There are charities doing each.

Let’s not make any mistakes about this. Both charities are definitely helping people. But one is helping a lot more people with the same resources.
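The size of the gap is easy to make concrete with a back-of-the-envelope calculation using the article's own figures:

```python
cataract_surgery = 20  # USD per sight-restoring operation (figure from the text)
guide_dog = 50_000     # USD per trained guide dog (figure from the text)

# Spend what one guide dog costs on cataract surgery instead:
operations = guide_dog // cataract_surgery
print(operations)  # 2500 – the same money cures 2,500 people of blindness
```

Same budget, same good intentions, a 2,500-fold difference in the number of people helped.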

There’s something awkward here. It feels like we’re putting a price on helping people. We’d like to just help charities help people in need without having to think about scarce resources. It feels embarrassing to say to one charity “I’m sorry, I think my money can help more people somewhere else.” It seems much nicer to give a bit to all the good causes.

That’s a very costly attitude to have.

We live in a world plagued by urgent human suffering. A lot of it is suffering that we really can do something about. Neglected Tropical Diseases, for example, affect more than a billion people and cause about half a million deaths a year. Charities like SCI can deliver treatment at a cost of about $0.70 per person to prevent crippling disabilities and death.

Clearly, it’s very important to focus on giving to highly effective charities like SCI. That’s why organisations around the world are turning their attention towards effective giving. Large-scale programmes like the Gates Foundation are using effectiveness data to inform their giving choices.

But it’s also important for individuals to take giving seriously. Giving What We Can is an organisation that encourages individuals to commit to donating 10% of their income to the most effective charities. They reckon that, on a really conservative estimate, their two hundred members have pledged about $75m over their lifetimes. That will save about 150,000 lives. The Life You Can Save, which only asks for a minimum pledge of 1%, thinks its thirteen thousand members have already given $68m. All those little donations add up.
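The cost-effectiveness implied by those pledge figures is worth spelling out, using the article's own (conservative) numbers:

```python
pledged = 75_000_000  # USD pledged by ~200 Giving What We Can members
lives_saved = 150_000  # the article's conservative estimate

cost_per_life = pledged / lives_saved
print(cost_per_life)  # 500.0 – roughly $500 per life saved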

Personally, I find the idea of being able to help so many people, so easily, exciting. But some people I know get nervous about the idea. Who am I to say which charities are the best? How can you even compare them? Can you say that curing a case of TB is better or worse than defending someone’s civil rights?

They’ve got a point. Comparing charities is hard and we’re bound to make some mistakes. But not trying to compare them is even worse. You aren’t giving to every charity (and you shouldn’t) so you’re already comparing them. But the natural way to choose them, picking the ones that sound good, just doesn’t take the urgency of human suffering seriously enough.

The good news is that there are organisations out there to help. GiveWell has a team of researchers who analyse charities to find the best ones out there.

That sort of project is a pretty new idea, but it’s one that’s gaining influence. Ten years from now, I’d be surprised if that dirty secret isn’t out in the open and we’re all thinking about effectiveness when we think about giving. It won’t be a moment too soon.

Originally posted at TEDx Oxford.

How to choose a university degree

One of the most important early career decisions many people face is what to study at university. Although it is far from the be-all and end-all, degree choice plays an important role in your ability to make a difference later in life. People probably don’t put enough effort into systematically thinking about degree choice.

My overview of the question so far suggests that, unless you hope to enter certain professional careers, you should bias your choice in favour of more mathematical/technical subjects and pick a degree you can expect to do well in. That opinion is based on an overview of recent graduate employment and employer surveys, interviews, personal experiences, and other cited sources.

There are lots of different systems of university education. In the UK, the system I have the most experience with, students pick a three- or four-year degree at roughly age 17. In the US, students pick courses, building to a selection of majors some years later. Other countries have other systems.

Thinking through your degree carefully matters more if you’re following a UK model, where you can’t easily experiment. But regardless of the model, the tendency is to put less thought into the choice than you ought to.

There are a number of reasons for that – some are good and some aren’t. Bad reasons first. People tend to think in the short term. That means that the unpleasantness of having to do research and think carefully about degree choice is more obvious to you than the massive benefits that come later to your ability to make a difference as well as your lifestyle.

It also seems like a really complicated problem. Our natural tendency is to shy away from complicated problems and just do what feels best. That’s not necessarily a bad idea, since our instincts can be surprisingly good, but things sometimes get better when you structure the problem. Structuring the problem means generating a very broad list, breaking your reasons into steps and then eliminating options systematically before comparing your final options with a consistent framework. I’ll outline a structure for this problem in later posts. Even if thinking everything through carefully only improves things by 1%, which seems plausible for a decision like this, it might be worth spending hundreds of hours on it because of the size of the effect.

But there is a good reason people avoid thinking this stuff through: degree choice is hugely chaotic. Your choice doesn’t only affect how future employers see you. It changes who you get to know, what sort of culture you adopt, and ultimately who you become. But the way in which it does this is really unpredictable. When decisions with big impacts have chaotic consequences, it usually isn’t worth putting much effort into the decision. But while degree choice is chaotic on some levels, there are consistent and important patterns that you can use to decide. I’ll pick out and focus on those so that we can get the best possible decision with a reasonable amount of effort.

How important is your degree as a job requirement?

Degree choice is not decisive. It’s relatively easy to move into new areas after the completion of your degree. Particularly in the UK, employers are aware of the fact that you had to make your choice without much life experience. In the US, students change majors fairly often. “The undecided college student: An academic and career advising challenge (3rd ed)” reports that 75% of US students change their majors.

According to the CBI/Pearson Education and Skills Survey 2012, 80% of graduate jobs in the UK don’t require a specific degree. While it helps to pick the right degree from the start, you shouldn’t let that choice overwhelm you. Equally, if you’ve already made your choice, and regret it, it isn’t the end of the line. But there are some careers where degrees matter a lot.

In particular, degree choice is important for professional careers (like law and medicine) and academic careers. These are important for people interested in making a difference. Professional careers are often high-earning, although statistics here can be misleading. That makes them good for earning to give. Academic careers can lead to extremely high impact research.

I’ll cover the importance of subject choice for careers that are neither professional nor academic in my next post.

Professional careers

Several careers typically require special qualifications that build on your university course. The exact way this works depends a lot on your country. Some careers have specific undergraduate courses that cater to them (e.g. medicine in the UK and Australia) whereas others are mostly dependent on post-graduate training with some undergraduate requirements (e.g. medicine in the US and Canada, architecture in most places, or accountancy in most US states). Some jobs require rigorous and expensive training courses (e.g. pilots in the UK) but not university qualifications.

Careers that often depend heavily on specific courses (although not for all roles and not in all jurisdictions) include:
– Accountancy
– Air piloting
– Architecture
– Engineering
– Law
– Medicine (and related professions)

Because of the regional variation of the requirements for licensing in these professions, it isn’t helpful for me to consolidate the data here. I recommend the search phrase “what do I need to qualify as a ____ in ____”.

If you are thinking about these careers, which are (not coincidentally) quite high-earning, be aware of the local job requirements from a relatively early point in time. For example, if you want to be a doctor in the UK, it’s often worth trying to arrange some work-experience in a hospital while you’re in sixth form, to make it easier to get onto a good undergraduate medicine course.

For people who aren’t right at the very top of the talent distribution, we suspect that medicine (particularly in the US) has one of the highest expected life-time earnings. It has very high median earnings and relatively low drop-out rates compared with some other high-earning careers. We’ll be looking into this more in the future. Some specialisations are even better. (Incidentally, as a doctor, you won’t be saving that many lives directly.)

Academic Careers

Unsurprisingly, if you want to be a professional physicist it’s helpful to study physics as an undergraduate. But it isn’t always important to be studying the ‘right’ option. Most top universities are fairly flexible about a number of their postgraduate degree requirements. For example, a Physics PhD at Oxford requires Physics, Mathematics, Chemistry or ‘a related subject’, and similarly their Chemistry PhDs accept students from any of the sciences. Often, in the UK, a taught masters course accepts students from a wide range of courses (for example, an Economics masters course requires only a ‘strong quantitative background’) and might lead to a PhD. So doing an undergraduate course in the ‘wrong’ subject might cost you a year or two (plus a good bit of money) but not be as problematic as you’d guess.

It’s much harder to move from an artsy undergraduate course to a mathematical or scientific graduate programme. No one will take you on a physics masters course if you haven’t learned any mathematics since high school. So, when picking majors/degrees, if you are interested in an academic career, it can be worth biasing your choices in favour of mathematical or scientific courses in order to stay flexible.

Flexibility is valuable because the best opportunities change. Sometimes you’ll get an opportunity you wouldn’t have anticipated. It is good to be able to seize it. The world is also constantly changing, so what seems like a good subject now might be less useful in the future. Your evidence about the world is also always changing. It is valuable to be able to react to changing evidence about the best courses of action.

If you want a research career you should look at suggestions of high impact research topics while you choose your undergraduate study. Clearly, there are some years between you and your high impact research, but finding out the sorts of questions that excite you is worthwhile. If you can identify a high impact research area that you care about, doing a degree in a relevant subject is a good move.

Do something you’ll do well in

It’s valuable to choose a degree you’ll do well in. That will give future employers or business partners confidence that they’re dealing with someone who knows how to excel.

If you’re doing well, it makes it easier to be happy. Happiness is worthwhile in itself and is vital for your ability to be productive, motivated, and make friends. That starts all sorts of positive feedback cycles.

You also leave with a good degree class or GPA. That signals intelligence and the ability to work hard to future employers. According to the CBI/Pearson report, 46% of employers listed degree class as one of the most important criteria in deciding to hire someone, making it the 4th most significant criterion. (Ahead of it were “Employability skills”, degree subject, and relevant work experience.)

That doesn’t mean you choose the easiest of easy degrees. For one, future employers will pay attention to the brand your university choice represents. They will also pay attention to what your subject is. You’ll probably find the easiest of easy degrees quite boring.

Having said that, some people suggest doing an easier degree which takes less of your time so that you can focus on career advancing activities in the spare time you free up (this guy takes this thought to its logical extreme). I suspect that for the vast majority of people this is a very bad idea. It requires strong internal sources of motivation. It seems far too likely that you’ll just end up doing your very easy degree and not taking advantage of the time to do productive work. I think you would need extremely good evidence that you’re highly internally motivated before you should do this (and most people are unlikely to get that evidence if they have only done schooling).

What do employers want?

This matters for people interested in Advocacy, Innovation, Improving as well as Earning to Give in non-professional careers. It seems that degrees in more quantitative subjects improve your employment prospects and your flexibility, which is important for making a difference. Picking a degree you expect to do well in is also important.

Employers like Science, Technology, Engineering and Maths

Once again, degree choice is not going to close off most of your options. In the UK, 80% of employers don’t have specific subject requirements for their graduate roles, according to CBI/Pearson Education and Skills Survey 2012. But the subject you choose does make a difference to them. 72% say that they are on the lookout for graduates from certain subjects (compared with 46% who say degree class is one of the most important criteria). Fully 50% of employers say that they are looking for graduates from Science, Technology, Engineering and Mathematics (STEM) degrees. 17% want students with Business degrees. Only 2% want linguists, 2% social scientists, and 1% arts students.

That suggests that quantitative STEM subjects give you far more career flexibility and hiring prospects than other courses. But it is worth remembering that this is only a report of employers’ stated preferences, not a report of their actual behaviour.

Doing more quantitative subjects signals intelligence

There’s also an interesting signalling effect. Many employers want to have intelligent employees. And it turns out that your degree choice does, to an extent, signal your level of intelligence. Many of the stereotypes we have here are fairly accurate. This table (part 1 and part 2) shows the average SAT scores of US students by major choice using data from ETS, and maps the SAT scores onto IQ scores. (SAT scores are pretty good proxies for IQ scores.) Subjects like Physics, Math, Engineering and Philosophy lead the pack, with subjects getting less and less quantitative lower down, and Social Work at the bottom.

Now this isn’t to say that all people who study Physics are smarter than people who study Social Work. It isn’t actually even saying that you have to be smarter to understand Physics than Social Work. It might just be that smart people tend to go into Physics because they feel like that’s what smart people ought to do. That is, the stereotypes might be self-reinforcing.

But it does suggest that if you know nothing about someone except what their major was, and you then have to guess how bright they are, you can expect that a Physics student is smart. In the case of many potential employers, that’s a relevant consideration. It also means that if you know someone studied, e.g., Physics and they did very well then they are probably very smart. But if you know someone did very well studying Social Work then you still don’t have good evidence that they are very smart (although they might be).

Quantitative subjects lead to more graduate jobs

We also want to look at the employment rates of people who finish various courses. The data table here, reformatted from the survey by the Higher Education Career Services Unit, shows the employment profiles of graduates from a range of subjects 6 months after graduation. It assumes that the outcome you want after your degree is to either be doing further study, be employed in a graduate job, or to be doing a combination of study and work. If that is what you want, then a familiar pattern emerges, with Medicine- and Engineering-related subjects doing the best, followed by sciences and the harder social sciences like Economics, with Education thrown in there.

That is a little misleading, because not all of the further study will eventually lead to a graduate career, and it is often not worth doing for its own sake. People who study science are more likely to go on to graduate degrees than students of other subjects, and some of those won’t actually end up with graduate jobs.

Looking into the subject-by-subject breakdown, though, you learn that in most artsy degrees about 20-25% of employed graduates are working in Retail, Catering, Waiting and Bar Staff. While that isn’t necessarily a problem, it’s also probably not what you went into the degree to do.

But we have evidence that the average science student is smarter than the average student in other disciplines. That’s a huge confounding factor, since we would expect smarter people to be better at finding jobs (general mental ability is strongly correlated with job performance – for an overview of the evidence see here). That should make us very suspicious about how much of the improved outcome is down to the scientific degree relative to the ability of the student.

All of this is based on data about all people who go to university in the UK. If you are towards the top of the skill distribution, you might have a very different set of outcomes. In the future we’ll look at how this is different for top students.

So although there are some questions about which factors are playing a causal role, it seems that STEM courses and medical courses will help you make a difference more than humanities or arts courses.

A practical step-by-step guide to choosing a degree

I found a lack of practical step-by-step guides to picking the right degree for you. This guide gives you a structured way to gather all the relevant information and to make a decision on your degree. Without a structured process it’s easy to narrow down your options too fast, to ignore important evidence, and to apply your evidence inconsistently.

The steps you take depend on how sure you already are about what you want to do. I’ve broken down a sensible set of steps that you might use in the UK. You should feel in control of your choice. With the power of the internet and email you can find the best information and advice once you know where to look and shake off your shyness. This decision is important for you, and you shouldn’t hesitate to impose on people in order to get the information you need. It will also teach you some skills that are useful for many activities that are important for making a difference.

1- Take a prospectus from a major university you might apply to. Large universities offer broadly similar sets of courses, so you don’t really have to do this separately for each university. At this point don’t worry too much about which university to apply to, that can come later. It is, however, probably worth applying to at least one university that’s a level higher than most people think you can get into.

2- Make a list of all the courses they offer. Cross out all the ones you know you don’t care about. If there are any where you’re not sure what they involve, read the page in the prospectus about the course. This step shouldn’t take more than a couple of hours, tops, but it will depend on how much stuff you’re interested in and on how much extra information you need to get. Even though it will take a bit of work, it’s one of the most useful steps you can take. It makes sure you didn’t rule anything out too quickly.

3- Go through your list again and cross out the courses you definitely aren’t qualified for. To get information on this, look at the prospectus. It should have a table of the subjects at A-level that are recommended for the courses. In some cases this is fairly simple – you can’t study mathematics at university if you haven’t done maths at A level. In other cases this is a bit harder. At this stage, be generous to yourself. But put a mark next to all the courses you are not sure you’re qualified for.

4- For every choice, think about why it is on the list. What is it about the course that interests you? Do you already do some things that indicate an interest? For example, do you read around the subject in your spare time? If you do, do you actually enjoy it or do you do it because you felt you ought to? If you don’t, why don’t you? Is it because you like the idea of being interested in the subject but you aren’t actually interested? Or is it just because it never really occurred to you? Do you think you could motivate yourself to start reading around it? If not, you might not get that much out of a degree in the subject.

These are just some considerations. You don’t really need to be passionate about your degree when you’re just getting started. Most people haven’t already learned their subject before they go to university. That’s why they’re going to university! But if you are passionate about one thing, that’s a strong vote in its favour. If you aren’t, that’s more than ok. Often it isn’t until you really start working at something that you get interested in it. How passionate you end up being can depend on whether you have a clear purpose for your studies. It might also be shaped by the style of the learning.

Important: do this for everything left on your list. Don’t close off your thoughts too soon. And be generous. You might not be thinking about studying, for example, chemistry. But when you think about it, you do really enjoy classes and you are always a little interested when you see news articles about it. That’s worth noticing about yourself.

5- Now go back to all the items that had a mark against them because you weren’t sure if you were qualified for them. Are you passionate about them? Are you willing to do some extra work to make the stuff you have already studied relevant to the course? Could you explain to someone why it is that you didn’t choose the A levels that would have been most helpful? Have your interests changed since you chose them? Have you demonstrated that through actions?

Universities make recommendations about requirements for a reason. In their experience people who don’t meet the requirements have a hard time. You should tend to follow their advice. But, and this is important, if it matters at all to you don’t hesitate for a second in emailing the university admissions department and explaining your situation. This will cost you less than an hour of time and will let you stop worrying about whether or not you’re qualified. If you’re lucky you’ll get the answer you want. If you’re lucky in the other direction, you’ll be able to avoid wasting time and an application choice on a spot you weren’t going to get offered.

6- By now, you should have a short-list of 1-4 possible degrees to study. You’ve narrowed it down to choices you’d probably enjoy – now you want to try to get an outside view on whether or not you’d be good at them. Which subjects that you do at school are relevant to the course – how well do you tend to do at them? Make a note of the sorts of marks you get on relevant tests. If you can, try to work out what percentile you score in standardised tests like GCSEs or AS levels. If you’re in the top 10% nationally in something then that’s worth knowing.

7- Ask your parents and teachers if they think you would be good at each choice. Listen to what they say, but don’t just accept it completely. Parents don’t always have a good picture of what you are capable of at school and they might have a lot of preconceptions. For example, they might always think you’re amazing at everything when less biased evidence disagrees. Or they might not think you’re university material, but only think that because they didn’t go to university and so they don’t really think university is something the family does. Or they might have hated university and assume you would too even though you’re a different person. So you should take their advice into consideration – but don’t just accept it.

If your school doesn’t often send students to university, teachers might have a bad picture of what sort of person you have to be in order to go to university. Alternatively, they might not know very much about what it takes to go to university. It’s actually, startlingly, not really their job to advise you on university choices. Even if your school does have a full-time university advisor they can be badly misinformed. I have heard of some who got application deadlines badly wrong, or who gave practice interviews to students that were bizarrely aggressive and unrelated to the subjects being studied just because the advisor thought that was how it was done.

8- Ask your friends and look at student forums. This is something you’ll almost inevitably want to do. But you should basically totally ignore what they say. They don’t really know anything about which degrees are good. They don’t really know anything about how to apply to universities. Treat any information you get here with a heap of salt and always refer back to original sources for any information that you’d expect a university to have published, like course requirements or deadlines.

Your friends will know a bit about you, but unless they are way more perceptive than a typical person they probably won’t know you as well as you do. It’s also not particularly important to make your university plans similar to your friends’. My impression is that most people underestimate how easy it will be to make friends at university, and overestimate how long their school relationships will last once they go to university.

9- Ask the universities. These guys are much more trustworthy than your parents and teachers. That’s because they are the only ones who have an incentive to get all the best pupils they can, and they don’t have (as many) preconceptions about you. It’s also because they tend to know the contents of their own courses much better – including which parts of the course people struggle with.

It’s here where most people don’t put in as much work as they should. This is usually because it just doesn’t occur to them, they are shy, or they are lazy. This is actually really easy and it is definitely worth it.

Emailing the admissions department about the qualities that tend to make a successful applicant is ok. But the real gold comes from actually speaking to a professor who teaches the courses. But, you’re saying, you don’t know the professor. Of course you don’t! But it’s not hard to change that.

For example, suppose I want to apply for Physics at Oxford. I google “oxford physics undergraduate professor”. The first hit is the page for undergraduates studying physics at Merton College, Oxford. That might not be a college I want to apply to but that doesn’t matter and might even be a good thing. The page lists a bunch of professors and tutors. Three are listed as “Tutors” rather than “Other Merton academics in Physics” so they are probably more involved in the teaching. They all have links to their profiles and those profiles list their teaching interests. The first two tutors list lots of teaching interests, so they are probably doing more teaching. The third doesn’t list any, so he’s either lazy about filling in the form or doesn’t care much about teaching. Either way, he’s low priority. In the absence of anything else to go by, I’m going to email the first guy. His email address is at the bottom of his profile page.

This doesn’t just work for Physics at Oxford. Try History at Bristol: google “bristol history undergraduate professor”. The first hit is the page for the Department of History at Bristol. That gives you a link for prospective applicants. You can go there too, and quickly find an email address for the admissions department and guidelines on requirements, but we’re going to be sneakier. Click on the link for current students. That takes you to a page which includes the course handbook. After a quick skim of the table of contents, the handbook has a section titled “Key Department and School Personnel”. It’s not obvious who the best person to ask here is. I’d be tempted by the Director of Student Progress – that title suggests a good understanding of what holds people back from succeeding at the course. But you can take your pick.

If that format doesn’t work for you, you can try other keywords. For example, you can search for a course outline, find the introductory lecture course that 1st year students take and email the lecturer who runs that course.

I’ve demonstrated how easy it is to get the email address of the people who don’t normally get admissions questions. But what’s the point? Surely they’ll just ignore my email or tell me to talk to the admissions department. Well, maybe they will. And if they do you have to respect that and then email the next best person. They might be very busy, or on sabbatical or holiday. They might genuinely just think they aren’t well placed to answer your question. But on the other hand they are often helpful and friendly and interested in helping out.

Keep your emails to them short and to the point. Explain that you are thinking about studying x, and you want to work out how good you would be at it. Ask them what they think are the best predictors of success in undergraduates. Ask them if they have any advice on how to judge if you would get a lot out of the course. Ask them questions you have about the teaching style, but only if you have genuine questions which are not covered by published material. Don’t be overly formal, but be respectful. They are doing you a favour, but you don’t want to make it sound like they’re doing you a big favour.

It might be possible to turn that into a really useful and interesting conversation. Or not. Respect the fact that they do have other priorities. You also might have other priorities.

10- Find graduate employment data. A good source for this data is the HECSU survey on where graduates are six months after graduation. Note down what percentage of graduates are employed in jobs that use the degree’s skills, and what percentage are unemployed after six months. Remember that this data is for the whole of the UK and doesn’t necessarily apply to you. You’ll want to look at your percentile scores at, e.g., GCSEs to work out where you stand relative to the distribution. (Bearing in mind that the 10th percentile at GCSE is not the same as the 10th percentile of graduates, because more academically talented people are more likely to graduate.)

Putting it together

Working out exactly how to bring together all of the different bits of information is hard and there is no formula for it. Talking through this with someone you trust to make good decisions is often useful. Look especially for people who gave you advice before that turned out to be good. But remember that they might be biased. If they disagree with you about something ask yourself why. Do they have information you don’t have? Maybe they have valuable life experience you’re missing, don’t neglect that. Or they might just care about different things because they are a different person.

As always, respect the opinions of others and try to understand why they believe what they do. Then try to see what your beliefs are once you’ve taken account of the reasons they had.

But even though there’s no formula, I’m going to suggest a way to use a scoring system to structure your thoughts. The results of this scoring system won’t always be better than your gut instinct. But using a system helps you make sure you don’t miss some of your evidence, and makes it more likely that you treat your evidence consistently.

It’s time for a new table. For each degree course you’re still considering, you’ll want a score to reflect the following:
– How qualified am I?
– How passionate am I?
– How good are my scores (e.g. relevant tests and GCSEs, AS)?
– How well do my parents/teachers think I’ll do?
– How well does the university think I’ll do?
– How well do I think I’ll do?
– What intelligence does the degree signal?
– What is the average salary of graduates?
– What percentage of graduates are using their degree after six months?
– How flexible is the degree?
– Does the degree lead easily to careers that make a difference?

Now you need to assign a score for each degree for each point. This is tricky, but it doesn’t matter too much what your scoring looks like. I could go into details about exactly what sorts of distributions you should use, but it wouldn’t help too much. I’d be tempted to assign each a score between 1 and 5 (you don’t need more levels). In each case, a 3 means “this aspect of the degree gives me no reason to prefer it or not”. A 2 means “this is a reason to not choose the degree”. 1 means “this is a strong reason to not choose this degree”. 4 and 5 are the mirror of 1 and 2.

Now you want to weight each of your points. For example, I might clump the first six together and say they are all indicators that I’ll be successful at the degree. The next three are indicators of how much the degree is worth to society. The last two are indicators of how good the degree is for an ethical career. Suppose I care equally about all three of those clumps. Then I should weight each of the second clump twice as heavily as each of the first six (so that the total weight of each clump is the same), and each of the third clump three times as heavily. So I add up the first six, plus two times each of the next three, plus three times each of the last two. That gives me a total score for each subject.
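The weighting above can be sketched in a few lines of Python. Everything here – the degree names and the individual scores – is invented for illustration; only the weights follow the scheme just described:

```python
# Weighted scoring sketch for comparing degree courses.
# Scores are the 1-5 answers to the 11 questions, in the order listed above:
# first 6 = success indicators, next 3 = value to society, last 2 = ethical career.
WEIGHTS = [1] * 6 + [2] * 3 + [3] * 2  # each clump's total weight is 6

def total_score(scores):
    """Weighted sum of the 11 per-question scores."""
    assert len(scores) == len(WEIGHTS)
    return sum(w * s for w, s in zip(WEIGHTS, scores))

# Hypothetical example: two courses with made-up scores.
degrees = {
    "Physics at Oxford": [4, 3, 4, 3, 4, 4, 5, 4, 3, 4, 3],
    "History at Bristol": [3, 5, 3, 4, 3, 4, 3, 2, 3, 3, 3],
}
for name, scores in degrees.items():
    print(name, total_score(scores))
```

A course scoring 3 (neutral) on every question totals 54, which gives you a baseline to compare any real course against.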

Then I would look at all those scores and see how they strike me. Odds are the scores do represent what’s been unfolding in your head as your best choice. But maybe they don’t. If they don’t, try to work out why. Is it, perhaps, because you actually value something more highly than it is represented in your formula? Is it because the formula only lets you go up to 5, and you think one of the reasons was so strong it should be a 6 or a 7? Whatever the issue, don’t necessarily just go with your table. The table is a tool for thinking things through carefully; it doesn’t always give the right answer.

Also remember that you should feel free to add in your own indicators if there’s something that applies to you that I’ve left out. And, if you found this useful, share it with friends who are making similar decisions.

Using these 10 steps you should be able to systematically work out which degrees you are likely to do well at. Doing well at a degree is an important step towards doing good after your degree. It is probably more important than doing a degree that seems to lead directly towards making a difference.

Originally posted on the 80,000 Hours blog.

Give a goat this Christmas?

It’s getting closer to Christmas, and we’re running out of time to get presents for friends and family. It can be hard to work out what presents people will actually enjoy. An increasingly popular option is to make a donation on behalf of someone else as a present. You might be interested in Giving What We Can’s gift cards that let you donate to the world’s most effective charities: the deadline for sending them is tomorrow!

When it comes to presents, it seems like a thoughtful gift is better than a gift certificate, and a gift certificate is better than a thoughtless gift. It’s a lot like that for charity gifts too. Your best options are likely to be donations towards the most effective charities. Next in line are going to be well-run cash transfer charities. And in last place will be the unasked-for gift of a goat.

Why “give a goat”?

What was originally a fairly uninspired set of options for charity gifts has since turned into a wide range of charitable choices for the holidays. Oxfam alone lets you select gifts from 58 different projects.

Christmas goat

But the old stand-by is sending farm animals to developing countries. This seems unlikely to be motivated by effectiveness. I googled “Give a _____ for christmas” for a bunch of farm animals and wrote down the estimated number of search results.

  • Goat: 22,200
  • Cow: 1,630
  • Chicken: 130
  • Pig: 49
  • Sheep: 4
  • (an) Ox: 0

Now, it’s possible that goats are just much much more valuable to rural people living in developing countries than any other sort of animal. Alternatively, it’s possible that the alliteration of “g” in “Give a goat” makes for catchier headlines. I don’t know for sure, but I have my suspicions.

There are also some decent reasons to think, as reported by GiveWell, that donations of livestock might be a particularly ineffective way to help the developing world. They seem to be quite a lot like cash transfers – where you simply give money to impoverished people – but with a bunch of extra question marks.

With a cash transfer – the model behind what is currently GiveWell’s second recommended charity, after AMF – you empower people to make a decision about what they want most and buy it. That is, cash transfers are the gift certificate of the charity world.

There are some real concerns about cash transfers. For example, they can lead to resentment and arguments when some people receive the transfer and others don’t, or they can have distorting effects on local economies. There are also reasons to think that some charities beat cash transfers.

But with animals you lose most of the benefit of cash-transfers – you’re just deciding for people what you think they’ll want.

You also avoid many of the benefits of the best non-cash transfer charities, like the ones that Giving What We Can’s holiday cards benefit. Buying someone a goat for Christmas is like getting someone chocolate. You didn’t ask them what they wanted, you don’t really know them too well personally, so you just got the first thing on the shelf that looked like a present. The difference is that chocolate won’t poop on your floor.

It turns out, you can get a thoughtful gift and donate to the most effective charities at the same time! Giving What We Can has it all. Their website lets you make a donation to one of their top recommended charities – AMF, SCI, or Deworm the World – and sends a charming holiday card to the recipient of your gift. If you order the cards today or tomorrow (by the 13th) you can guarantee that your loved ones will get their present by the 25th.

Originally posted on the 80,000 Hours Blog.

How to volunteer effectively

Lots of volunteering is definitely not actually about helping people. It usually doesn’t hurt, but neither does going for a walk. If what you really want is to volunteer your time to make the world a better place, what should you do?

It’s not all what it seems

You may think I’m being cynical when I say most volunteering is not that helpful. But many charities agree with me. FORGE, a charity that looks after refugees, shifted its focus away from volunteers to a model that should have helped people more effectively. It turned out that was a bad move for them, because their volunteers were also their chief fundraisers: their income plummeted. Many charities have similar experiences – volunteers are an important investment, but they don’t directly contribute to the charity’s good works.

It’s not even usually true that a volunteer is free for a charity. There are substantial costs to managing and engaging volunteers. That’s usually worth it for a charity, because they’re counting on you to donate later. But it’s not a great start.

That’s not even taking into account the peculiar practice of getting lots of people to put huge amounts of time and effort into baking cakes and/or buying things, then auctioning them off (often below cost) so the proceeds can be donated to charity.

Let’s do it right

What I’ve just described sounds awful, but of course it isn’t. Even if that’s all there was on the table, it would still make sense to volunteer, in the same way that it makes sense to engage in any fun activity. Besides, even if the volunteers I’m describing aren’t helping as much as they possibly could, they are still helping. But we can do so much more. What are some ways to be a high-impact volunteer?

Volunteer for cost-effective organisations

Look for organisations that work on high impact causes. And within these organisations, look for those that are transparent and have demonstrable, proven impact.

Even better, look for organisations that have detailed cost-effectiveness evidence and a clear need for more resources.

Volunteer for labor-constrained charities

Some charities actually have plenty of money (they’ll never admit it though!). What they really need are talented people who have a lot of knowledge and enthusiasm about the cause they’re working towards. Some charities could hire the first person who came to them with the skills they need, but they don’t know if that person exists. You might not be looking for a job, but if you’re that sort of person then you’ll be a very high impact volunteer.

Highly skilled volunteers are good

If you have rare and valuable skills, volunteering those skills to an organisation makes great sense. They would often not be able to afford your services otherwise, and would do less well as a result. For example, some small charities spend a substantial portion of their budgets hiring external accountants and lawyers. Particularly if you have really rare knowledge of a particular field, your time might literally be worth more than that of anyone else in the world.

Volunteer your enthusiasm

Nine times out of ten, what a charitable organisation really wants more than anything else is that you tell all your friends about them and try to persuade them that the charity is important. After all, you have something special that no-one else does: the ability to share new ideas with your friends and energise them. Rather than volunteering your time to an organisation, make a point of bringing that organisation up in conversation. You’ll do it much more good that way.

Don’t do replaceable things

The volunteer jobs that do the least good are the ones that are easily replaceable. Volunteering for unskilled labour jobs is often fairly low impact, particularly if those jobs are in the developing world. If you’re flying to Africa to build houses you should think again – odds are you’re taking someone else’s job over there, and then doing it badly. Think about donating the cost of the airfare to an effective organisation instead, and reducing your carbon dioxide emissions.

Volunteer your money

Particularly if you have a high-earning job, there’s an element of silliness about volunteering in a low-skilled role. An extra hour of your work could hire several people to do the low-skilled work, and there’s a good chance they’d all be better at it than you. It can help the cause you care about much more to just donate.

If you’re looking for volunteering roles that fit these rules of thumb there’s some help. The main page on volunteering has a lot of these thoughts and some others, with some specific suggestions. A good shortcut is that organisations that care a lot about their effectiveness (like us) will not take you on as a volunteer unless they think that you’re actually going to make a difference.

Originally posted on the 80,000 Hours blog.

How much malaria is biodiversity worth?

Every day, almost everything we do is about prioritisation. When I pick BLT or egg-mayo, I’m prioritising. When a small business owner decides whether to hire a new worker or install a new machine, they’re prioritising. When we decide to increase the cost of energy in order to reduce future climate change, we’re prioritising.

In fact, one of the most dramatic conflicts of the twentieth century was over how best to prioritise. Capitalists advocated a spread-out strategy, where individual choices combine to guide the priorities of entire societies. Socialists believed that a central organisation is needed to set priorities for societies.

Given how important prioritisation is, and how much people seem to care about doing it right, it’s startling that relatively little research is done into how best to prioritise the most important issues facing society. People do research on, say, the best ways to distribute humanitarian aid. But it wasn’t until 2004 that anyone thought to compare the best opportunities in humanitarian aid with the best opportunities in climate change adaptation technology or in reducing international trade barriers.

There are powerful reasons for avoiding the problem.

  • It’s hard to know where to start. We’d have to, for example, find some way to compare reductions in human rights abuses with reductions in child deaths. It seems unlikely we’ll ever find a rigorous way to do that.
  • It’s steeped in value judgements. No matter which way you go, you’ll have to make some ideologically loaded decisions. How much do you value unborn children? Animals? Artistic expression? Happiness? Some people will disagree with you no matter which attitude you take.
  • It is hugely complex to analyse. How much good can you do by delivering humanitarian aid to conflict zones? Well, are you going to take into account the risk that delivering aid will prolong the conflict? You really have to. But then perhaps a long conflict might be just the thing to force a whole region to take international cooperation and governance issues seriously. You can’t follow the causal chain all the way down the rabbit hole.

But none of those are good reasons. The simple truth is that we have no choice: we have to prioritise our work. Whenever any person, group or government makes any decision about how to spend or what to work on they are implicitly making these comparisons.

And they’re doing it badly, carelessly, and unconsciously.

There are some groups working to tackle the challenge of global prioritisation. Organisations like the Disease Control Priorities Project try to engage with a specific part of the challenge. The Copenhagen Consensus has engaged lots of specialists in a broad range of fields to present the case for many types of opportunity, and has worked on comparing the best of them.

No one, though, is addressing the problem that matters most to the members of 80,000 Hours. No one, that is, until now. Over the next few months, 80,000 Hours will be putting together a rough first pass at answering the question:
“Suppose I’m willing to put the next ten years of my work behind any cause there is. Which one will make the biggest difference?”

It’s not going to be perfect, or even close. It will take advantage of modern research, with all the disadvantages and question marks that entails. It will depend on a lot of assumptions, but where possible we’ll make those assumptions clear and let you decide.

The first step, and one of the most important, is the list of challenges. Here’s where you come in. I’ve got a list I’ll post in a bit, but I don’t want to bias you. What do you think the most important challenges facing humanity are? Post suggestions below.

Originally posted on the 80,000 Hours blog. The thoughts behind this post underlie the creation of the Global Priorities Project – the think tank I run.

The replaceability effect and earning to give

TL;DR: Although it is harder, the relevant moral comparison for an action is against what would have happened (in expectation) had you chosen otherwise. This has consequences especially when evaluating the trade-off of taking a job in an unethical industry in order to donate money to good causes – earning to give.

Part 1: The replaceability effect – working in unethical industries

High earning careers are often perceived as unethical careers. It’s not just that people think earning lots of money is bad, it’s also that a lot of the careers that make you really rich involve things that also seem immoral.

The example of our times is investment banking. It’s hard to be precise about why investment banking is bad, and I’m pretty sure that most people who think bankers do harm don’t really have much of an idea of what bankers do. But it seems plausible that irresponsible risk-taking in some parts of the financial system has had a really negative effect on millions of lives.

Let’s suppose, for sake of argument, that banking is pretty bad. That raises the question: is it bad to become a banker? Banking isn’t just a relevant example because it’s topical. It’s also one of the highest earning salaried careers available today. If I’m picking a career with the intention of giving away a large portion of what I earn, can I pick a job that causes harm because I think the money I give away will do more good?

One interesting thing you can do is to work out the total harm of the investment banking industry and compare it against the good you can do with your donations directly.

But that’s only a small part of the issue. This article will look at something called the replaceability effect. It’s the idea that, often, if you don’t take a job, someone else will take it. For some types of jobs, this is a very safe assumption, and it makes the harm you do by taking a job in an unethical industry much smaller than you might first guess. This effect corresponds to the concept economists call marginal elasticity.

The second part of this article will address a potential problem with this line of reasoning: collective action. If everyone acts assuming that someone else will do the dirty work if they don’t, you get collective action problems where everyone would be happier if they could all agree not to do the job. That part involves a bit of game theory. The third part asks whether we should be making these sorts of arguments at all. Perhaps there are some things you just shouldn’t be party to, no matter what you could bring about if you were.

Suppose I want to be an investment banker. I’ll be earning stupid amounts of money pretty quickly, so let’s say I’m happy to donate 50% of what I earn to the most effective causes. Even then, I’ll still be making more than the median household income in my early 20s. But I’m worried about the harm I’m doing.

The first thing to remember is that what’s most important is not what I do directly, but is what my choices bring about. In Ben’s excellent article, he points out that you’d be crazy if you wanted to push paramedics out of the way to treat your injured parents. In that case, it’s pretty clear that we don’t actually think you should care about what you do yourself, but rather about the difference between what happens after your choice and what would have happened otherwise.

Now, if I decide not to become a banker, what difference does that make? Someone else gets the job. Top investment banking jobs are heavily sought after; there’ll be plenty of people willing to take your place. So what are the differences between me and them?

I care about the ethics of my choices. If we start with the assumption that lots of bankers aren’t particularly ethically motivated, it seems extremely likely that I care more about ethics than the person who would replace me. That means that if a choice ever comes up which I am fairly free to decide on, I’m more likely to make the ethical choice. (If there are no choices where I’m free to decide, then my being the banker doesn’t do any harm anyhow.)

I’m giving half away. If one of the harms of banking is that it perpetuates wealth inequality, then this is just good on its own. But probably more importantly, the charities I donate to will be able to save literally tens of thousands of lives over the course of my career. The person who replaces me would probably donate fairly little. The top 20% of spenders, a bracket almost all bankers fall into, donate on average about 0.4% of their spending. That’s a lot less than 50% of income – especially since, if you save, 50% of income ends up being even more than 50% of spending. They’re also probably not particularly interested in finding out which charities are the best ones. That matters a lot, because there are huge differences in the cost-effectiveness of charities. It’s hard to say how huge, but a very conservative estimate puts the difference between the best charities and the median charities at a factor of at least 100, and it could be many times more than that.
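To make the rough arithmetic concrete, here is a back-of-the-envelope sketch using the figures above. The salary is a made-up placeholder; the 50%, 0.4%, and 100x figures are the estimates quoted in the text:

```python
# Illustrative comparison: good done via donations by an ethically
# motivated banker vs. the typical person who would replace them.
# The salary figure is hypothetical; the rest are the article's estimates.
salary = 100_000  # hypothetical annual salary

# Me: donate 50% of income to a top charity, taken (very conservatively)
# to be 100x as cost-effective as the median charity.
my_good = 0.50 * salary * 100

# Replacement: donates ~0.4% of spending (treated here as roughly equal
# to income) to a charity of median cost-effectiveness.
their_good = 0.004 * salary * 1

print(my_good / their_good)  # times more good done per year, on this model
```

On these assumptions the gap is several orders of magnitude – which is the point of the comparison, not the exact number.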

I’m better at it. This is a slightly dodgy assumption. The selection process is competitive enough at the top end that the choice between top candidates is probably pretty random. It’s also not clear if this is a good or a bad thing. We might think that, even though banking could be harmful now, it could be good if only it changed a few things – like risk-taking incentives. If we think that, then having more skilled bankers might be better. Otherwise, we might prefer to starve the industry of skilled people. But in either case, because the differences in ability are so slight, the effect is pretty marginal.

On top of that, there’s a long term point to be made. If good people stay out of banking because it’s harmful, banking isn’t going to go away, but it’s not likely to get better any time soon. Getting people who care about ethics into the industry might be the only way to make the industry ethical.

There’s a bunch of additional points that we might want to bring out. One is that I also need to think about what I would do if I didn’t take the job. If there’s another job that does about as much good and no harm at all, then I should take that one. But that’s pretty obvious.

Another nuance is that we have to think about the chain of events the other guy starts off when he moves into a different job. He pushes someone else out of a job and the whole thing spirals down the chain. But, of course, if I were to take a different job I’d be denying someone else a job. So, it’s a bit more complicated than I’ve made out. But because banking is so competitive all of the effects here are pretty small.

One concern a lot of people have is this: when I take a banking job I’m sending a signal to people around me that I think the banking industry is good. I find it hard to be persuaded by that. First, although I’m sure we all have very high opinions of ourselves, realistically no-one else will be particularly moved by your job choice. You’d have to be a particularly messianic figure for your signal to have that kind of effect. Second, you don’t need to be a silent victim of the first impressions other people take from your job choice. If your job choice sends a signal, so be it. You can send other signals too – like joining 80,000 Hours and telling your friends why you did it.

The basic replaceability argument is only a rough sketch of the whole picture. It shows us that when we look at the consequences of our actions, and inspect the simple choice of whether or not to take a job in a harmful industry, the harm of our taking the job is somewhat less than it first appears. Obviously, though, there is still a harm. So you shouldn’t take the job unless you think you can do something pretty good with it. The argument, so far, has also only looked at the effects of the actual choice. It hasn’t looked at the effects of the sorts of attitudes involved in thinking about the choice this way. That’s what the next article is about. It also hasn’t answered the basic question, which we’ll examine in the third article, about whether weighing up consequences like this is the right way to go about it at all.

Part 2: Collective action

In the first part I looked at how sometimes the best option is to take a high-earning job, even in an industry one thinks is harmful, in order to donate more to charity. There were a lot of caveats. The job has to earn enough more than you could have made otherwise to make up for the marginal harm you do by taking it. But in a competitive job market for a mainstream job, that marginal harm is often much smaller than the total harm caused by the job.

At this point, one might raise a second objection – this is a classic collective action problem in which the ‘best option’ for an individual is much worse than the result of longer term co-ordination. The Prisoner’s Dilemma and The Tragedy of the Commons are classic examples.

Here’s how that might go. Let’s consider the pool of young effective altruist (EA) graduates entering the job market and considering professional philanthropy. Suppose that their highest earning job opportunity is in some industry which they all agree is harmful. Each young graduate apparently believes both of the following:

  1. If I enter the harmful industry, the harm I cause (due to the reasoning about replaceability in Part 1) will be much smaller than the good I can do through my donations.
  2. The industry is harmful, so it would be better if none of us worked in the industry.

Since each young graduate believes (1), they all choose to take the job in the harmful industry and pursue professional philanthropy. But each young graduate also believes that this outcome – where they all work in the harmful industry – is not the ideal outcome. By thinking about how to individually make the most difference, we seem to have ended up in a situation that everyone agrees is not best! This problem extends beyond professional philanthropy – it could apply to all sorts of reasoning we do at 80,000 Hours. What has gone wrong?

I think this story is only convincing because we treat all the young graduates as making a simultaneous decision about whether to enter the harmful industry. In effect, we’re ignoring the possibility for communication between the EA job seekers.

In reality, what would happen? Each young EA would take account of how many other EAs had entered or were planning to enter the industry already. Over a period of years, as more and more EAs become professional philanthropists, it would become less and less good to enter the industry. This is because as the proportion of EAs in the industry increases, the average amount donated by each person in the industry would rise, so each new EA would make less difference. Moreover, the easy opportunities to make the industry less harmful would be taken by other EAs, so the harm done by entering the industry would get larger and larger. Eventually, it would no longer be best to pursue professional philanthropy in that industry. If each EA does this calculation, then over time we’ll move towards having just the right proportion of EAs working in the harmful industry.
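This dynamic can be sketched as a toy model. The functional forms below are invented purely for illustration; the point is just that when the counterfactual gain shrinks and the marginal harm grows, entry stops by itself at an equilibrium:

```python
# Toy model of EAs deciding whether to enter a harmful but high-earning
# industry. All numbers and functional forms are invented for illustration.

def net_value_of_entering(n_eas_already_in):
    """Net good of one more EA entering, given how many are already there."""
    # The counterfactual donation gain shrinks: your replacement is
    # increasingly likely to be another donating EA.
    donation_gain = 100 / (1 + n_eas_already_in)
    # The marginal harm grows: earlier EA entrants have already taken
    # the easy opportunities to make the industry less harmful.
    marginal_harm = 2 + 0.5 * n_eas_already_in
    return donation_gain - marginal_harm

# EAs keep entering only while entry is net positive.
n = 0
while net_value_of_entering(n) > 0:
    n += 1
print(n)  # the equilibrium number of EAs in the industry
```

With coordination – say, a network keeping track of who has already entered – the community can reach that equilibrium faster and without overshooting.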

This process could be accelerated by coordination mechanisms like the 80,000 Hours network. 80,000 Hours can do this work on behalf of each individual EA by keeping track of how many people are going into the harmful industries.

So, the truth in the objection is that you need to pay attention to what other EAs are doing. But it doesn’t mean that we should always avoid working in harmful industries, or give up thinking about how to individually make the most difference.

Part 3: Supporting immoral industries

When I tell people that they might want to consider professional philanthropy as a career choice, they react in a lot of different ways. Some people raise an eyebrow. “Seb,” they say as if explaining something very obvious, “if everybody quit their jobs and took a high earning career to give money to charities, then there wouldn’t be anybody to give the money to!”

To put the problem a bit more sympathetically, “80,000 Hours is trying to convince people to do Earning to Give. So if you succeeded, by getting everyone to do it, the world would be worse off.”

But this misses the point entirely. First, so we’re all clear, 80,000 Hours doesn’t necessarily recommend Earning to Give as the best career path. It all depends on who you are and what your strengths are. Even many effective altruists (EAs) who are suited to a high-earning career can do better elsewhere. 80,000 Hours is just trying to put Earning to Give on the map of possible ethical career options.

But, more importantly, we only recommend that anyone does Earning to Give because we look at the way the world is and we reckon it makes a positive difference. If the world became different, and lots of people naturally decided to do Earning to Give, we’d recommend something else.

80,000 Hours is about getting people to think seriously about the difference their career choices make. That means you have to react to evidence. There is no risk that a world where everyone is a member of 80,000 Hours would be one where everyone does Earning to Give.

This ties into a debate with a long and proud tradition in the field of ethics. A certain brand of ethicist believes that you ought to do only things that you’d be satisfied to have everyone do. For example, you shouldn’t lie, because if everyone lied then all communication would break down. (For these people, you shouldn’t lie even when there are compelling reasons to do so. It’s wrong to lie to a murderer even if it would save lives.)

This sort of theory has a lot of problems, and philosophers can spend a lot of time patching it against little objections. But it’s important to realise that universalisability usually depends on a particular way of phrasing something. For example, “If everyone did Earning to Give, the world would be bad” might be true, but replace “did Earning to Give” with “became an effective altruist”, add that many EAs would do Earning to Give, and it doesn’t seem true at all. (Also note that many things that seem unobjectionable are not universalisable. I ought not give Sally a lollipop, because it wouldn’t be good if everyone gave Sally a lollipop. It also isn’t clear where we get our assessment of ‘good’ from when we work out whether things are universalisable.)

What we can agree on is fairly unobjectionable. Keep your options open and don’t rule out careers just because someone else told you they were unethical. The impact your life has on the world around you is complicated and depends on lots of factors. You have to sit down and work out whether your decision to take a job makes the world worse or better. Sometimes, direct harm will be outweighed by other benefits. Sometimes, direct good will be outweighed by other harms.

Originally published on the 80,000 Hours blog.