Why we prioritize the long-term future

At Raising for Effective Giving, we’re driven to fund the best organizations working on the most pressing problems. But what exactly is the world’s most pressing problem? In this post, we want to outline why we prioritize interventions focused on affecting the long-term future, particularly those trying to avert worst-case outcomes that would cause vast amounts of suffering.

We should think about our altruism in terms of expected value.

What sets us apart from other charities is that when looking at different ways of doing good, we explicitly use expected-value reasoning and calculations. It’s the same mindset behind determining your Return on Investment or expected winnings at the poker table. This is a crucial decision-making tool if you want to maximize the impact of your donation.

We also believe that we should be risk-neutral when it comes to charity and altruism. With money, it makes sense to value your first $10,000 more than an additional $10,000: at that point, you will already have purchased the most important things. Saving lives, curing disease, or improving well-being has no such diminishing returns; the 100th vaccination, for example, is as valuable as the first. That means so-called “charity moonshots” could turn out to be extremely cost-effective. These are interventions that have only a small likelihood of success, but would be true game changers if they succeeded. Think of such a strategy as somewhat analogous to venture capital investing.
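To make this concrete, here is a minimal sketch of such a comparison in Python. The probabilities and payoffs are hypothetical illustrations, not REG’s actual estimates:

```python
def expected_value(probability_of_success: float, payoff_if_success: float) -> float:
    """Expected value of an intervention with a single success/failure outcome."""
    return probability_of_success * payoff_if_success

# A "safe" intervention: near-certain to help 1,000 people.
safe = expected_value(0.95, 1_000)          # ~950 people helped in expectation

# A "moonshot": only a 1% chance of working, but it would help 1,000,000 people.
moonshot = expected_value(0.01, 1_000_000)  # ~10,000 people helped in expectation

# Under risk neutrality, the moonshot is the better bet (~10,000 vs ~950),
# even though it will most likely accomplish nothing at all.
```

A risk-averse donor might still prefer the safe option; the point of risk neutrality is that, for altruistic outcomes without diminishing returns, only the expectation matters.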

The future holds astronomical stakes.

If we take expected-value reasoning seriously, the future takes on incredible significance from an altruistic perspective. Barring an extinction event, life on Earth could continue for many more generations before this planet becomes truly uninhabitable; current estimates put that point around the 1-billion-year mark. Given a current human population of ~10 billion (rounding up) and a lifespan of ~100 years (rounding up), our descendants in this scenario would number in the vicinity of 10^17 (one hundred quadrillion).

However, this might be overly optimistic. If we assume there is an unavoidable extinction risk of around 0.1% every year, we’d have about 4,600 more years before the cumulative risk of extinction reached 99%. In that scenario, with the same assumptions as above, we would expect another 105,000,000,000 (105 billion) human descendants, which is about as many people as have ever lived so far—still a huge number.
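A minimal sketch of the arithmetic behind these estimates, using the post’s rounded assumptions of 10 billion people alive at any time and 100-year lifespans (this recovers the same orders of magnitude, ~10^17 and ~10^11, as the figures above):

```python
import math

POPULATION = 10e9        # ~10 billion people alive at any time (rounded up)
LIFESPAN = 100           # ~100 years per life (rounded up)
BIRTHS_PER_YEAR = POPULATION / LIFESPAN   # ~10^8 new lives per year

# Optimistic scenario: Earth stays habitable for ~1 billion more years.
optimistic_descendants = BIRTHS_PER_YEAR * 1e9   # ~10^17 lives

# Pessimistic scenario: an unavoidable 0.1% extinction risk every year.
annual_risk = 0.001
survival = 1 - annual_risk

# Years until the cumulative extinction probability reaches 99%:
#   survival ** n = 0.01   =>   n = log(0.01) / log(survival)
years_to_99_percent = math.log(0.01) / math.log(survival)   # ~4,600 years

# Expected future lives = births/year * expected years of survival, where the
# expected survival time of a geometric process is survival / annual_risk.
expected_descendants = BIRTHS_PER_YEAR * survival / annual_risk   # ~10^11 lives
```

The geometric-survival calculation yields roughly 100 billion expected descendants, matching the ~105 billion figure above to within rounding.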

This does not even take into account the nonhuman animals living on factory farms or in the wild, potential digital lifeforms, or the possibility of human space colonization. Given rapidly advancing technology, such developments are certainly not out of the question in the long run; each would add a large multiplier to the calculations above, increasing the significance of the future even more.

It’s important to realize that these future lives may turn out to be incredibly positive, or they might be filled with misery and suffering. So if we were able to change the trajectory of human civilization even to a small extent, this would have huge ripple effects through all the generations yet to come. Therefore, we believe interventions targeted at influencing the long-term future are potentially much more cost-effective than those focused on the current generation.

These are the potential payoffs of influencing the long-term future. However, in order to get the full picture, we also need to look at the likelihood of such improvements materializing in the first place. So how plausible is it that we’re able to significantly alter what the future holds?

The future is in our hands.

If we look at our own history, it seems that there are indeed ways of having a lasting effect on the trajectory of human civilization—for good or ill.

Technology

New inventions have repeatedly revolutionized our way of living. The agricultural revolution likely led to the first cities and the development of private property; the industrial revolution allowed a significant fraction of humans to escape the Malthusian trap for the first time; and the digital revolution enabled people all over the world to communicate and share information at the speed of light.

However, these developments were not all positive. It has been argued that hunter-gatherers were actually happier than their agricultural descendants, and billions of nonhuman animals have suffered as a result of factory farming. In the 20th century, we even unlocked the possibility of causing our own extinction by developing nuclear technology.

It seems very plausible to us that the pace of technological change has accelerated over time, which means there are likely many more such revolutions ahead of us. We currently believe transformative artificial intelligence to be the most likely candidate for the next one. So if we bring ourselves into a position to influence how these changes play out, we can, to some extent, shape how the future unfolds.

Institutions

Society’s formal structures often have a huge influence over human and nonhuman lives for generations. Once set in place, they are slow to change and hold large sway over our behavior because they are usually backed by strong incentives. The US Constitution is a prime example of a sturdy institution that has shaped human interactions for centuries and was likely a significant positive game changer. However, like technology, institutions also hold the potential for grave harm: the institution of slavery was responsible for the pain and misery of generations, and dictatorships still cause suffering and death on a large scale.

Thus, if we were able to change institutions for the better, the benefits could be massive. For example, we might want to strengthen systems of global cooperation or improve institutional decision-making. Unfortunately, we’re still very uncertain how best to achieve this.

Values

If institutions are about changing incentives, then values are about changing what goals people pursue of their own accord. Values resemble institutions in that they are quite inert while exerting a strong influence over human behavior. Again, it’s possible to conceive of values that are conducive to flourishing lives, as well as values that are antithetical to that very goal. Societal values certainly seem to have improved over the past centuries; just think of the emancipation of women and people of color. Such change, however, is not inevitable, but rather the result of concerted action.

We could push further in this direction. If we were able to increase concern for nonhuman animals or altruism in general, this could improve the lives of many future generations.

Given such powerful levers, we believe there are targeted interventions at our disposal that can have a lasting positive effect on the long-term future. And given how high the stakes are, these interventions don’t need to change our trajectory drastically in order to have a very high expected value; they need only have a good shot at making a significant difference. This is what we’re aiming for.

That’s also why we recently launched the REG Fund. While it’s always been possible to have us handle the allocation of your donations, the REG Fund makes this process more transparent through a clearer mission and grantmaking process. Its stated goal is to “identify grants that decrease the likelihood of worst-case outcomes that cause vast amounts of suffering as a result of advanced technologies”. We can imagine nothing more important at this critical period in human civilization.

Addendum September 2020: The REG Fund is now housed at the Center on Long-Term Risk.

Appendix 1: Other Assumptions

While we have good reasons to believe this overall account, it rests on some implicit assumptions that are critical to its validity.

We should not discount future lives.

Economists often discount the value of future benefits. You’d probably prefer to have $100 now rather than $110 in ten years—and with good reason. However, most philosophers think that this kind of discounting does not apply to future lives, and we agree with them. It strikes us as implausible that our date of birth holds moral significance, or that humans 1,000 years ago ought to have disregarded our fate if they had been able to influence it.
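The trade-off in that example can be made explicit with a standard present-value calculation. The 5% annual discount rate here is a hypothetical choice for illustration, not a claim about actual market rates:

```python
def present_value(amount: float, annual_rate: float, years: int) -> float:
    """Discount a future cash amount back to its value today."""
    return amount / (1 + annual_rate) ** years

# At a hypothetical 5% annual discount rate, $110 ten years from now is worth
# only about $67.53 today -- noticeably less than $100 in hand right now.
pv = present_value(110, 0.05, 10)
```

The argument above is that whatever justifies discounting money this way (investment returns, inflation, uncertainty about payment) does not carry over to the moral weight of future lives.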

The person-affecting view is likely wrong.

There is a view in population ethics that implies that only existing lives (or lives that will necessarily exist) have moral weight. We believe this position, known as the person-affecting view, to be implausible. Ask yourself: would you rather improve the life of one person now, or improve the lives of 1,000 people who are likely—but not certain—to exist one hundred years from now? (Also see the nonidentity problem.)

Appendix 2: Why we will continue to recommend charities trying to alleviate global poverty and improve animal welfare

We believe that the arguments in favor of prioritizing charities focused on the long-term future are sound. However, we also realize that reasonable people disagree with us—either because they (partly) disagree with expected value reasoning, hold different ethical positions from us, or have a sufficiently different model of the world. We want to accommodate this disagreement by continuing to recommend charities in other cause areas, namely global poverty and animal welfare. For these we will rely to a large extent on the expertise and judgment of GiveWell and Animal Charity Evaluators respectively. Both also offer the option to contribute to a fund.