When considering which problems are largest in scale, most important, and most neglected, it’s crucial to think not just of people alive today, but also of future generations. The lives of future generations matter just as much as ours, yet those generations have no opportunity to influence the world into which they are born. Moreover, the number of humans who will exist in the future will likely far exceed the number who have existed to date. This cause area focuses on global catastrophic risks associated with emerging technologies that have the potential to cause suffering on an astronomical scale to future generations.

It is extremely difficult to identify promising interventions in this cause area because it is not only highly complex, and hence difficult to map, but also highly neglected compared to the previously mentioned cause areas. The best we can do now is conduct research and raise awareness to ensure we minimise the likelihood of global catastrophic risks becoming a reality.

We recommend the Machine Intelligence Research Institute and our own fund. These recommendations are made with a focus on scenarios worse than extinction. If you are primarily concerned with extinction scenarios, we recommend the Long-term Future Fund run by the Centre for Effective Altruism instead.

Machine Intelligence Research Institute

MIRI conducts foundational mathematical research to ensure that smarter-than-human artificial intelligence has a positive impact. As more and more human cognitive feats come to be reproduced by artificially intelligent systems, we can expect to encounter a number of unprecedented risks and benefits. Questions about how to navigate these remain under-explored, and they will only become more pressing as progress in computer science allows AI systems to act with increasing autonomy and efficiency. Learn more.

Support this charity

Our fund

The fund’s mission is to support research and policy efforts to prevent the worst technological risks facing our civilisation. The potentially transformative nature of artificial intelligence poses a particular challenge that we want to address. We want to avoid a situation similar to the advent of nuclear weapons, in which careful reflection on the serious implications of the technology took a back seat during the wartime arms race. As our technological power grows, future inventions may cause harm on an even larger scale—unless we act early and deliberately.

Priority areas: decision theory and bargaining, specific AI alignment approaches, fail-safe architectures, macrostrategy research, AI governance, and social science research on conflict and moral circle expansion.

Support our fund
Sign up to receive our guide on effective giving or learn more about our donation advice service. We might also contact you directly regarding our campaigns.