Long-term strategies for ending existential risk from fast takeoff
Author's note: I (Daniel Dewey) wrote this paper in 2014. I have not updated the contents for some time, and my views on some points have changed.
The tentative views and guesses expressed in this paper don't reflect the views or guesses of the Future of Humanity Institute as a whole, the Open Philanthropy Project, or any other employer or research group I've worked with.
If you'd like to read the full paper, please email me to ask for a copy!
Summary
In this paper, I propose four possible long-term strategies for mitigating existential risk from superintelligent AI systems. These strategies are specific to superintelligent AI, and would not be appropriate for nearer-term risks from AI.
1. International coordination
Around the time that risks from superintelligent AI could begin to arise, governments could coordinate to ensure that any risky aspects of AI development are conducted safely. This could involve the creation of a joint international project.
2. Sovereign AI
A private or government-run project could create an autonomous AI system that is able to prevent further risks from superintelligent AI. Such a system could proactively pursue humane values, or could intervene only minimally to prevent catastrophic harms.
3. AI-empowered project
A private or government-run project could use some capabilities of non-autonomous AI systems to prevent further risks from superintelligent AI. This approach would plausibly run less risk of catastrophic technical failure than a sovereign AI project.
4. Other decisive technological advantage
Other technologies might be developed that governments or private projects could use to prevent risks from superintelligent AI.