Advances in artificial intelligence (AI) are a double-edged sword. On the
one hand, they may increase economic growth as AI augments our ability to
innovate. On the other hand, many experts worry that these advances entail
existential risk: creating a superintelligence misaligned with human values
could lead to catastrophic outcomes, possibly even human extinction. This
paper considers the optimal use of AI technology in the presence of these
opportunities and risks. Under what conditions should we continue the rapid
progress of AI, and under what conditions should we stop?