
To expand on the previous entry, I would like to talk a bit more about the concept of existential risk as a thought experiment. Existential risk is, by definition, risk that would either annihilate human life or drastically curtail its potential. Existential risk in this discussion is different from “existential threats” in American national security dialogue, which imply a threat to American national survival. The analytical distinction, however, may not hold completely, because some of the existential risks discussed may, depending on the circumstances, constitute existential threats to the United States’ survival and way of life.

The academic field of existential risk today does not, for the most part, connect with national security policy as we understand it. The notion of “national security” is a modern one, even if its elements (prevention of attacks against the homeland, management of threats abroad, mobilization of the state’s resources for protection) are in fact very old. The term “human security” has been proposed as a substitute for what has been viewed as an unnecessarily state-centric term, but some analysts have argued that human security’s conceptual vagueness makes it difficult to pin down, much less operationalize. As someone who writes primarily about defense policy, human security is not a major interest of mine. However, if we want to talk about existential risk as it actually exists today, it is appropriate to note that human security has more relevance to the discussion than national security. Existential risk refers to broader threats that endanger humanity as a whole–which by definition includes the United States as well. Hence the following discussion will purposefully blur national and human security.

Perhaps the best introduction to the subject begins with the example of nuclear threats. Nikita Khrushchev is (falsely) quoted as saying that the horrors of nuclear war would be so great that even the survivors would “envy the dead.” This, from the beginning, implies several gradations of risk rather than simple extinction, as well as a heavily normative dimension in conceptualizing risk. To “envy the dead” is a judgment arrived at by concluding that postwar life would be so horrible that death would have been preferable to survival. All discussions of existential risk begin with normative assumptions about the value of life.

Oxford’s Nick Bostrom is one of the foremost analysts of existential risk, and his taxonomy is useful for heuristic purposes. Bostrom’s criteria for analysis are scope, severity, and probability. He makes several other major assumptions: since even global catastrophes such as wars, pandemics, and economic crashes have not diminished human potential for prosperity, an existential risk by definition is one that harms the long-term future. Bostrom also assumes that future human life at least has the possibility of being better in unpredictable ways, much as globalization (for all its downsides) lifted countless millions out of poverty and helped create a global middle class. Since the Earth is potentially habitable for another billion years before the sun overheats it (and Bostrom cannot rule out the possibility that by then humanity may have transcended the problem of earth-dependence), existential risk deals with an extremely long-term time frame.

From these criteria Bostrom separates existential risks into several categories of global catastrophe. Extinction needs little explanation, but others, like Khrushchev’s “envy the dead” comment, are normative evaluations about future human potential. The first, permanent stagnation, is a scenario in which humanity survives but never reaches technological maturity. At first blush this might seem to be small potatoes, but it could have enormous consequences: only through greater advances in technology did we overcome the Malthusian trap. Bostrom’s three scenarios of permanent stagnation include unrecovered collapse (a total loss of current technological and economic capabilities), plateauing (a stunting of human potential), and recurring cycles of collapse and recovery. For a visual of the stagnation scenario, imagine Snake Plissken entering code 666 in Escape from LA.

The second scenario, flawed realization, involves reaching technological maturity in a manner so dismally and irremediably flawed that humanity can realize only a fraction of the potential value of technological progress. Such flawed realizations include completing technology that is never put to good use, or completing it in a manner that is ultimately unsustainable or unnecessarily wasteful. Finally, Bostrom also posits the risk of attaining technological maturity but subsequently being unable to manage the existential risks resulting from those technologies.

If you want to see more of Bostrom’s existential risk project, as well as his analysis of specific scenarios in every category listed, his Oxford Future of Humanity Institute paper on existential risk scenarios and his explanation of risk analysis are good places to begin. The reason my lengthy recitation of Bostrom’s taxonomy is a thought experiment is as follows: mainstream security policy discussions in DC are ostensibly concerned with preventing existential risk, but have little to say about these kinds of considerations. Even “lesser” (i.e. non-existential but nonetheless extremely harmful) scenarios like the chance of asteroids inflicting large-scale damage barely merit discussion, much less significant analytical or practical investment. As I blogged a while ago:

How many FP specialists flip through the pages of The Astrophysical Journal or even evince interest in the subject? It’s not like we’ve seen a COIN-like debate between champions of a kinetic interceptor-based asteroid deflection approach vs. those who think we should use solar sails. There is no Gian P. Gentile figure arguing that NASA’s thinking about asteroid defense is a “strategy of tactics” or that too much focus on Mars exploration has made NASA forget about the fundamentals of asteroid defense. And this is not an exception that proves the rule. There are millions of subtle and overt social and natural forces that shape our lives that even the most polymathic of us could sincerely care less about.

None of this is to argue that national security policy, or even collective security as it exists today, should be radically transformed. The average administration has its hands full making sure its security policy stays valid for one year, much less over the time frames that Bostrom analyzes. Moreover, there is something to be said for the likelihood that solutions to some existential problems will emerge from bottom-up collaboration rather than central planning. The Industrial Revolution, which enabled us to move beyond the Malthusian trap, was not a program of any one government or some kind of 19th century United Nations Council on Overcoming Malthusian Traps. It resulted from industrial capitalism, something even Karl Marx and his pal Engels saw as an evolutionary step in human history.

But when we discuss existential threats and risks outside of a Cold War context, Beltway rhetoric is completely out of sync with what analysts such as Bostrom ponder at places like the Future of Humanity Institute and the Long Now Foundation. Should it be in sync? That’s a question bigger than any one blog post can answer. But there is one other purpose to this thought experiment. An alternative view of security, outside the normal frame of defense discussion, should highlight the absurdity of claiming that the Internet and global insurgency are worse than Soviet nuclear-armed bombers, submarines, and missiles. Calculating risk depends on quantitative, qualitative, and normative metrics that are simply missing from discussions of existential threats and risk today. Bostrom has laid out his metrics. Those claiming the world is more dangerous than it was 20 years ago should explain theirs.