Globally Catastrophic and Existential Risks, Part I

(Photo by NeONBRAND on Unsplash)

Foreword

Nick Bostrom, a philosophy professor at Oxford University, is pioneering research on existential risks. There is limited literature on this topic, and his work is certainly the most comprehensive. This post will therefore focus primarily on his paper Existential Risk Prevention as Global Priority, published in 2013.

How do we classify risks?

(Figure: Bostrom’s chart of qualitative risk categories, classified by scope and severity)

Nick Bostrom classifies risks according to their scope and severity. As the chart above shows, the scope of a risk can range from personal to cosmic (yikes), and its severity from imperceptible to hellish (even more yikes). His research focuses on the risks with the greatest potential for negative impact, and he coined two major categories for them: global catastrophic risks and existential risks.

For a risk to be classified as globally catastrophic, it must be at least global in scale and more than imperceptible in severity. GCRs may kill the vast majority of life on Earth, but humanity would likely be able to recover.

For a global catastrophic risk to qualify as an existential risk, it must be at least terminal in severity and trans-generational in scope. These risks have the potential to either destroy humanity entirely or make it impossible for civilisation to recover.
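If it helps to see that rule spelled out, here’s a minimal sketch of the scope/severity grid in code. The category names and their ordering are just my shorthand for the wording above, not anything taken from Bostrom’s paper.

    # Minimal sketch of the scope/severity classification described above.
    # The orderings below follow the wording of this post, not Bostrom's paper.

    SCOPES = ["personal", "local", "global", "trans-generational", "cosmic"]
    SEVERITIES = ["imperceptible", "endurable", "terminal", "hellish"]

    def classify(scope: str, severity: str) -> str:
        scope_rank = SCOPES.index(scope)
        severity_rank = SEVERITIES.index(severity)
        # Existential risk: at least trans-generational in scope and terminal in severity.
        if scope_rank >= SCOPES.index("trans-generational") and severity_rank >= SEVERITIES.index("terminal"):
            return "existential risk"
        # Globally catastrophic: at least global in scope, more than imperceptible in severity.
        if scope_rank >= SCOPES.index("global") and severity_rank > SEVERITIES.index("imperceptible"):
            return "global catastrophic risk"
        return "other risk"

    print(classify("global", "terminal"))              # global catastrophic risk
    print(classify("trans-generational", "terminal"))  # existential risk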

So yeah, if you haven’t been able to gather already, this is pretty heavy stuff.

Some other relevant classifications include whether a risk is anthropogenic (human-caused) or natural/external (out of our control). And finally, for existential risks, there are some categories that delve deeper into the nature of humanity’s demise:

Bangs – sudden catastrophes which may be accidental or deliberate (think nuclear war)
Crunches – humanity survives but civilisation is unrecoverable (i.e. we revert to our ape selves)
Shrieks – undesirable futures where humanity becomes repressed or dominated
Whimpers – gradual decline of civilisation or our current values over time

Study limitations

The study of existential and catastrophic risks certainly has its limitations. First and foremost (as silly as it may sound), humanity has never been destroyed before, and it is in the nature of extinction events that they leave no observers. So we don’t know what it’s like! Moreover, probability calculations are largely subjective and speculative, and we can only guess at what the future will look like, given that technology and international relations change constantly. Threats posed by nature, such as the probability of an asteroid strike, are relatively constant, but forecasting timelines for other risks (and for new ones that may pop up) is a tricky business.

The difficulty of making academic predictions and assessments about these risks certainly does not negate the need for them. After all, these risks concern all of humanity. It would be silly not to invest in infrastructure that attempts to avoid the termination of the human race, right? Hence emerges the most important area of risk research – the mitigation of existential risk.

Examples of global catastrophic risks (GCRs)

But before we get into how to avoid existential risks, we have to know what risks we’re facing. Here’s a handy list of examples that I prepared earlier.

Technological risks

  • engineered pandemics
  • superintelligent AI

Risks from global governance (social and political risks)

  • nuclear winter

Earth system governance risks (FYI, these scenarios have become more likely as the human population has expanded and resource demands have grown along with it)

  • natural pandemics
  • ecosystem collapse

Non-anthropogenic risks

  • impact by a 10 km+ astronomical object (asteroid or comet)
  • volcanic super-eruption

Probability predictions

The Future of Humanity Institute at Oxford University analysed a bunch of these risks and their associated probabilities at the Global Catastrophic Risk Conference in 2008, and came out with the following estimate of the probability of human extinction before 2100: 19%.

Nineteen percent – by 2100! Obviously we have to take this figure with a grain of salt, but the fact that the possibility of humanity’s extinction exists at all, and within my lifetime, well, it scares me more than a little.

Here’s another estimate that freaks me out (found in the Global Challenges Foundation 2016 annual report):

An average American is more than five times more likely to die during a human-extinction event than in a car crash.

… 👀
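If you’re wondering how a comparison like that can fall out of seemingly tiny probabilities, here’s a rough back-of-the-envelope sketch. To be clear, the specific figures (a 0.1% annual extinction probability, an 80-year lifespan, and roughly 1-in-110 lifetime odds of dying in a car crash) are illustrative assumptions of mine, not numbers from the report.

    # Back-of-the-envelope comparison: lifetime risk of dying in a human-extinction
    # event vs. dying in a car crash. All figures are illustrative assumptions,
    # not numbers taken from the Global Challenges Foundation report.

    annual_extinction_probability = 0.001   # assumed 0.1% chance per year
    lifespan_years = 80                     # assumed average lifespan

    # Probability of at least one extinction event during a lifetime,
    # treating each year as an independent trial.
    lifetime_extinction_risk = 1 - (1 - annual_extinction_probability) ** lifespan_years

    lifetime_car_crash_risk = 1 / 110       # assumed lifetime odds of dying in a car crash

    print(f"Lifetime extinction risk: {lifetime_extinction_risk:.1%}")   # ~7.7%
    print(f"Lifetime car crash risk:  {lifetime_car_crash_risk:.1%}")    # ~0.9%
    print(f"Ratio: {lifetime_extinction_risk / lifetime_car_crash_risk:.1f}x")  # ~8.5x

Even a tiny annual probability compounds over a lifetime, which is how it can end up dwarfing far more familiar risks.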

And on that note, I’m going to finish this post off here. Part II will go into greater depth about the moral importance of study in this area, the framework by which GCRs are analysed, and some examples of GCRs according to that framework.

Thanks for stopping by!
