2. Underestimating non-obvious risks

          Global risks can be divided into two categories: obvious and non-obvious. An obvious risk would be nuclear war; a non-obvious risk would be the destruction of the Earth by the creation of a stable strangelet during a particle accelerator experiment. Non-obvious risks may be more dangerous, because their severity and probability are unknown, and suitable countermeasures are therefore generally not taken. Some non-obvious risks are known only to a narrow circle of experts who express contradictory opinions about their severity, probability, and mechanism of emergence. A detached onlooker, such as a military general or a head of state, may be completely unable to distinguish between the conflicting pieces of expert advice and might as well flip a coin to decide whom to listen to regarding non-obvious risks. This makes inadequate preparation for these risks highly likely, whereas well-understood risks such as nuclear war are better anticipated and ameliorated.

          Estimates based on past rates at which new global risks have been discovered suggest that the number of known risks grows exponentially over time. We can therefore anticipate a great increase in the number of global risks in the 21st century, many of which may be impossible for us to guess at now and which will therefore fall into the category of non-obvious risks.
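          To make the exponential claim concrete, the following is a minimal sketch of such an extrapolation. The starting count and the doubling period are illustrative assumptions introduced here, not figures taken from the text or from any survey of risks.

```python
import math

# Hypothetical illustration only: extrapolate a simple exponential model
# N(t) = N0 * exp(k * t) for the number of identified global risk categories.
# N0 and the doubling period are assumed values for the sake of the example.
N0 = 10               # assumed number of identified risk categories at t = 0 (year 2000)
doubling_years = 15   # assumed doubling period for newly identified risks
k = math.log(2) / doubling_years  # continuous growth rate implied by that doubling period

for year in (2000, 2025, 2050, 2075, 2100):
    t = year - 2000
    print(f"{year}: ~{N0 * math.exp(k * t):.0f} identified risk categories")
```

          Under these assumed parameters the count roughly doubles every fifteen years, which is the kind of growth that would leave most late-century risks in the non-obvious category today.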

          Obvious risks are much more convenient to analyze. There are huge volumes of data we can use to assess these perils. The sheer volume of this analysis can conceal the fact that other risks exist about which little is known. Their assessment may not be amenable to rigorous numerical analysis, but they are severe risks all the same (for example, the risk from an incorrectly programmed Artificial Intelligence).