Roadmap: Plan of Action to Prevent Human Extinction Risks


Let’s do an experiment in “reverse crowd-funding”: I will pay 50 USD to anyone who suggests a new way of preventing x-risks that is not already mentioned in this roadmap. Post your ideas as comments on this post.

Should more than one person have the same idea, the award will be made to the person who posted it first.

The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.

I may include you as a co-author of the roadmap (if you agree).

The roadmap is distributed under an open GNU license.

Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).

The competition is open until the end of 2015.

The roadmap can be downloaded as a pdf from:

http://immortality-roadmap.com/globriskeng.pdf

UPDATE: I have uploaded a new version of the map, with changes marked in blue.

Email: alexei.turchin@gmail.com

 

The main discussion is taking place here:

http://lesswrong.com/lw/ma8/roadmap_plan_of_action_to_prevent_human/


The map “Typology of human extinction risks” was published


In 2008 I was working on a Russian-language book, “Structure of the Global Catastrophe”. I showed it for review to the geologist Aranovich, an old friend of my late mother’s husband.

We started to discuss Stevenson’s probe, a hypothetical vehicle that could reach the Earth’s core by melting its way through the mantle, carrying scientific instruments with it. It would take the form of a large drop of molten iron of at least 60,000 tons: theoretically feasible, but practically impossible.

Milan Cirkovic wrote an article arguing against this proposal, in which he reasonably concluded that such a probe would leave a molten channel of debris behind it, and that high pressure inside the Earth could push this material upwards. The result could be a catastrophic degassing of the Earth’s core, which would act like a giant volcanic eruption, completely changing the composition of the atmosphere and killing all life on Earth.

Our friend told me that his institute had designed an upgraded version of such a probe: simpler, cheaper, and able to descend at a speed of 1000 km per month. This probe would be a special nuclear reactor that uses its own energy to melt through the mantle. (Something similar was suggested in the movie “The China Syndrome”, about a possible accident at a nuclear power station, so I don’t think that publishing this information endangers humanity.)

The details of the reactor-probe were kept secret, but there was no money available for the practical realization of the project. I suggested that it would be wise not to build such a probe: if it were created, it could become the cheapest and most effective doomsday weapon, suitable for worldwide blackmail in the style of Herman Kahn’s reasoning about doomsday machines.

The most surprising thing for me in this story was not the new way to kill mankind, but the ease with which I discovered its details. If friends from a circle unconnected with x-risk research know of a new way of destroying humanity (without fully recognizing it as such), how many more such ways must be known to scientists in other areas of expertise!

I like to create exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table, i.e. to sort them by several parameters, in order to help predict new risks.

For this map I chose two main variables: the basic mechanism of a risk and the historical epoch during which it could happen. Any such map must also be based on some model of the future; I chose Kurzweil’s model of exponential technological growth, which leads to the creation of super-technologies in the middle of the 21st century. Risks are also graded by probability: main, possible, and hypothetical. I plan to attach to each risk a wiki page with its explanation.
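
To make this classification concrete, here is a minimal sketch (in Python) of how an entry on the map could be represented as a record over the two axes plus a probability grade. All field names, epoch labels, and example values below are my hypothetical illustrations, not taken from the map itself.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical epochs along the map's historical axis, following
# Kurzweil's model of exponential growth up to mid-21st-century
# super-technologies. Names and cut-offs are illustrative only.
class Epoch(Enum):
    BEFORE_SUPERTECH = 1
    TRANSITION = 2
    AFTER_SUPERTECH = 3

# The map's three probability grades.
class Grade(Enum):
    MAIN = "main"
    POSSIBLE = "possible"
    HYPOTHETICAL = "hypothetical"

@dataclass
class Risk:
    name: str
    mechanism: str              # basic mechanism of the risk (first axis)
    epoch: Epoch                # epoch during which it could happen (second axis)
    grade: Grade                # probability grade
    wiki: Optional[str] = None  # planned explanatory wiki page

# An illustrative entry, using the reactor-probe scenario from the text.
risks = [
    Risk(
        name="Degassing of the Earth's core by a reactor-probe",
        mechanism="geoengineering accident or misuse",
        epoch=Epoch.BEFORE_SUPERTECH,
        grade=Grade.HYPOTHETICAL,
    ),
]

# Sorting by the two axes groups related risks together, so empty cells
# of the "periodic table" show up as gaps where new risks might be
# predicted.
risks.sort(key=lambda r: (r.epoch.value, r.mechanism))
for r in risks:
    print(f"{r.epoch.name:18} {r.mechanism:36} {r.grade.value:12} {r.name}")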

I would like to know which risks are missing from this map. If your ideas are too dangerous to publish openly, PM me. If you think that any mention of your idea would raise the chances of human extinction, just mention its existence without the details.

I think that a map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible methods of preventing x-risks, and that really helped me to improve it. But I do not offer prizes for improving this map, as that might encourage people to be too creative in thinking up new risks.

The pdf is here: http://immortality-roadmap.com/typriskeng.pdf


The Roadmap of the Roadmaps


This site is dedicated to the various roadmaps that I have created. They cover most topics of transhumanist thinking; above all, they show ways to immortality at both the personal and civilizational levels.
First, I would like to present a series of maps dedicated to the prevention of global risks:
– “Typology of global risks”
– “Plans to prevent global risks”
– “Possible methods of failure of AI”
– “Ways to create a secure AI”
The purpose of this work:
– To create the most comprehensive list of risks,
– To collect the best ideas for solutions to prevent these risks,
– To arrange them in the simplest and most logical way,
– To give them a probabilistic assessment.
1. The map “Typology of Human Extinction Risks” classifies risks by their time of occurrence and their main operational factors. It includes about 100 different risks and provides estimates of their likelihood.
2. The map “Plan of Action to Prevent Human Extinction Risks” consists of plans A, B, and C.
– Plan A consists of five parallel options:
creation of a global system of control, decentralized monitoring, creation of Friendly AI, improvement of indestructibility, and resettlement in space.
– Plan B consists of the construction of shelters in order to survive a catastrophe.
– Plan C consists of preserving traces of information for future civilizations.
– Also listed are hypothetical plans and dangerous plans that are not worth implementing.
3. The map “AGI Failure Modes and Levels” describes the ways AI could lead to disaster at different stages of its development:
– From combat drones,
– To the moment it starts to self-improve,
– To seizure of power over the world,
– To failure of the system’s friendliness,
– And up to philosophical and technological problems in the later stages of its development, which may broadly be called the “AI halting” problem.
4. The map “Ways to Create Safe AI” is based on a joint paper by Kaj Sotala and Stuart Armstrong, “Responses to Catastrophic AGI Risk: A Survey”, supplemented by the ideas of Ben Goertzel, Paul Christiano, and others, as well as a number of my own proposals.
The principles guiding these maps’ construction are similar to the method of Descartes, which consists of four items:
– Start with the obvious,
– Break down a complex problem into simpler parts,
– Arrange parts from simple to complex,
– Make very exhaustive lists.
To illustrate the power of his method, Descartes created three sciences, including analytic geometry (and in particular, the remarkable Cartesian coordinate system).
In addition, the visual representation of information in the form of maps greatly enhances the human mind by engaging the brain’s “graphics processor”. I am currently working on a map showing ways of improving intelligence, and I hope to clarify these principles further there.
I plan to expand each item into explanatory text accompanying the map. In part, this has already been done in the book “The Structure of Global Catastrophe” (published in Russian; the English draft was co-authored with Michael Anissimov) and in the book “Immortality” (draft in Russian).
All existing and in-progress maps are listed in the pdf document “The Roadmap of the Roadmaps”. Completed maps are available for download via the links provided in this document.
All maps will also be published in the Facebook group “Immortality Roadmaps”: https://www.facebook.com/groups/816186605163066/
In parallel, I have also created:
– A map showing ways to personal immortality,
– A map showing ways to life extension (available now),
– A series of maps dedicated to particular problems:
– Doomsday Argument,
– Simulation Argument,
– How to Survive the End of the Universe.
In addition, in conjunction with the Exponential Technologies Institute, we are developing a series of dynamic maps that will be open to collective editing.
I want to gather feedback that will help me improve the maps. I am primarily interested in factual errors and important omitted ideas. Mail to: alexei.turchin@gmail.com
The maps are distributed under an open license: you may freely copy and modify them, provided you maintain attribution of the people involved in producing each map, but you may not create commercial or proprietary products based on them. (C) Alexey Turchin, 2015
