The newest publications and drafts are on PhilPapers.
Scientific publications in English:
- Turchin, Alexey, and Brian Patrick Green. Aquatic refuges for surviving a global catastrophe. Futures 89 (2017): 26-37. https://www.sciencedirect.com/science/article/pii/S0016328716303494
- Turchin, Alexey, and David Denkenberger. “Global catastrophic and existential risk communication scale.” Futures (2018). https://www.sciencedirect.com/science/article/pii/S001632871730112X
- Batin, M., Turchin, A., Markov, S., Zhila, A., & Denkenberger, D. (2017). Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence. Informatica, 41(4). http://www.informatica.si/index.php/informatica/article/view/1797
- Turchin, Alexey, and David Denkenberger. Military AI as a convergent goal of the self-improving AI. In: Artificial Intelligence Safety and Security, CRC Press, 2018. https://philpapers.org/rec/TURMAA-6
- Turchin, Alexey, and David Denkenberger. “Making a Back Up on the Moon: Surviving Global Risks Through Preservation of Data About Humanity for the Next Earth Civilization.” Acta Astronautica, Vol. 160, May 2018, pp. 161-170. https://www.sciencedirect.com/science/article/pii/S009457651830119X
- Turchin, Alexey, and David Denkenberger. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. AI & Society, 2018. https://link.springer.com/article/10.1007/s00146-018-0845-5
- Turchin, Alexey, and Brian Green. Islands as refuges for surviving global catastrophes. Accepted in Foresight, 2018.
- Turchin, Alexey. Principles of classification of global risks prevention plans. Accepted in Human Prospect.
- Turchin, Alexey. Risks of downloading alien AI via SETI. Journal of the British Interplanetary Society 71 (2): 71-79, 2018.
- Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy. Long-Term Trajectories of Human Civilization. Forthcoming in Foresight, DOI 10.1108/FS-04-2018-0037
- Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
- Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology
- Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence
- Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons
- Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions
Important blog posts:
Scientific articles in Russian:
- Turchin, A.V. On the possible causes of the underestimation of risks of the destruction of human civilization // Problems of Risk Management and Safety: Proceedings of the Institute of Systems Analysis, Russian Academy of Sciences. Vol. 31. Moscow: KomKniga, 2007.
- Turchin, A.V. Natural disasters and the anthropic principle // Problems of Risk Management and Safety: Proceedings of the Institute of Systems Analysis, Russian Academy of Sciences. Vol. 31. Moscow: KomKniga, 2007, pp. 306-332.
- Turchin, A.V. The problem of sustainable development and the prospects for global catastrophes // Social Studies and the Present. 2010. No. 1, pp. 156-163.
Books in Russian:
- War and 25 Other Scenarios of the End of the World. Moscow: Europe, 2008.
- Structure of the Global Catastrophe. Moscow: URSS, 2010.
- Futurology (with Michael Batin). Moscow: Binom, 2012.