EA - Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research by David Kristoffersson

The Nonlinear Library

Mar 8 2024 • 7 mins

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research, published by David Kristoffersson on March 8, 2024 on The Effective Altruism Forum.

Cross-posted on LessWrong.

Executive Summary

We're excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we've brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics. Together, we're launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness.

Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In more detail, they do the following:

Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take.

Governance Recommendations Research: Provide concrete, detailed analyses of specific AI governance proposals that lack comprehensive research.

AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more.

In the next three months, you can expect to see the following outputs:

Convergence's Theory of Change: A report detailing an outcome-based, high-level strategic plan for mitigating existential risk from TAI.

Research Agendas for our Scenario Research and Governance Recommendations initiatives.
2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024.

Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry.

A series of articles on AI scenarios highlighting results from our ongoing research.

All Thinks Considered: A podcast series exploring critical thinking, fostering open dialogue, and interviewing AI thought leaders.

Learn more on our new website.

History

Convergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily to build a body of foundational research on reducing existential risk, publishing some findings on the EA Forum and LessWrong, and advising individuals and groups such as Lionheart Ventures.

From 2021 to 2023, we laid the foundation for a research institution and built a larger team. We are now launching Convergence as a strong team of 10 researchers and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take.

Programs

Scenario Research

There are large uncertainties about the future of AI and its impacts on society. Potential scenarios range from flourishing post-work futures to existential catastrophes such as the total collapse of societal structures. Currently, there is a serious dearth of research aimed at understanding these scenarios - their likelihood, causes, and societal outcomes. Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the range of possible outcomes in domains defined by uncertainty.
Such research typically defines specific parameters that are likely to cause certain scenarios, and id...