**What part of the alignment problem does this plan aim to solve?**

Outer alignment, and reaching a safe transformative AI (TAI) without killing everyone.

**Why has that part of the alignment problem been chosen?**

In our opinion, this is the top-level AI safety problem that ought to be solved anyway to get past the "acute risk period".

**How does this plan aim to solve the problem?**

We extend Davidad's Open Agency Architecture (OAA) plan to base it on existing technologies and to build on proven incentive systems and governance institutions that can bring together the vast masses of knowledge and adoption muscle required, and tie them together in a stable and resilient way. Much more detail: https://www.lesswrong.com/collaborateOnPost?postId=AKBkDNeFLZxaMqjQG&key=94b9202285860fd2023c0cc87a740f

**What evidence is there that the methods will work?**

The Gaia Network design has been in the works since 2018; it follows directly from first principles of cybernetics and economics; and it has been developed in collaboration with leading experts in collective intelligence and active inference. More importantly, it is a practical design that leverages a proven software stack and proven economic mechanisms to solve the real-world problems of accelerating scientific sensemaking and connecting it to better business and policy decisions. We know it works because we've built an early version ("Fangorn") expressly to solve these problems, and we've learned from what worked and what didn't.

**What are the most likely causes of this not working?**

- Timeline risk: we will not be able to scale the Gaia Network sufficiently before rogue AIs start to pose significant civilisational risks.
- Human disempowerment risk: it is not yet fully proven that we can prevent the "slow disempowerment of humanity" if we deploy the Gaia Network at scale, though we believe the Gaia Network significantly reduces this risk relative to the current economic and political status quo. Notably, we leverage Davidad's reasoning for addressing the inner misalignment and power-seeking risks (https://www.lesswrong.com/posts/mnoc3cKY3gXMrTybs/a-list-of-core-ai-safety-problems-and-how-i-hope-to-solve), so we don't consider them significant risks.