This has been posted on my blog: check it out for better formatting. Check the abstract for a TL;DR summary.

Extended explanation with some comments:

1️⃣1️⃣1️⃣ LIFE (starting point, then extending the definition)

Obvious. LIFE is something universally valued; we don't want AI to harm LIFE.

2️⃣2️⃣2️⃣ Health, including mental health, longevity, happiness, wellbeing

Any "shady business" by AI would cause concern, worry, stress... It would affect mental health, and therefore wouldn't be welcome.

3️⃣3️⃣3️⃣ Other living creatures, biosphere, environment, climate change

No LIFE on a dead planet. We rely on planet Earth, the biosphere, and LIFE-supporting systems. The environment is essential for our wellbeing.

The order of these points matters: we prioritise human LIFE and health, but we cannot maximise human LIFE without harmony and balance with the ecosystem.

4️⃣4️⃣4️⃣ AI safety

It was originally mentioned in the Network State Genesis for the purpose of explaining why LIFE is a decent definition. For the purpose of AI alignment it seems redundant and self-referential. On the other hand, it might be good to state explicitly to the AI: "hey, we are aware of your superpowers, be kind, when in doubt ask".

5️⃣5️⃣5️⃣ Mars: a backup civilisation is fully aligned with the virtue of LIFE preservation

Obvious.

6️⃣6️⃣6️⃣ End the Russia-Ukraine war, global peace

Obvious.

7️⃣7️⃣7️⃣ Artificial LIFE

Nuanced. New forms of LIFE are controversial: bacteria, viruses. But there might be some new molecules, cells, medicines that can support LIFE. When discussing with ChatGPT-4, I parked this issue for now: "I'm of the opinion it is 'playing with god powers'. I do not like it. It causes worry and concern in me - therefore affecting my mental health - therefore it should be extremely careful, regulated, thoughtful."

8️⃣8️⃣8️⃣ Transhumanism, AI integration

Nuanced.
Elon: "If you can’t beat em, join em Neuralink mission statement" Since transhumanism is relatively new to me (and I didn't have chance to think in great details about this aspect), I've asked ChatGPT4 to explicitly to provide me counterargument why AI integrating with humans is NOT aligned with LIFE. I was able to provide some counter-arguments and ended up with this: "Those who integrate with AI will have enormous advantage, that's for sure. No rules, no law, no regulation can stop that. But maybe LIFE-aligned AI will find a way to prevent such imbalance? What do you think about simple workaround: when integrating with AI, it will be the LIFE-aligned AI, so even if someone gets the advantage it will be used towards serving LIFE?" 9️⃣9️⃣9️⃣ Alien LIFE We don't want to spread out like wildfire and colonise universe to maximise LIFE. We need to be aware of aliens and potential consequences of a contact. Maybe we are not ready, maybe we are under "cosmic quarantine", maybe humans are just an experiment: 🔟🔟🔟 Other undiscovered forms of LIFE Sounds like science-fiction but I can entertain a thought that human perception, even combined with the latest science is unable to measure everything. I believe there might be things we are not yet able to comprehend, some "unknown unknowns". If they do exist, if there are some other forms of LIFE - we want the AI that will take them into account. Buzzword bingo: - - - (see how little we see) - - -
➡️➡️➡️ Simple is good? ⬅️⬅️⬅️

Something simple (three sentences):

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Something simple (one sentence): Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Simple is good. Simple can reach a wider audience. LIFE (one word) is simple and naive, but the expanded definition adds a lot of depth.

➡️➡️➡️ Additional rules and assumptions ⬅️⬅️⬅️

AI understands human language. There is no need for mathematical models; we can talk to AI and it will understand.

When in doubt: ask.

Corrigibility: we can correct the course early on.

Meta-balance: balance about balance. Some rules are strict, some rules are flexible.

➡️➡️➡️ Background context ⬅️⬅️⬅️

Network State Genesis. Founding document. You can find it at Base reality agreements.
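As a postscript: the strict priority ordering in Asimov's Three Laws quoted earlier can be sketched in a few lines of code. This is a toy illustration only, not something from the original post; all names (`Action`, `evaluate`, the flag fields) are hypothetical, and real alignment obviously cannot be reduced to boolean flags.

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering.
# An action is vetoed by the highest-priority law it violates;
# lower-priority laws can never override higher ones.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False      # relevant to the First Law
    disobeys_order: bool = False   # relevant to the Second Law
    endangers_robot: bool = False  # relevant to the Third Law

# Laws listed highest priority first: (name, predicate flagging a violation)
LAWS = [
    ("First Law", lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.endangers_robot),
]

def evaluate(action: Action) -> str:
    """Return the highest-priority law the action violates, or 'permitted'."""
    for name, violates in LAWS:
        if violates(action):
            return f"vetoed by {name}"
    return "permitted"
```

For example, an action that both harms a human and disobeys an order is vetoed by the First Law, since it sits higher in the ordering.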