This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.
Robin’s P(doom) is under 1%, while mine is 50%. How do we reconcile that gap?
I’ve researched past debates, blog posts, tweets, and scholarly discussions of AI doom, and I plan to focus our debate on the cruxes of disagreement between Robin’s position and my own Eliezer Yudkowsky-like position.
Key topics include the probability of humanity’s extinction due to uncontrollable AGI, alignment strategies, AI capabilities and timelines, the impact of AI advancements, and various predictions made by Hanson.
Jul 21, 2024