

Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best-selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

Topics discussed in this episode include:

- The historical and intellectual foundations of AI.
- How AI systems achieve or do not achieve intelligence in the same way as the human mind.
- The benefits and risks of AI in both the short and long term.
- Whether superintelligent AI will pose an existential risk to humanity.

You can take a survey about the podcast here

Submit a nominee for the Future of Life Award here

Stuart Russell's new book, Human Compatible: Artificial Intelligence and the Problem of Control

Timestamps:

4:30 The historical and intellectual foundations of AI
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart

Note: The following transcript has been edited for style and clarity. You can find all the AI Alignment Podcasts here. We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application.

Lucas Perry: Welcome to the AI Alignment Podcast. Today, we have a conversation with Steven Pinker and Stuart Russell. This episode explores the historical and intellectual foundations of AI, how AI systems achieve or do not achieve intelligence in the same way as the human mind, the benefits and risks of AI over the short and long term, and finally whether superintelligent AI poses an existential risk to humanity. Our last episode was with Sam Harris on global priorities. If that sounds interesting to you, you can find that conversation wherever you might be following us. If you're not currently following this podcast series, you can join us by subscribing on Apple Podcasts, Spotify, SoundCloud, or on whatever your favorite podcasting app is by searching for "Future of Life."
