Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
– Yahoo Finance: https://yahoofinance.com
– MasterClass: https://masterclass.com/lexpod to get 15% off
– NetSuite: http://netsuite.com/lex to get a free product tour
– LMNT: https://drinkLMNT.com/lex to get a free sample pack
– Eight Sleep: https://eightsleep.com/lex to get $350 off
TRANSCRIPT:
https://lexfridman.com/roman-yampolskiy-transcript
EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
OUTLINE:
0:00 – Introduction
2:20 – Existential risk of AGI
8:32 – Ikigai risk
16:44 – Suffering risk
20:19 – Timeline to AGI
24:51 – AGI Turing test
30:14 – Yann LeCun and open source AI
43:06 – AI control
45:33 – Social engineering
48:06 – Fearmongering
57:57 – AI deception
1:04:30 – Verification
1:11:29 – Self-improving AI
1:23:42 – Pausing AI development
1:29:59 – AI safety
1:39:43 – Current AI
1:45:05 – Simulation
1:52:24 – Aliens
1:53:57 – Human mind
2:00:17 – Neuralink
2:09:23 – Hope for the future
2:13:18 – Meaning of life
SOCIAL:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Reddit: https://reddit.com/r/lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman