Ilya Sutskever is a co-founder of OpenAI and one of the most cited computer scientists in history, with over 165,000 citations. To me, he is one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Ilya's Twitter: https://twitter.com/ilyasut
Ilya's Website: https://www.cs.toronto.edu/~ilya/
This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players, you should be able to click a timestamp to jump to that point in the conversation.
OUTLINE:
00:00 - Introduction
02:23 - AlexNet paper and the ImageNet moment
08:33 - Cost functions
13:39 - Recurrent neural networks
16:19 - Key ideas that led to success of deep learning
19:57 - What's harder to solve: language or vision?
29:35 - We're massively underestimating deep learning
36:04 - Deep double descent
41:20 - Backpropagation
42:42 - Can neural networks be made to reason?
50:35 - Long-term memory
56:37 - Language models
1:00:35 - GPT-2
1:07:14 - Active learning
1:08:52 - Staged release of AI systems
1:13:41 - How to build AGI?
1:25:00 - Question to AGI
1:32:07 - Meaning of life