Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, most recently in her book Artificial Intelligence: A Guide for Thinking Humans.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Episode Links:
AI: A Guide for Thinking Humans (book)
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
02:33 - The term "artificial intelligence"
06:30 - Line between weak and strong AI
12:46 - Why have people dreamed of creating AI?
15:24 - Complex systems and intelligence
18:38 - Why are we bad at predicting the future with regard to AI?
22:05 - Are fundamental breakthroughs in AI needed?
25:13 - Different AI communities
31:28 - Copycat cognitive architecture
36:51 - Concepts and analogies
55:33 - Deep learning and the formation of concepts
1:09:07 - Autonomous vehicles
1:20:21 - Embodied AI and emotion
1:25:01 - Fear of superintelligent AI
1:36:14 - Good test for intelligence
1:38:09 - What is complexity?
1:43:09 - Santa Fe Institute
1:47:34 - Douglas Hofstadter
1:49:42 - Proudest moment