The peril (and promise) of AI with Tristan Harris: Part 1
This podcast episode, featuring Tristan Harris, delves into the complexities and rapid advancements of artificial intelligence (AI) and its potential impacts on society. Tristan Harris, co-founder of the Center for Humane Technology, shares insights on the evolution of AI, the ethical considerations surrounding its development, and the need for a balanced approach to harness its benefits while mitigating risks. The conversation explores the history of AI development, the transformative power of large language models, and the societal challenges posed by the attention economy. Harris emphasizes the importance of aligning AI development with human values and the necessity of creating safeguards to ensure a future where technology serves humanity's best interests.
Wondery Plus subscribers can listen to How I Built This early and ad-free right now.
Join Wondery Plus in the Wondery app or on Apple Podcasts.
Our friends at ZipRecruiter recently conducted a survey and found that the top hiring challenge employers face in 2024 is a lack of qualified candidates.
But if you're an employer and you need to hire, here's the good news.
ZipRecruiter has smart tools and features that help you find more qualified candidates fast.
And right now, you can try it for free at ZipRecruiter.com slash built.
ZipRecruiter shows you candidates whose skills and experience match your needs.
Then you can send your top candidates a personalized invite encouraging them to apply.
Let ZipRecruiter help you conquer the biggest hiring challenge, finding qualified candidates.
See why four out of five employers who post on ZipRecruiter get a quality candidate within the first day.
Just go to this exclusive web address right now to try ZipRecruiter for free.
ZipRecruiter.com slash built.
Again, that's ZipRecruiter.com slash B-U-I-L-T.
ZipRecruiter, the smartest way to hire.
Get closer to the best you.
Audible lets you enjoy all your entertainment in one app.
You'll always find the best of what you love.
Or something new to discover.
I personally enjoy learning as much as I can about the health and wellness space.
And as an Audible member, you can choose one title a month from the entire catalog to keep.
I recently listened to Outlive by Peter Attia, and it's changed the way I think about how I eat and exercise.
New members can try Audible free for 30 days.
Visit Audible.com slash built or text built to 500500.
That's Audible.com slash built, or text built to 500500 to try Audible free for 30 days.
Audible.com slash built.
There is no shortage of business and leadership podcasts, but few explore solutions to solving the world's most complex issues,
which is why I recommend If Then, the new podcast from our friends at Stanford Graduate School of Business.
If Then features in-depth conversations with Stanford GSB faculty professors about their cutting-edge research around topics like AI,
sustainability, and power.
All framed around an If Then statement.
Like their recent episode, if we want to get people back to the office, then we need to find the right reasons to do it.
As you can imagine, this leads to incisive and sometimes surprising takeaways.
So don't wait.
Follow If Then wherever you get podcasts and tell them I sent you.
Hello and welcome to How I Built This Lab.
I'm Guy Raz.
Okay, artificial intelligence is going to change everything.
Not just...
about business or entrepreneurship, but everything.
Or at least we think so.
I do.
And a lot of it excites me.
Breakthroughs in curing diseases, or maybe even solving climate change.
But a lot of it really freaks me out, too.
Like how it might become impossible to know what's true and what's not.
What's real and what's fake.
And because trust is the invisible force that allows our societies to work,
AI could completely undermine all of it.
Now, we've had a lot of conversations about AI on the show in the past year,
and we're going to have many, many more.
And this week and next, we have a really important guest.
And I really hope you all spend some time really listening to these episodes.
Many years before anyone noticed, Tristan Harris warned about the perils of social media.
And by and large, he was proved right.
He even helped make a film about it.
You might have seen it.
It was called The Social Dilemma.
Tristan is one of the founders of the Center for Humane Technology.
And his roots in the tech world run deep.
And today, he's sounding the alarm about AI.
Now, to be clear, he's not anti-AI.
But Tristan is really worried about the pace of its growth.
Because in a very short period of time, he argues we're going to lose control over it.
Today, we're going to run part one of my conversation with Tristan.
And next week, we'll have part two.
Anyway, Tristan grew up in the Bay Area.
And he was a technology kid through and through.
He even wrote fan mail to Steve Wozniak, one of the co-founders of Apple.
And actually, one of Tristan's co-founders at the Center for Humane Technology
is deeply connected to that early obsession with Apple.
And I'm honored today to say that my co-founder, Aza Raskin,
his father was Jef Raskin, who started the Macintosh project at Apple,
along with Steve Jobs, who later took over the project.
And I think there's an ethos that both he and I come from,
of the original days of Silicon Valley, that is aspirational,
in which computers can be a bicycle for the mind, a bicycle for human creativity,
extending human expression.
I think sometimes we can get classed as doomers.
But the risk side of AI, we only do that because we care about
the vision where technology actually is in service of humanity.
And to do that, we just think we have to fix the incentives.
Yeah.
You eventually started your own company.
This is in the 2000s, the mid-2000s,
called Apture.
And this was about hyperlinks.
You would hover over a hyperlink and you would get other information, right?
More or less.
Can you describe what that company did?
Yeah.
I started Apture in 2007.
And it was going back to the original ethos of what the internet
and hyperlinking was supposed to be about.
You could say, when people click on this button, or click on this image,
or click on this text, I want the computer to speak this out loud,
or to issue a dialogue prompt, or to play this song,
or to play the piano.
And it was the first really creative idea of this interlinked,
what if everything was connected to everything else,
in the most visually expressive, multimedia, educational, inspiring, beautiful way.
So then I learned a lesson, though.
Because while I was interested in kind of reinvigorating this original,
inspiring ethos of multimedia, teaching, and education into the internet,
the reality was that Apture was a for-profit company.
We raised venture capital,
we had investors, and our progress, our success,
was measured not in the number of children who were inspired by cool media
that they got to learn and watch and click on,
but in the form of engagement.
You know, we did deals with the Washington Post, the New York Times, the Economist,
and they would ask us one question,
which was, how much did you increase the amount of time people spent on my website?
AKA, how many ad views and impressions did you increase?
And that's when, you know, I really saw, I think,
the fallacy that leads and guides both the social media work
that we became known for with the social dilemma and our work with AI,
which is there's all these positive stories we want to tell ourselves about what we're building.
But the question that determines which future we'll end up in is the incentives.
And Charlie Munger, Warren Buffett's business partner, just passed away.
He had a very famous quote we referenced in our work that he said,
if you show me the incentive, I'll show you the outcome.
Yeah.
When people ask themselves the question, like, which way is AI going to go?
Is it going to be the promise?
Or is it going to be the peril?
How are we going to know?
Well, the point is that the way that we'll know which future we get
is by where the incentives, where the profit motive,
where the, you know, status and reputation is all conferred to.
And that's what we have to change.
We're going to talk a lot about AI in this conversation,
but I want to go back to around 2011, when Apture was acquired by Google
and you started work at Google.
And it was, I think, around this time that you really started
to focus on this idea of attention and to become more troubled by it,
something that is now just, you know, an article of faith.
We now know that the attention economy is a term,
and we now know that tech companies make money from capturing our attention.
You know, media does, YouTube does, this show does, right?
But you were thinking about this a long time before others were as a potential problem.
How did those ideas begin to kind of percolate in your mind?
Well, again, starting with my experience at Apture,
I realized that,
you know, for our company to be successful,
we had to increase the total amount of time people spent
on the Washington Post and New York Times,
and that there obviously isn't an infinite amount of human attention out there.
It's almost like planetary boundaries, right?
Can you run infinite economic growth on a finite planet forever?
Well, so long as economic growth has environmental externalities
or depletes finite resources, you can't do it forever,
unless you keep making scientific breakthroughs on every dimension.
So, you know, I think similar to that,
you can't run infinite growth of every company demanding
more attention per quarter, per year, forever, with a finite supply of human attention.
And at the time in 2007, it was actually the year the iPhone came out.
We started our company before the iPhone came out.
But obviously what mobile phones did is they opened up the attention economy.
Used to be people only spend a few hours a day on computers,
and then they would get offline and go outside, go to a movie, do something else.
And increasingly, every moment of our lives became part of this attention economy,
sold as a commodity.
Like the time you sit on the toilet when you go to the bathroom.
Boom, now there's 10 extra minutes in the attention economy,
because the smartphone opened up that new piece of real estate.
So I was just seeing the fact that this is where this is all going.
There's only so much, and it's going to get more competitive.
People are going to find more creative ways of creating stickier, more persuasive,
more socially validating, more outrage-driven, more addictive, more distracting,
more polarizing, more narcissistic experiences, these kinds of things.
Yeah.
And it was pretty daunting to see that in 2013,
when I was at Google and made this presentation.
Because you're starting to see like, if this keeps going,
I can tell what kind of society this is going to create.
And that's not a society that I think we can afford to be in.
We need to turn the tide.
This presentation that you're referencing was called
"A Call to Minimize Distraction and Respect Users' Attention."
You were an employee at Google, and you decided to put together
a 144-page Google Slides presentation about what was going on.
And I just want to read a line in there.
The line is, never before in history have the decisions of a handful of designers
working at three companies, Google, Apple and Facebook, had so much impact on how millions
of people around the world spend their attention.
We should feel an enormous responsibility to get this right.
That was written in 2013.
You wrote it internally in Google, and it kind of went internally viral.
How did the leadership of Google respond?
I mean, did they, you know, sort of pay you lip service and say, yeah, you're
saying all the right things, or were people annoyed? What did they say?
I remember when I sent the presentation to just 10 friends at Google just to say, hey,
can I get your feedback on this?
And I came to work the next day, and there was something like 40 simultaneous viewers.
And I knew that was impossible because I'd only sent it to 10 people.
They were sending it around.
And then later, I looked that day, and there was 150 simultaneous viewers, and then thousands
by the end of the day.
And so I really realized I got to clean this up and finish it basically in less than a
couple hours.
And it was this real moment in time.
I remember hearing that.
Yeah.
I think in the first 24 hours that Larry Page, the CEO of Google, had had several meetings
that day in which people brought up this presentation to him.
And I got emails from around the company, mostly from other employees who just said,
I completely agree.
This is happening to my children.
This is affecting my relationships.
This is this thing that you're naming a thing that I felt, but I didn't know how to put
into words.
And, you know, there was even an executive at Google who decided to host me, basically
offering me a chance to work on this topic.
And he sent me a quote by Neil Postman, that we were all worried about this dystopian future
of 1984, the George Orwell vision of dystopia, where technology limits our access to information,
that there's surveillance, there's control, we ban books.
But he said alongside that vision, there's this other dystopian vision with technology,
which is the Aldous Huxley vision of Brave New World, in which control is not established
by banning books, but by creating a world of so much amusement that we end up
amusing ourselves to death.
[unintelligible]
...that have made this problem worse,
and then combine that with technology and AI
and we're heading into a bigger version of that.
So I'm not trying to paint a dark picture here.
It's just important that we get a clear-eyed view
of what world we're pulling towards
so that we can say,
if that's not the future that we want,
how do we have to change the incentives
to arrive in that future
that we all want for our children?
We're going to take a quick break,
but when we come back,
more from Tristan on how social media
and now AI development are happening too fast
for humans to fully process.
Stay with us.
I'm Guy Raz,
and you're listening to How I Built This Lab.
As a business-to-business marketer,
your needs are unique.
B2B buying cycles are long
and your customers face incredibly complex decisions.
Isn't it time you had a marketing platform
built specifically for you?
LinkedIn Ads empowers marketers
with solutions for you and your customers.
LinkedIn Ads allow you to build
the right relationships,
drive results,
and reach your customers
in a respectful environment.
You'll have direct access to
and build relationships with decision makers,
a billion members,
180 million senior-level executives,
and 10 million C-level executives.
79% of B2B content marketers said
LinkedIn produces the best results for paid media.
And so many of the brands on How I Built This
use LinkedIn to reach customers
every single day.
Make B2B marketing everything it can be
and get a $100 credit on your next campaign.
Go to linkedin.com slash built this
to claim your credit.
That's linkedin.com slash built this.
Terms and conditions apply.
Picture that thing you've always wanted to learn.
Now picture learning it from the person
who's literally the best at it in the world.
That's what you get with Masterclass.
Don't just talk about improving.
Masterclass helps you actually do it.
Masterclass offers over 180 world-class instructors.
So whether you want to master negotiation with Chris Voss,
think like a boss with Martha Stewart,
or learn about the power of resilience
with filmmaker Ava DuVernay,
Masterclass has you covered.
There are over 200 classes to pick from
with new classes added every month.
And one of my favorites is with Sarah Blakely,
who we've had on our show.
She teaches self-made entrepreneurship
and how to bootstrap a great idea.
Every new membership comes with a 30-day money-back
guarantee, so there's no risk.
And right now, our listeners will get an additional
15% off an annual membership
at masterclass.com slash built.
Get 15% off right now
at masterclass.com slash built.
masterclass.com slash built.
Welcome back to How I Built This Lab.
I'm Guy Raz, and my guest is Tristan Harris,
co-founder of the
Center for Humane Technology.
Now, back in 2013, social media
was really starting to explode.
And Tristan was at Google,
and he was sounding the alarm about the dangers
of the attention economy.
And he wasn't the only one concerned
about the power of this new technology.
And I remember reading at the time,
and I don't know if this was apocryphal or not,
but reading that Steve Jobs,
who had died two years earlier,
wouldn't allow iPads in his own home.
He did not allow his children
to have these products that he himself
helped to create.
And it really struck me as
not so much hypocritical,
but that there was something
that people understood,
fundamentally understood about the dangers
of what they were unleashing.
And did you feel, at least at Google,
did you feel like, okay,
you know, they're taking this seriously.
They made a position for you
called design ethicist.
Did you feel like, you know,
maybe I can really make a dent here at Google.
Maybe I can have an impact.
Well, I certainly told myself
that story when I first got started.
And I really want to make sure
that I'm honoring that.
I thought it was generous for Google
to support the fact that I was doing that work.
Right.
And I want people to hear that
so you don't hear me as some kind of reactionary,
you know, the evils of big tech
and all the people at the top are evil.
At a human level,
when you talk to the human beings
who worked in these products,
I had a lot of people who were very sympathetic
with these concerns.
But when it came down to trying to change Android,
I met with the Android team and said,
what if we could change
how the operating system works
and better protect people's attention?
Then I met with the Chrome design team.
What if we could change Chrome
to make it better at protecting people's attention?
At the end of the day,
Android and Chrome, you know,
and Google benefit
the more overall screen time there is versus not.
And so I knew that my success
within the company was going to be limited.
And I had to eventually take the concerns
outside, into the public world,
to try to say,
how do we create public pressure
that will start to change those incentives?
And I had no idea in 2015 or so
when I left Google,
how I was going to do that.
But, you know, here we are today.
And there's some amazing things
that have shifted
in the public consciousness around this.
And a lot more needs to happen.
The first nonprofit that you founded
was called Time Well Spent.
And that was around 2014.
And at that time,
and I mean, up until, you know, recently,
understandably, your focus,
I think increasingly was on social media
as it began to capture
more and more of our time and attention.
And you start to see this happen
at lightning speed around this time.
Yeah, I mean, so Time Well Spent
came from this phrase, time spent.
Most tech products and advertising
are maximizing time spent.
And we said, but is it maximizing time well spent?
Meaning at an existentialist level
as a human being,
what are the choices that you endorse
as time well spent?
And imagine a world where technology is competing
not to maximize how much time it takes from you,
but to maximize its contributions,
its net positive contributions
to a life that you would endorse
on your deathbed.
Meaning enabling you
and helping you make the choices
that are the lasting, meaningful,
fulfilling choices,
not the full but empty choices.
But certainly inside of the time well spent world
and the attention economy,
the biggest forces that we were dealing with
at that time were the growing revenues
of social media companies
and the growing portion of social media
in that problem.
You know, it's,
and I know you've made this analogy
or versions of it,
but you think about something like
the Gutenberg press, right?
And, you know, that happens
in the 15th century in Europe.
And all of a sudden,
ordinary people can get access to the Bible
and they can start to learn how to read it.
And it doesn't have to be filtered
through the priests.
And so that creates,
it launches the Reformation.
You know, of course,
a huge innovation,
it changed the world,
but it was very disruptive.
At the same time,
this happened over a long period of time, right?
And so you can basically trace that moment
all the way to the end of World War II,
which was kind of the end of the European era of wars.
But I'm talking about 500 years.
What happened with social media was like five years.
And it was like the Gutenberg press on steroids
and amphetamines and, you know,
everything in between.
It's almost inconceivable how our brains were,
were going to adapt to this massive,
rapid proliferation of information.
Yeah.
In our work, we often reference a quote from E.O. Wilson,
the father of the field of sociobiology,
that the fundamental problem
that humanity faces is that we have paleolithic brains,
medieval institutions, and God-like technology,
which is to say both in the power and the speed,
our brains and our emotions are not built
to see exponential curves, right?
There was nothing on the savannah, when you saw a giraffe
or a tiger you had to run from, you know,
that required your brain to understand
an exponential curve.
It took, you know,
Isaac Newton and calculus and sort of teaching ourselves
about math to start to understand what exponential
curves are.
We get more practice with that with finance,
savings accounts, things like that,
but our brains are not built for that.
So suddenly you have something like, you know,
social media come along where you totally have an exponential curve
of how many users, how much content, et cetera,
is flowing in the system.
Then you get AI and you get a double exponential.
You get an exponential that accelerates its own exponential,
meaning AI accelerates AI development.
And so we now have something that is moving so fast that,
you know, it's like our brains
are completely not comprehending the moment that we're in.
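To make that math concrete, here is a minimal sketch of the difference between the two curves Tristan is contrasting; the growth rates and step counts below are illustrative assumptions, not figures from the conversation.

```python
# Minimal sketch: ordinary exponential growth vs. a self-accelerating
# ("double") exponential, where the growth rate itself keeps growing.
# All numbers are illustrative assumptions, not empirical claims about AI.

def exponential(x0=1.0, r=0.3, steps=12):
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x *= 1 + r            # capability grows at a fixed rate r
    return xs

def double_exponential(x0=1.0, r0=0.3, g=0.3, steps=12):
    xs, x, r = [], x0, r0
    for _ in range(steps):
        xs.append(x)
        x *= 1 + r            # capability grows at the current rate...
        r *= 1 + g            # ...and the rate itself grows (AI speeding up AI)
    return xs

for t, (a, b) in enumerate(zip(exponential(), double_exponential())):
    print(f"step {t:2d}  exponential: {a:10.1f}   self-accelerating: {b:12.1f}")
```

After a dozen steps the second curve has left the first far behind, which is the intuition behind the claim that our brains are not built for this.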
As you said,
social media took, you know, five, six years to get to market penetration,
and elections and journalism kind of got sucked into the black hole
of that superorganism.
In the case of AI,
you know,
with the stuff that we use: it took Instagram,
I think, two years to get to a hundred million users, and it
took ChatGPT two months.
So we're living with a technology that's changing and undermining
the assumptions about how our world worked.
We can,
humanity can absorb new technologies,
but it takes time.
And the challenge is that the time pressure under which you have to absorb that new
change is greater than ever before.
This is the fastest that we've ever had to face it.
Just to provide one quick thought experiment,
a friend of mine who's an AI policy researcher at one of the major labs,
he said,
imagine that we just named that humanity has a finite capacity to absorb new
technologies that undermine our assumptions about how everything works.
And imagine, instead of just releasing all the new technology as fast as
possible and just blowing past those finite absorption boundaries,
what if you had to apply for a license to release a new technology into the
commons?
And so we're basically consciously choosing as a society
which new technologies we want to absorb and prioritize, and which
ones we want to, like, give ourselves a little bit more time on.
It's not yes or no to technology.
It's how do we absorb technologies at the pace that we can get it right.
And I think it's an interesting thought experiment for people to think about.
We're going to take a quick break,
but when we come back,
the story of how Tristan was alerted to the dangers of another nascent
technology,
one that he says could pose an existential threat to the entire world.
Stay with us.
I'm Guy Raz, and you're listening to How I Built This Lab.
We actually partnered with the folks over at Miro to create a
How to Build a Podcast Miroverse template to help you,
you know,
kickstart your journey on making your own podcast.
Check it out and let me know what you think.
You can find our template at Miro.com slash H-I-B-T.
That's M-I-R-O dot com slash H-I-B-T
to check out our Miroverse template for yourself.
An epic matchup between your two favorite teams,
and you're at the game getting the most from what it means to be here with American Express.
You breeze through the card member entrance, stop by the lounge.
Now it's almost tip-off, and everyone's already on their feet.
This is gonna be good.
That's the powerful backing of American Express.
See how to elevate your live sports experience at AmericanExpress.com slash with Amex.
Eligible American Express card required.
Benefits vary by card and by venue.
Terms apply.
Welcome back to How I Built This Lab.
I'm Guy Raz, and my guest is Tristan Harris, co-founder of the Center for Humane Technology.
In 2018, Tristan co-founded that nonprofit,
and at the time, it was focused on the dangers of social media.
But then, in late 2022, early 2023,
Tristan started getting messages from people working on a different technology.
You and Aza Raskin, one of your co-founders, were contacted by people working in AI,
and they were sounding the alarm.
And the analogy that you've made about this encounter really alarmed me.
You compared it to the Manhattan Project. Can you describe or explain that a little bit?
Yeah, well, a lot of people analogize that the invention of AI is as significant
as the invention of the atomic bomb.
Now, that might sound alarmist or panic-creating or something like that,
and it's not, actually.
And it's because of how the atomic bomb restructured the world,
and it was a new kind of power that whoever had it was clearly sort of the dog on top of the food chain.
And with AI, what people don't understand is if you genuinely build something
that can do full artificial intelligence across all kinds of cognitive labor,
meaning scientific labor, research labor, market analyst labor, financial labor,
if you can out-compete all stock traders on the stock market,
if you can out-compete all military strategy games,
if you can out-compete anyone in writing text to influence people on the internet,
if you have that AI system that can do that,
that is a new kind of power in which your position in the world
will be greatest relative to everybody else.
And so there is a new atomic bomb project in the form of those who are racing to build
towards this fullest expression of AI,
which some people call artificial general intelligence.
Remember that the stated mission
of OpenAI, Anthropic, and DeepMind from the very beginning,
those three companies, has been to create artificial general intelligence.
So when we got calls from some people in those labs,
it felt like getting a call from a scientist working on the Manhattan Project
before you knew what the Manhattan Project was.
Okay, so in 2022, you are contacted by AI researchers
who are kind of ringing the alarm, right,
that there's this AI arms race underway.
And so presumably they wanted you to help spread the word
about what was happening, right?
Yeah, this is late 2022, early 2023.
And frankly, as you said, when we started the Center for Humane Technology,
we had all the other issues of how social media is undermining democracies.
We had our plate full.
And actually, they were asking for our help.
They basically said, okay, you all raised the alarm about social media
with The Social Dilemma.
You have a huge public platform.
Listen, this race to build AI has gotten out of hand, and OpenAI, Anthropic, and
DeepMind are now in this race to release things as fast as possible.
And this race is going to go to a dangerous place
if we don't have some outside pressure that can help bend the curve,
maybe slow it down a little bit, create some international norms,
create some safety guardrails.
Would you use your public platform to help raise the alarm?
And you did.
And you and your co-founder, Aza Raskin,
would go on to build a presentation called the AI Dilemma.
I think you first delivered it in March of 2023,
here in the Bay Area, and you've presented it several times since then.
I've seen it live.
It's available online.
Anybody can watch it.
What was the message that you were trying to send with this presentation?
You know, the main point we made in the presentation
is that social media was really humanity's first contact experience
with a mass-deployed AI, because it's an AI pointed at your 13-year-old's brain
trying to calculate what's the perfect next TikTok video to show you
or what's the perfect next
politically outrageous tweet that will keep you scrolling.
And we saw how that experiment went.
You know, there's a lot of really good things that social media has done,
but the overall picture of what social media has done,
if you really take into account the full scope of impacts and externalities,
is it's also created the backdrop of a more addicted, outraged, polarized,
narcissistic breakdown of truth, you know, lower trust society.
And that that is like undermining the quality of the soil underneath your feet
while you have a few nice,
shiny toys above the soil.
We knew that no matter what positive stories we were telling about social media,
unless you pay attention to the incentives,
which is to maximize attention and engagement,
which produces that race to the bottom of the brainstem,
that that was going to be the driving force
of what would tell you which future we're going to land into.
And if we care about which AI future we get,
we're going to hear a lot of positive stories about AI developing antibiotics
and solutions to climate change and new drugs for cancer.
And of course, I have a beloved, someone I'm in a relationship with,
who has cancer right now.
I want as many of those solutions as we possibly can get as fast as possible.
But we have to also look at the incentives,
which is that it's a race to roll out AI capabilities as fast as possible,
like handing out new magic wands for new kinds of things
that people can do that they couldn't even do four months ago.
The fact that, you know,
there's a new magic wand that was released last year
where, with three seconds of your voice, Guy,
I can talk to your bank representative or your grandmother
and pretend to be you and get sensitive information out of them.
And society wasn't prepared for that magic wand
to be rolled out. I want you to explain, there was a big breakthrough in 2017. Essentially,
Google announced this development, it's called the Transformer model. And this helped kind of
spark this breakthrough in what we now know as large language models. And that kind of
helped to trigger what's now called generative AI. But help me understand how it works. Because I
mean, I think a lot of people, you know, are thinking about or, you know, hearing about
breakthroughs in scientific developments and cancer research and climate change technology,
things that are going to benefit us, things that will make our lives easier. But at the same time,
from what I understand, based on what you've talked about, a lot of the scientists and
researchers working with large language models are kind of developing a Prometheus, something that
already they know is growing faster and becoming smarter, faster than they ever anticipated.
Yeah.
So just to say that I was pretty skeptical of a lot of the bigger AI risks. There's a community
in San Francisco called the effective altruists that has been worried about bigger AI risks for a long
time. Yeah, I was incredibly skeptical of a lot of this stuff. In fact, I actually told them,
I actually think you all are misguided. You're not seeing the AI that's been released right
underneath your feet that has already wrecked the world. And it's called social media. So just want
to say that I'm not walking into this conversation wanting to hype AI capabilities. What changed
though, is that in 2017,
there was a paper published at Google called "Attention Is All You Need," in which they invented
this new kind of AI paradigm of transformers. I won't go in too much in the technical details,
but we swapped the engine underneath the hood of AI in 2017. And everything that we used to be
skeptical of about AI up until 2017, I would have agreed with everybody on. And you know,
there's AI when Siri mispronounces your mom's name. And there's AI when Google Maps gets the
directions wrong or mispronounces a street address. But that AI that falls over and makes mistakes
is different
from the AI of transformers, which is basically like a big brain neural network, where you're
throwing more data, more training, more compute at it. So for example, with GPT-4, they spent $100
million to get a bunch of NVIDIA chips to churn away for something like six months,
and then come out with this weird digital spaghetti brain. This is a massive neural network.
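For readers who want to see what the transformer's core operation actually looks like, here is a minimal sketch of the scaled dot-product attention introduced in that 2017 paper; the toy dimensions and random weights are illustrative assumptions, not any lab's actual model.

```python
# Minimal sketch of scaled dot-product attention, the core operation of
# the transformer architecture from "Attention Is All You Need" (2017).
# Toy sizes and random weights are illustrative assumptions only.
import numpy as np

def attention(X, Wq, Wk, Wv):
    # Each token forms a query, compares it against every token's key,
    # and takes a softmax-weighted blend of all the values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # token-to-token relevance
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                               # blended value per token

rng = np.random.default_rng(0)
seq_len, d = 4, 8                                    # 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d))                    # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)                # -> (4, 8)
```

Scaling this same operation up, with vastly more parameters, data, and compute, is the "throwing more data, more training, more compute at it" that Tristan describes.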
And the thing about it is that as you scale these digital brains,
they end up with more emergent capabilities that no one who built them
anticipated. So suddenly, when they scaled, you know, GPT-4, they tested it, and it could explain
jokes. You could give it an image of an iPhone where, at the bottom port,
there's an old VGA cable plugged in, like those old monitor cables from the 1990s.
Yeah.
And you could say, what's funny about this image? And it explained, well, it's funny,
because an iPhone can't have a 1990s VGA cable at the bottom of the iPhone.
Wow. Yeah.
It's not that it had ever seen that image before. It's not that it was trained on that.
And this is not to sort of conjure this sort of mythical blank check of AI is going to have all
sorts of magic capabilities. It's just that we know that it gains capabilities that people may
not notice for a long time. It took two years, for example, for researchers to figure out that
GPT-3.5, the predecessor to GPT-4, had research-grade chemistry knowledge, meaning you
could ask it questions about how to synthesize dangerous chemicals, and it would tell you
how to do that.
By the way, nobody knew that. Nobody knew that it had developed those capabilities on its own.
That's right. And it's confusing, by the way, because I want to also, for those who are
listening to this and skeptical, because they're like, well, that's true, but look at how many
times it hallucinates and it gets things wrong. That's 100% correct. It's hallucinating and
getting a bunch of things wrong. So we've never been around this new kind of brain that is
simultaneously, in certain ways, better than humans at a bunch of things, but also makes really dumb
mistakes, surprising and kind of almost embarrassing ones.
It's like a weird combo that we're not used to, but it's because it's an artificially intelligent
mind. It's not a kind of mind that we have any previous sort of knowledge about.
That's part one of my conversation with Tristan Harris, co-founder of the Center for Humane
Technology. You can catch the rest of my conversation with Tristan where we discuss
how AI could undermine the foundations of our society and what we can do to prevent that.
That's coming up in part two next week. And thanks for listening to the show this week.
Please make sure to click the follow button and subscribe to our channel.
We'll see you next time.
Bye.
If you like How I Built This, you can listen early and ad-free right now by joining Wondery
Plus in the Wondery app or on Apple Podcasts. Prime members can listen ad free on Amazon Music.
Before you go, tell us about yourself by filling out a short survey at wondery.com slash survey.
Hey, everyone, it's Guy Raz here. And I have a new show that I think you're going to love
from Wondery and hosted by Laura Beal, the critically acclaimed podcast Dr. Death is back
with a new season called Doctor Death, Bad Magic.
It's a story of miraculous cures, magic, and murder.
When a charismatic doctor announces
revolutionary treatments for cancer and HIV,
it seems like the world has been given a miracle cure.
Medical experts rush to praise Dr. Sirhat Gumruku as a genius.
But when a team of private researchers
looks into Sirhat's background,
they begin to suspect the brilliant doctor
is hiding a shocking secret.
And when a man is found dead in the snow
with his wrists shackled and bullet casings
speckling the snowbank,
Sirhat would no longer be known
for world-changing treatments.
He'd be known as a fraud
and a key suspect in a grisly murder.
Follow Doctor Death, Bad Magic on the Wondery app
or wherever you get your podcasts.
You can binge all episodes of Doctor Death, Bad Magic
ad-free right now by joining Wondery Plus
in the Wondery app or on Apple Podcasts.