Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
This message comes from Capital One. Say hello to stress-free subscription management. Easily track, block, or cancel recurring charges right from the Capital One mobile app. Simple as that. Learn more at capitalone.com slash subscriptions. Terms and conditions apply.

This message comes from Insperity, providing HR services and technology, from payroll, benefits, and HR compliance to talent development. Learn more at insperity.com slash HR matters.
0:33
This is the TED Radio Hour. Each week, groundbreaking TED Talks. Our job now is to dream big. Delivered at TED conferences. To bring about the future we want to see. Around the world. To understand who we are. From those talks, we bring you speakers and ideas that will surprise you. You just don't know what you're gonna find. Challenge you. We have to ask, is it noteworthy? And even change you. I literally feel like I'm a different person. Do you feel that way? Ideas worth spreading. From TED and NPR.
1:07
I'm Manoush Zomorodi. It was about a year after 9-11. In 2002, Mustafa Suleyman was a student at Oxford University when a friend decided to start a helpline for young British Muslims.

A good friend of mine at the time had just started the first prototype one evening, I think it was a Thursday evening, offering counselling services on the phone.

Mustafa was intrigued. He ended up joining the project to help grow it.

We became essentially co-founders.

Their service was in hot demand.

It was a pretty remarkable time because, you know, young British Muslims were feeling judged by 9-11, as though they were responsible or somehow complicit. There was a lot of Islamophobia.

But what they found these young people needed was really just someone to talk to.

Most of the challenges that they were working through were, you know, the sort of conventional things that a young person might deal with, like relationships, bullying, self-harm, increasing homophobia in the community. Fundamentally, I think people were just looking to find support in a language that made sense to them, you know, like a London accent, using sort of Islamic terminology, and having a broader understanding of, you know, whether it was a Bangladeshi or Pakistani culture or an Arab culture, combined with the religious foundation. Just being able to reflect back that empathy went a long way.
2:53
20 years later, the Muslim Youth Helpline is still going strong. And Mustafa Suleyman now lives in Silicon Valley, where he's been working on another kind of helper.

I'm the CEO of Microsoft AI, and I work on building new AI systems and products. These days, I'm working on an AI companion.

With artificial intelligence and the backing of one of the biggest tech companies in the world, Mustafa Suleyman wants us each to have our own digital assistant.

I can get philosophical, creative, or poetic if you like.

Available 24-7, kind of like a personal helpline.

I can be your sounding board as you work through challenges. I can write stories, brainstorm ideas. Think of me as your superpower in your pocket.

How can it serve you? How can it support you? How can it save you time? How can it reduce your anxiety? How can it help make you feel smarter and more confident and more prepared to go about your day? With no judgment, no pressure.
4:01
Hey, Manoush! Nice to hear from you. What's on your mind today?

Microsoft calls this assistant Copilot, and it was released at the end of 2023. I've been playing around with it. Well, I need to interview Mustafa Suleyman, your creator. What do you think I should ask him?

I'd go for topics like his vision for AI's role in society, the balance between innovation and ethics, and what he sees as the most exciting future possibilities for AI.

Okay, these are good. Now can you please write the intro for this episode?

Sure thing. Here's a possible intro.
4:43
Hey, it's Manoush Zomorodi, and today's TED Radio Hour is dedicated to a deep dive with none other than Mustafa Suleyman. Co-founder of DeepMind and author of The Coming Wave, Mustafa has been a pivotal force in AI, driving groundbreaking advancements and sparking intense debates. In this episode, we'll discuss the promises and perils of artificial intelligence, and whether we're truly prepared for the wave of change it's bringing. Is AI evolving too fast for society to adapt? Are we doing enough to mitigate the risks? It's an hour of insight, reflection, and some tough questions. So, let's jump in.
5:30
As Copilot noted, Mustafa Suleyman is a long-time pioneer in the field of AI. After he dropped out of Oxford University, he worked for London's mayor. He was even part of an NGO's delegation to the Copenhagen climate negotiations in 2009. He wasn't impressed.

There were so many sort of difficult egos and entrenched sort of interests. And I just felt a sense of sort of frustration, and I felt kind of demoralised after that experience.

Mustafa wanted to change the world. Governments and nonprofits didn't seem terribly effective. But there was one company that was managing to change the behavior of millions of people.

Facebook was exploding at that time. It had got to a hundred million monthly active users in the course of a couple of years. And it was pretty clear to me that that was gonna have a profound impact, more so than anything I could do in sort of the world of social activism.
6:39
Around then, Mustafa started hanging out with the older brother of a friend, a software genius named Demis Hassabis, who had been designing computer games since he was a teen. Demis saw how to make Mustafa's vision possible.

He was coming at the world from a completely different perspective, which I found really interesting. His belief was that we would just simulate complexity in the world. This new suite of methods in AI, self-learning systems, were coming up strong and looking likely to work. We really just clicked on that kind of technical and sociocultural view of how to positively impact the world.

Together, the two decided to start one of the first AI companies ever. They called it DeepMind.

It was very clear from even those early stages that if we got the technology right, and it was going to be this decade that led to major, major breakthroughs, then the consequences for humanity were going to be significant.
7:49
I've been lucky enough to be working on AI for almost 15 years now.

Mustafa Suleyman picks up the story from the TED stage.

Back when I started, to describe it as fringe would be an understatement. Researchers would say, no, no, we're only working on machine learning, because working on AI was seen as way too out there. In 2010, just the very mention of the phrase AGI, artificial general intelligence, would get you some seriously strange looks and even a cold shoulder. You're actually building AGI, people would say. Isn't that something out of science fiction? People thought it was 50 years away, if it was even possible at all. Talk of AI was, I guess, kind of embarrassing. People generally thought we were weird. And I guess in some ways we kind of were. The ironic thing is that many people still don't think it's possible. Many people still think that we're crazy.

At the time, people really thought we were crazy. I mean, it was so far out there. It was really strange. And we were a strange group of people. I mean, our third co-founder, Shane Legg, who, you know, is basically a mathematician, spent his entire career thinking about how to formulate a definition of intelligence that we could use to measure our progress in the company. We were kind of outsiders, and there weren't very many people willing to back us.
9:23
So when you started the company, what was it that you pictured in your mind that you hoped to achieve?

I mean, we wrote the business plan in the summer of 2010 and took it to Silicon Valley to shop it around a bunch of people. And the first line of the business plan was, you know, building artificial general intelligence safely and ethically. And then that evolved into a two-line mission, which was to solve intelligence and use it to make the world a better place. And that dual frame was kind of the foundation of the company, our belief that science and technology is the engine of progress. There are some downsides, but I certainly think this is the engine of creating civilization in a more healthy and sustainable way for the very long term.
10:20
And, you know, if you think about it, everything that is of value in our world today is a product of us humans being able to take materials from our environment and recompose those into products and services and other compounds that are useful to us, from our lights to carpets to everything that you see in front of you today.
10:46
In 2014, DeepMind ended up being acquired by Google.

That must have been huge for you in terms of money, resources. You were off to the races.

Yeah, it was a huge event. I mean, it was the largest acquisition Google had ever made outside of the US. We became Google's primary AGI bet, and we were empowered with tremendous resources, both people and computation, to go and both do the hard science but also work on really important applied, practical product problems. And that's where I kind of really honed my craft, if you like, as a product maker. It was just the most amazing experience.
11:35
And as early as 2015, I actually ran a hackathon project in my applied group at DeepMind, and the theme of our hackathon was to find high-impact ways of using our technologies for good. And so there were lots of prototype hackathon experiments in healthcare or in energy systems, both of which went on to become significant parts of DeepMind Applied. You know, my group published three papers in Nature showing human-level performance on, for example, classifying eye diseases, the top 50 diseases, from OCT scans, three-dimensional eye scans; showing that we could perform as well as a panel of radiologists in identifying cancerous tissues in mammograms; and showing that we could predict the onset of sepsis and acute kidney injury as well as the top doctors, using vast amounts of data. And this was way back in sort of 2016, 2017, 2018, and helped to kind of lay a foundation for the application of large-scale machine learning to, you know, social problems. That was very much my motivation.
13:03
In a minute, more of the incredible breakthroughs Mustafa had at Google that changed the way medicine works today, and why he ended up leaving the company.

The company was just being too slow to get things into production.

Today on the show, the CEO of Microsoft AI, Mustafa Suleyman, and the future of artificial intelligence. I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. Stay with us.
13:42
This message comes from Capella University. With Capella's FlexPath learning format, you can set your own deadlines and learn on your schedule. A different future is closer than you think with Capella University. Learn more at capella.edu.

Support for NPR and the following message come from Edward Jones. What does it mean to be rich? Maybe it's less about reaching a magic number and more about discovering the magic in life. Edward Jones financial advisors are people you can count on for financial strategies that help support a life you love, because the key to being rich is knowing what counts. Learn about this comprehensive approach to planning at edwardjones.com slash find your rich. Edward Jones, member SIPC.

This message is sponsored by Greenlight, the debit card and money app made for families, where kids learn how to save, invest, and spend wisely, with parental controls built in. Sign up this holiday season at greenlight.com slash npr.

This message comes from NPR sponsor Viking, committed to exploring the world in comfort. Journey through the heart of Europe on an elegant Viking longship with thoughtful service, destination-focused dining, and cultural enrichment on board and on shore. And every Viking voyage is all-inclusive, with no children and no casinos. Discover more at viking.com.
15:15
Hey, it's Manoush. This past year, we have brought you stories about AI, relationships, climate change, neurotechnology, dinosaurs, privacy, human behavior, and even one about what it means to create thriving public spaces.

All of these public spaces that we take for granted, you know, all of this social infrastructure.

We work really hard to bring you all of these stories. That is our public service, kind of like a park or a public library.

And libraries are these beautiful hubs that can take on the shape of whatever community that uses them really needs.

Public media is infrastructure that we all can use. It's free, it's for everyone, and our mission is to help create a more informed public. So this time of year, I want to thank you for listening and to say that one of the best ways you can support this public service is by signing up for NPR Plus. With one subscription, you are supporting all the work that NPR does, and you get sponsor-free episodes and perks across NPR's catalog of 25 podcasts. It's a great time to join, because throughout December, NPR Plus listeners have access to even more insights from TED speakers to help you kick off 2025 right, from making big life decisions to being more hopeful, to carving out time for what is important to you. We have got you covered. Just visit plus.npr.org. You can also find the link in our episode notes. And the other way you can give is to make a donation at donate.npr.org. Your gifts are tax deductible either way. Thank you so much for being here. And now let's get back to the show.
17:10
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, we're spending the hour with Mustafa Suleyman, the CEO of Microsoft AI, who was the co-founder of one of the first AI tech companies ever, DeepMind. The company was acquired by Google in 2014, and Mustafa and his co-founder, Demis Hassabis, went on to have numerous scientific breakthroughs, including a project called AlphaFold. AlphaFold uses AI to figure out incredibly complex protein structures, the molecules that are the building blocks of every biological process in our bodies. It won the 2024 Nobel Prize in Chemistry. Here he is on the TED stage.
17:58
The rule of thumb is that it takes one PhD student their whole PhD, so four or five years, to uncover one structure. But there are 200 million proteins known to nature, so it would, you know, just take forever to do that. And so we managed to actually fold, using AlphaFold, in one year, all 200 million proteins known to science. So that's a billion years of PhD time saved.
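That headline figure is just the talk's own numbers multiplied out; a minimal check, assuming the five-years-per-structure estimate:

$$2 \times 10^{8}\ \text{proteins} \times 5\ \text{PhD-years per protein} = 10^{9}\ \text{PhD-years, i.e., a billion years.}$$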
18:23
AlphaFold was just one of the ways that computers were solving biological problems, says Mustafa.

I mean, this really, AlphaFold really was the first time we started hearing about, I guess, the term computational biology, this idea of using tech and science to rethink how biology works and then getting it out into the world, changing the way we treat diseases or maybe developing crops that are more resilient. All kinds of ways that what you figured out at DeepMind would potentially change the world, which was the goal.

That was the core mission of the company from day one.
19:05
Could we take that process of synthesis and prediction and turn that into a general-purpose system that could use all the data and all the compute that we have in the world to try and solve these very hard problems, whether it's growing crops in drought-resistant environments, whether it's more efficient ways to do water desalination, whether it's long-term battery storage.
19:33
In 2022, Mustafa decided to leave DeepMind and Google. There were reports that he clashed with Google leadership over using technology for military projects, and that some employees had complaints about his management style. Mustafa says he was simply frustrated with the pace of innovation and was anxious to get an AI product out into the world that everyone could use.
19:58
You know, I really wanted to scale these large language models and get them into production as quickly as possible. The company was just being too slow to get things into production. I felt that it was an opportunity, and I had to go and get all the resources that, you know, I needed. So I went out and raised over a billion dollars, and, you know, moved just extremely fast with a very small team. And I think at the time, I was, and I am still now, very much a believer that these AI companions are going to become an everyday part of our lives.
20:46
Mustafa's AI, called Pi, debuted in 2023. But by then, another AI had already taken the world by storm. Several months earlier, the company OpenAI had rolled out its AI product, ChatGPT, for free.

ChatGPT answers questions and writes essays that read like the work of a human.

The artificial intelligence tool has the potential to change the way we live.

This was the point when AI went mainstream, and Mustafa's small company didn't have the resources of these bigger tech companies.
21:20
The truth is, the pace of big tech had accelerated in 2023. Google, Microsoft, OpenAI, everyone was really going for it, and essentially made these huge models available to everybody for free, which sort of changed our business model.

At that very tough moment, Microsoft CEO Satya Nadella approached Mustafa and made him an offer.

You know, the offer he made was, well, look, we've got all the computation and the data and all the distribution that you could dream of. Come and run products at Microsoft and the future of AI here. And that was a huge offer.
22:03
So the product that we're building at Microsoft is called Copilot. And the reason why we've called it that is because a copilot is an aid, you know, it's a concierge, it's an assistant, it's in your corner, aligned to your interests, on your team, backing you up.

Yeah, I actually talked to Copilot about my interview with you. How do you feel about Mustafa Suleyman, considering that he is your creator?

I'd say I'm intrigued by Mustafa.

Just so you know, she said she's intrigued by you, just as she is with many innovators in the tech world.

And his work has pushed the boundaries of what's possible.
22:42
I will say, it was very helpful in terms of suggesting topics to bring up with you. But it could not check the weather. And is that simply because of the access or the data, the information that the agent or Copilot has access to, that it's not immediate yet?
23:03
Yeah, there are loads of weaknesses of our Copilot today, but all of those things will come. I mean, we're working on all of those things. It will have permanent memory, session to session, infinitely across time. It'll be able to access your files and folders if you choose to make those available, your email, your calendar, and be able to browse the web for you in the future. And all of that will be integrated into these like seamless, conversational, very friendly, polite experiences. I literally was talking to it last night about what to watch, and we were going back and forth on whether I would enjoy Pan's Labyrinth and whether I've got the time to watch all of Dune, because I haven't seen Dune 1 or Dune 2.
23:46
I mean, it reminds me a little of the helpline for Muslim youth that you were describing. It's helpful, it's infinitely patient, it's supportive. Are we talking mostly about companionship and the mental health resources that this can provide, or how do you see it?
24:03
I think the cool thing about it is that it doesn't judge you for asking a stupid question. If you have to ask that question three times over in five different ways, you know, even your best friend might be like, oh, come on, man, you're asking me this again, seriously? Whereas, you know, the AI is here for it. There's obviously some similarities to stuff I've done in the past, and I guess it's kind of inspired by nonviolent communication, if I'm honest with you. It's certainly not, like, a mental health app, you know, anything like that. It's just got a little bit of empathy. It's got some emotional intelligence, and I think that's no bad thing.
24:48
Gosh, is that where we've gotten to, that technology has to tell us how to communicate with each other better, nonviolently?

Well, it doesn't tell us, it just... demonstrates.

Yeah, exactly. It demonstrates. But that's what technology has always done. The choice architecture, the buttons, the language, that is shaping our behavior, whether it's an infinite scrolling feed or whether it's an encouragement to go and film your lunch, you know, for Instagram, or create a little video for TikTok. I mean, all of those inputs shape behavior. And so we have to be super thoughtful about what those inputs actually are, because technology shapes us in return, and we're in this constant cyclical feedback loop of interaction. And that is kind of what's propelling us forward as a civilization, and it's very powerful. And so far, so good. It's actually been, you know, very, very productive over the last couple of centuries. Science has massively delivered for us, but we shouldn't just assume that that's going to happen naturally or inevitably. We have to really be deliberate and thoughtful about what the consequences are ahead of time.
26:07
In the book that you wrote that came out in 2023, you really tried to put what's happening with AI in a historical context. So if the printing press let people own and share information, and the personal computer let people search and disseminate information, tell me how you're thinking about this, how you can explain to people what AI will do for people now.
26:31
Each new wave of technology is fundamentally a new interface. It's a new interlocutor, a translator, a way of accessing and creating new information, new tools, new knowledge. So if the last wave of social media and web search helped people to access information, this wave is going to help us to invent and create new ideas, be it in science or in culture and media and entertainment. And I think everybody is ultimately going to have an AI companion, just as we have a search engine or a smartphone, and just as we use a browser. You'll just ask your computer in natural language: you know, can you write that contract and check that it's okay? Can you create that new piece of software for me? And you're just going to describe what it is. Can you help me plan that trip, you know, for my parents that are coming into town? So, you know, each kind of breakthrough is a change in the interface, which itself changes what we can actually get done, and I think it's going to be pretty transformational.
27:45
With the invention of computers, we quickly jumped from the first mainframes and transistors to today's smartphones and virtual-reality headsets. Information, knowledge, communication, computation. In this revolution, creation has exploded like never before. And now a new wave is upon us: artificial intelligence.
28:09
These waves of history are clearly speeding up, as each one is amplified and accelerated by the last. And if you look back, it's clear that we are in the fastest and most consequential wave ever. The journeys of humanity and technology are now deeply intertwined. In just 18 months, over a billion people have used large language models. We've witnessed one landmark event after another. Just a few years ago, people said that AI would never be creative. And yet AI now feels like an endless river of creativity, making poetry and images and music and video that stretch the imagination. People said it would never be empathetic. And yet today, millions of people enjoy meaningful conversations with AIs, talking about their hopes and dreams and helping them work through difficult emotional challenges. AIs can now drive cars, manage energy grids, and even invent new molecules. Just a few years ago, each of these was impossible. And all of this is turbocharged by exponentials of data and computation.
29:22
Last year, Inflection 2.5, our latest model, used five billion times more computation than the DeepMind AI that beat the old-school Atari games just over 10 years ago. That's nine orders of magnitude more computation: 10x, every year, for almost a decade.
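A worked version of that compounding, taking the talk's 10x-per-year figure at face value (the nine-year count is an assumption implied by "almost a decade"):

$$\underbrace{10 \times 10 \times \cdots \times 10}_{9\ \text{years at}\ 10\times\ \text{per year}} = 10^{9},$$

so nine consecutive ten-fold jumps give a billion-fold increase, the nine orders of magnitude quoted; the five-billion-times figure, $5 \times 10^{9}$, sits just above it.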
29:46
Over the same time, the size of these models has grown from tens of millions of parameters, to then billions of parameters, and very soon tens of trillions of parameters. If someone did nothing but read 24 hours a day for their entire life, they'd consume eight billion words. And of course, that's a lot of words. But today, the most advanced AIs consume more than eight trillion words in a single month of training.
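For scale, a rough check of that reading figure, assuming about 200 words per minute sustained over an 80-year lifespan (the pace and lifespan are assumptions, not from the talk):

$$200 \times 60 \times 24 \times 365 \times 80 \approx 8.4 \times 10^{9}\ \text{words},$$

which matches the quoted eight billion; eight trillion words a month is then roughly a thousand such reading lifetimes, every month.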
30:15
And all of this is set to continue. The long arc of technological history is now in an extraordinary new phase.
30:26
I think the way to think about this is that we're at the very earliest stages of development of this technology. Today, when you need to go and sort of ask a question like that, you go to a search engine like Google, you type in a query, you get these 10 blue links, you have to then go to the web page, you have to read all of this sort of complicated stuff, formatted in 25 different ways, and that takes time. And it means that you don't always want to invest two or three minutes to go and make sense of that. You don't always have the energy for it. Whereas you can always just, like, quickly send off a text or quickly make a phone call or leave a voice note. So I think it's lowering the barrier to entry to access high-quality information.

But how do we know it's high-quality information? How do we make sure of that?
31:07
Well, there was an amazing study two months ago that was published in the journal Science, which showed that for over a thousand participants who held strong conspiracy-theory beliefs, so this is things like flat Earth or all kinds of things about the COVID vaccine, et cetera, et cetera, after a month of conversation with an AI that had been primed to talk about these conspiracy theories, it reduced the belief in those conspiracy theories by 20%.
31:43
Which is pretty significant, and I think shows the first glimmers of what it's like to have, you know, that kind of patient, insistent, deliberate access to high-quality information. And in the case of this study, expert human fact-checkers went back and read over the transcripts. The point is, the value of these systems is that they actually are more accurate. They're more accurate than your average human at answering any question today, and they're more, you know, accurate and conversational than we ever thought they were going to be. Which is pretty remarkable. And that's only going to continue.
32:31
So what does this mean in practice? Well, just as the internet gave us the browser, and the smartphone gave us apps, the cloud-based supercomputer is ushering in a new era of ubiquitous AIs. Everything will soon be represented by a conversational interface, or, to put it another way, a personal AI.
32:54
And these AIs will be infinitely knowledgeable, and soon they'll be factually accurate and reliable. They'll have near-perfect IQ. They'll also have exceptional EQ. They'll be kind, supportive, empathetic. These elements on their own would be transformational. Just imagine if everybody had a personalized tutor in their pocket and access to low-cost medical advice. A lawyer and a doctor, a business strategist and coach, in your pocket 24 hours a day. But things really start to change when they develop what I call AQ, their actions quotient. This is their ability to actually get stuff done in the digital and physical world. And before long, it won't just be people that have AIs. Strange as it may sound, every organization, from small business to nonprofit to national government, each will have their own. Every town, building, and object will be represented by a unique interactive persona. And these won't just be mechanistic assistants. They'll be companions, confidants, colleagues, friends, and partners, as varied and unique as we all are. At this point, AIs will convincingly imitate humans at most tasks. And we'll feel this at the most intimate of scales: an AI organizing a community get-together for an elderly neighbour, a sympathetic expert helping you make sense of a difficult diagnosis. But we'll also feel it at the largest scales: accelerating scientific discovery, autonomous cars on the roads, drones in the skies. They'll both order the takeout and run the power station. They'll interact with us and, of course, with each other. They'll speak every language, take in every pattern of sensor data, sights, sounds, streams and streams of information, far beyond what any one of us could consume in a thousand lifetimes.
34:59
we we come back, the the
35:02
remarkable, but also terrifying visions what
35:04
AI may be capable of
35:06
in the future. Distopian scenarios that
35:08
might make you wanna unplug
35:10
all your devices forever. Are
35:12
they realistic Or
35:15
just fear-mongering? CEO of
35:17
Microsoft AI, AI, Mustafa Sulaman, weighs
35:19
in. I'm Zamorodi, and and
35:21
you're listening to the TED Radio the
35:23
Ted NPR. from Be back in a
35:25
minute. be back in a minute. This
35:41
This message comes from Capital One, offering commercial solutions you can bank on. Now more than ever, your business faces unique challenges and opportunities. That's why Capital One offers a comprehensive suite of financial services, all tailored to your short- and long-term goals. Backed by the strength and stability of a top-10 bank, dedicated experts work with you to build lasting success. Explore the possibilities at capitalone.com slash commercial. Member FDIC.

This message comes from BetterHelp. This holiday season, do something for a special person in your life: you. Give yourself the gift of better mental health. BetterHelp online therapy connects you with a qualified therapist via phone, video, or live chat. It's convenient and affordable, and can be done from the comfort of your own home. Having someone to talk to is truly a gift, especially during the holidays. Visit betterhelp.com to get a discount off your first month.

This message comes from NPR sponsor GoodRx. Looking for relief from cold and flu symptoms? With GoodRx, you can save an average of $34 on cold and flu medications. Plus, find savings on everyday prescriptions. GoodRx lets you compare prescription prices at over 70,000 pharmacies and instantly find discounts. Even if you have insurance, GoodRx may beat your copay price. Save on flu prescriptions and more at goodrx.com.

This message comes from NPR sponsor Mint Mobile. From the gas pump to the grocery store, inflation is everywhere. So Mint Mobile is offering premium wireless starting at just $15 a month. To get your new phone plan for just $15, go to mintmobile.com slash switch.
37:34
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show: the future of artificial intelligence. We're spending the hour with the CEO of Microsoft AI, Mustafa Suleyman. In 2023, Mustafa wrote a book called The Coming Wave. In it, Suleyman delves into scenarios where AI could go dangerously wrong. I asked Microsoft's Copilot to help me talk through the reasons why he says AI could be so dangerous.
38:07
Imagine a world where artificial intelligence has seamlessly integrated into everyday life.

The first is what he calls asymmetry, meaning vast amounts of data that could get into the hands of a single person who could go rogue.

We're now concentrating knowledge and capability into smaller and smaller units that are transmissible. You can download open-source models, you know, for a few gigabytes and put them on a thumb drive. So that's the kind of asymmetric impact, because that therefore means that single actors or smaller groups of actors can have a massive one-to-many broadcast effect.

This is AI unleashing computing power to many, kind of like the anonymous hacker who could bring down an entire electrical grid. That kind of thing. Health care systems could be overwhelmed. Is that right?

That's right, you could just sort of do that immediately.
39:11
Okay, so number two is hyper-evolution, the software itself just evolving in hyper real time.

In this scenario, a rogue scientist or a bioterrorist could design a pathogen with specific traits, like heightened transmissibility or increased lethality. New iterations of weapons and surveillance could be developed to track people's movements, conversations, and even emotions through their online activities, and put to use before authorities have time to test them or to put any safeguards in place. A small error or a hacked system could result in catastrophic loss of life.

How could a country build a defense system if it doesn't even know what it needs to defend itself against?
39:59
Which brings us to the third dangerous attribute of AI: omni-use. One single kind of technology will be able to do everything.

Your AI companion will make phone calls, call other AIs, and will call other humans to check on stock or sort of availability of rooms in a hotel, or get some advice from a local tour guide on where you're going to visit, or it will send emails, you know, talk to another database or a back end. All of those things are essentially the AI learning to act on your behalf.
40:37
Sure, but what if your AI also talks to another database or makes a trade on your behalf and decides to shut down the entire stock market? Which relates to the fourth and final, maybe scariest, attribute: autonomy. AI that takes action on its own, without a human's go-ahead.

Autonomous weapon systems operate on battlefields, making split-second decisions about targeting and firing without direct human oversight.

I mean, autonomy is one of the core characteristics of these systems. It's a long road, and it's going to be many years before these systems are truly autonomous, and we want to be very careful about that, because we have to ask ourselves, like, what is the added benefit of the system operating autonomously, and is that worthwhile? Is it safe? Is it stable? Is it controllable? Can we really trust that autonomy? So yeah, I think it's one that we'll have to be very careful of.
41:41
The societal disruption could lead to widespread fear, mistrust, and geopolitical tension. Suleyman and other experts argue for robust oversight and ethical guidelines to ensure that these powerful technologies are used responsibly and safely.

For years, we in the AI community have had a tendency to refer to these as just tools. But that doesn't really capture what's actually happening here.

Here's Mustafa Suleyman on the TED stage.

AIs are clearly more dynamic, more ambiguous, more integrated, and more emergent than mere tools, which are entirely subject to human control.
42:18
So to contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as a new kind of digital species. Now, it's just an analogy. It's not a literal description, and it's not perfect. For a start, they clearly aren't biological in any traditional sense. But just pause for a moment and really think about what they already do. They communicate in our languages. They see what we see. They consume unimaginably large amounts of information. They have memory. They have personality. They have creativity. They can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them. And they do all this at levels of sophistication that is far beyond anything that we've ever known from a mere tool. And so saying AI is mainly about the math or the code is like saying we humans are mainly about carbon and water. It's true, but it completely misses the point. And yes, I get it, this is a super arresting thought. But I honestly think this frame helps sharpen our focus on the critical issues. What are the risks? What are the boundaries that we need to impose? What kind of AI do we want to build? This is a story that's still unfolding. Nothing should be accepted as a given. We all must choose what we create, what AIs we bring into the world, or not. These are the questions for all of us here today, and all of us are alive at this moment.
44:17
You lay out 10 strategies for containing AI, and one of the easiest, it seems, is having more researchers working on safety. Do you have more researchers working on Copilot safety? I mean, one thing that worries me is people using your AI to help them do destructive things or further their destructive views. Is that something you're thinking about at Microsoft?

Yeah, we have a big safety team. We are definitely very focused on that. We're very focused particularly on the sort of tone of the AI. Like, how do we make sure that it isn't too sycophantic? How do we make sure that it isn't over-flattering? How do we make sure that it doesn't mirror you and sort of lead to this sort of negative cycle of reinforcing unhealthy views? And that's a real art and craft, in trying to sort of engineer that healthy balance where, you know, your sort of AI companion can push back on your views in constructive ways without making you feel judged or making you feel angry, making you feel heard for your anger. It may be the case that you are angry about immigration, that you feel that you haven't had the opportunities and access to jobs in your community that you feel have been available to new people coming into your world. And so, you know, it's about being respectful and acknowledging that people do genuinely feel aggrieved by that, and not shutting them down because they don't adhere to some view of yours. So, you know, I think that's a very challenging line to draw. It requires care and attention.
46:04
And what role do you see yourself playing in terms of pushing the tech industry towards the public good? I mean, is that a role that you sort of are taking on? What do your fellow technologists think when they hear you talking about some of the more pessimistic visions you have for how AI could be deployed?
46:27
Well, I think I'm both a pessimist and an optimist, and that's not a bias, it's just an observation of the landscape before us. So most of all, I'm inspired by science, the practice of science. We have to say what we see and do our best to update a hypothesis that we have with respect to evidence. And so I see evidence for both trends, and that's why I wrote about them. Look, I think, you know, in terms of how we're shaping the industry, I'm a big fan of the work that many of these, you know, NGO activists and social activists have been doing in order to raise questions and to challenge and push back, and I think that's healthy. We need more of that, and I'm very open-minded to it. I've been very sort of encouraging of additional regulation for a long time. I think, you know, this is a moment when going slowly and adding friction to the system will be long-term beneficial. And I think it's rational to just be a little cautious and increase the burden of proof, and, you know, just make it a requirement that, for example, an AI shouldn't just be a straightforward imitation of a human. We want to create an aid, a concierge that is an amplifier and a supporter. So, you know, there are a lot of things to think through in terms of how this manifests in the world.
48:00
Do you think there should be mandatory testing requirements, that before any technology is released to the public, it should have to go through a certain series of tests, and if it doesn't pass, it doesn't make it to market?

I think we're approaching that time. Yep. I think sometime before 2030, we will need something like that. I don't know if now is quite the right time. You know, if you look back on the impact that these sort of chatbots have had in the last two or three years, it's been unbelievably positive, sort of overwhelmingly positive. So had we had those checks ahead of time three years ago, I think it could have slowed things down quite a lot. But that doesn't mean that it's not right to keep asking that same question every year and, you know, reconsider: is now the right time for pre-deployment testing? I think that's the right question.
48:59
There is a sense that Big Tech needs to regain trust from consumers. There's a lot of people who have just given up, you know, thrown up their hands and said, well, the convenience has outweighed all the digital privacy problems that we have. I give up. This is just the world we live in. But there are other people who are saying, I don't feel great about giving a tech company all my data, which is what you need to run some of these new AI tools, especially if you're telling me that terrible things could happen to it. You're clearly very cautious when it comes to the incredible powers of technology, but how are you balancing that with the demands on you to innovate and sell these products?
49:48
Well, I personally think that there is going to be a huge amount of value to the user to have your Copilot companion be able to read over your email, look at your calendar, schedule things, buy things for you, book and plan. And, you know, I think the truth is we'll have to wait and see if consumers agree with that. They may not, and that's a very fair and reasonable thing to do. But I believe that the utility will drive the way. And, you know, in conjunction with that, we have to make sure that we have the strongest privacy and security infrastructure to protect that information, just as we already do today, right? I mean, many, many billions of people trust their email with us at Microsoft and rely on Microsoft to protect their consumer work and their enterprise work. So that's a massive, massive priority for the company. But, you know, it isn't just the utility. It has to be really useful. Obviously, it has to have good privacy and security controls. But I think it's also about the way that we approach it. Do we take feedback? And do we admit when we make mistakes? How open-minded are we to different ways of doing things? You know, to our business model? So I think my commitment is to be as open-minded as I can on all those questions and just listen and innovate carefully, observe and iterate as we go. That's the best I can see to do at the moment.
51:26
In the past, unlocking economic growth often came with huge downsides. The economy expanded as people discovered new continents and opened up new frontiers, but they colonized populations at the same time. We built factories, but they were grim and dangerous places to work. We struck oil, but we polluted the planet. Now, because we are still designing and building AI, we have the potential and opportunity to do it better. Radically better. And today, we're not discovering a new continent and plundering its resources. We're building one from scratch. Sometimes people say that data or chips are the 21st century's new oil. But that's totally the wrong image. AI is to the mind what nuclear fusion is to energy: limitless, abundant, world-changing.
52:21
And AI really is different. That means we have to think about it creatively and honestly. We have to push our analogies and our metaphors to the very limits to be able to grapple with what's coming, because this is not just another invention. AI is itself an infinite inventor. And yes, this is exciting and promising and concerning and intriguing all at once. To be quite honest, it's pretty surreal. But step back, see it on the long view of glacial time, and these really are the very most appropriate metaphors that we have today. Since the beginning of life on Earth, we've been evolving, changing, and then creating everything around us in our human world today. And AI isn't something outside of this story. In fact, it's the very opposite. It's the whole of everything that we have created, distilled down into something that we can all interact with and benefit from. It's a reflection of humanity across time. And in this sense, it isn't a new species at all. This is where the metaphors end. AI isn't separate. AI isn't even, in some senses, new. AI is us. It's all of us. And this is perhaps the most promising and vital thing of all. As we build out AI, we can and must reflect all that is good, all that we love, all that is special about humanity: our empathy, our kindness, our curiosity, and our creativity. This, I would argue, is the greatest challenge of the 21st century, but also the most wonderful, inspiring, and hopeful opportunity for all of us. Thank you.
54:26
That was Mustafa Suleyman. He's the CEO of Microsoft AI and the author of the book The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma. You can see his full talk at ted.com.
54:44
Thank you so much for listening to our episode on the future of AI. It was produced by Katie Monteleone and edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes Rachel Faulkner White, James Delahoussaye, and Fiona Geiran. Our executive producer is Irene Noguchi. Our audio engineers were Becky Brown and Gilly Moon. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash, Alejandra Salazar, and Daniella Balarezo. I'm Manoush Zomorodi, and you have been listening to the TED Radio Hour from NPR.
55:23
from comes from message An outdated
55:25
approach to data infrastructure could
55:27
hold back a company's AI
55:29
ambitions. The new era of
55:32
AI requires a new approach
55:34
to data storage. From the
55:36
highest capacities to the highest
55:38
performance, performance, solidime storage solutions are
55:40
optimized to meet the complex
55:42
data demands at each stage
55:45
of the AI at pipeline, the
55:47
all with a smaller footprint
55:49
and incredible energy efficiency. Learn
55:51
how Saladime can help solidine can
55:53
help businesses achieve their
55:55
AI ambitions.com. ai.com. This