Episode Transcript
0:00
City of Dreams is an incredible new
0:02
movie from executive producers of Sound of
0:04
Freedom and executive producer Vivek Ramaswamy, opening
0:06
in theaters everywhere on Labor Day weekend.
0:08
Inspired by a true story, it's about
0:10
a young boy from Mexico whose dream
0:12
of becoming a soccer star is shattered
0:14
when he's trafficked into America to work
0:16
in a sweatshop where he becomes one
0:18
of the 12 million children who are,
0:20
right now, victims of modern slavery. It's
0:22
the story of how he finds the
0:25
hope and courage to fight for his
0:27
freedom. See this important story that sheds
0:29
a light on the real-life issue of
0:31
child trafficking and join the mission to stop
0:33
this injustice. Tony Robbins calls City of Dreams
0:35
a prayer to the universe, and Grammy Award
0:37
winner Luis Fonsi says it's a rallying cry
0:39
for justice, and it will keep you on
0:41
the edge of your seat. Critics
0:43
are calling City of Dreams inspirational, Oscar-worthy, powerful,
0:46
and eye-opening. You don't want to miss this
0:48
film. Make sure you see it in a
0:50
theater. City of Dreams,
0:53
in theaters August 30th, rated
0:55
R, under 17 not admitted
0:57
without parent. Buy tickets at
0:59
cityofdreamsmovie.com. And
1:02
now, a Blaze Media Podcast. My
1:05
next guest is sounding the
1:07
alarm on the catastrophic risks
1:09
posed by AI, from
1:12
totalitarianism to bioengineered pandemics
1:14
to a total takeover
1:16
of mankind. When
1:20
you think about the things that we could
1:22
be facing, it doesn't look real good for
1:24
the human race, but it's not too late
1:26
to turn the ship around and harness the
1:29
power of AI to
1:31
serve our interests. But
1:33
if we don't, well, I'll
1:36
let him tell you what happens. Welcome
1:38
to the podcast, the Executive Director
1:40
at the Center for AI Safety
1:43
and an advisor for Elon Musk's
1:46
xAI, Dan
1:48
Hendrycks. But
1:51
first, let me tell you
1:53
about Preborn, our sponsor. We're
1:55
going to be talking about life and
1:58
what is life. Life, the
2:02
age of spiritual machines, if you will.
2:05
We know what life is now. Maybe
2:08
a quarter of the country doesn't
2:10
know what life is, but
2:13
it is worth living on
2:15
both ends of the scale in the womb
2:18
and towards the end. We
2:21
need to bring an end to
2:23
abortion and define
2:25
life and really appreciate
2:27
life, or AI
2:30
will change everything for us. It
2:32
will take our programming of, eh,
2:34
that one's not worth that much.
2:37
And God only knows where it will take us. The
2:40
Ministry of Preborn is working every
2:42
single day to stop abortion, and
2:44
they do it by introducing an
2:46
expecting mom to her unborn baby
2:49
through a free ultrasound that you and
2:51
I will pitch in and pay for.
2:54
They rescue about 200
2:56
babies every day, and 280,000 babies have
3:03
been rescued so far just from the
3:05
ultrasound. And then also when
3:08
mom says, I don't have any support
3:10
system, they're there to offer assistance to
3:12
the mom and a support system for
3:14
up to two years after the baby
3:16
is born. Please help
3:19
out, if you will, make a
3:21
donation now. All you have to
3:23
do is just hit pound 250
3:25
and say the keyword baby. That's
3:27
pound 250, keyword baby,
3:30
or you can go to
3:32
preborn.com/Glenn. This
3:34
episode is brought to you by Shopify. Forget
3:38
the frustration of picking commerce platforms
3:40
when you switch your business to
3:42
Shopify, the global commerce platform that
3:44
supercharges your selling wherever you sell.
3:46
With Shopify, you'll harness the same
3:48
intuitive features, trusted apps and powerful
3:50
analytics used by the world's leading
3:52
brands. Sign up today for your
3:55
$1 per month trial
3:57
period at shopify.com/tech, all lowercase.
4:00
That's shopify.com/tech.
4:05
Hey Dan, welcome.
4:19
Hey, nice to meet you. Nice to
4:21
meet you. I am thrilled that you're on.
4:25
I have been thinking
4:27
about AI since I read The
4:30
Age of Spiritual Machines by Ray
4:32
Kurzweil. And that
4:34
so fascinated me. And
4:38
later I had a chance to talk to Ray. And
4:41
he's fascinating and terrifying, I think, at
4:43
the same time. Because,
4:46
you know, I don't see a lot of
4:48
people in your role. Can
4:50
you explain what you do
4:53
within the found, you
4:55
know, what you founded and what you do? Yes.
4:57
So I'm the director of the Center for AI Safety.
5:00
We focus on research and trying to
5:02
get other people to research and think
5:04
about risks from AI. And
5:07
we also help with policy
5:09
to try and suggest
5:12
policy interventions that will help reduce risks from AI.
5:16
Outside of that, I also
5:18
advise Elon Musk's AGI company,
5:20
xAI, as their sole safety
5:22
advisor. So we
5:24
have a variety of hats. There's a lot to do in AI
5:26
risk. So
5:29
research and policy advising are the main things I work
5:31
on. So how
5:34
many, how many heads
5:36
of AI projects are
5:40
concerned and are
5:45
not lost in, I'm going to
5:47
speak to God, this
5:50
drive that a lot of them
5:52
have to create something and
5:54
be the first to create it. How
5:57
many of them can balance that with, "Maybe
6:00
we shouldn't do X, Y, and Z"? I
6:05
think that a lot of the people who got
6:07
into this were concerned
6:09
about risks from AI, but
6:11
they also have another
6:14
constraint, which is that they want to make sure
6:16
that they're at the forefront and competitive. Because
6:19
if they take something like safety much
6:21
more seriously or slow down or proceed
6:23
more cautiously, they'll end up falling behind.
6:25
So although they would all
6:28
like there to be more safety and
6:31
for this to slow down, or
6:33
most of them, it's not
6:35
an actual possibility for them. So
6:38
I think that overall,
6:40
even though they have good
6:43
intentions, it doesn't matter,
6:45
unfortunately. Right. So
6:47
let me play that out a bit. Putin
6:52
has said whoever gets AI first will
6:54
control the world. I believe
6:56
that to be true. So
7:00
the United States can't slow down because
7:03
China is going to be, they're pursuing
7:05
it as fast as they can. And
7:08
I'm not sure. I don't want them
7:10
to be the first one with AI. It
7:12
might be a little spookier. So
7:16
is there any way to actually slow
7:18
down? Well,
7:21
we could possibly slow down
7:24
if we had
7:26
more control over the chips that
7:28
these AI systems run on. So
7:31
basically right now there are export controls to
7:33
make sure that the high end chips that
7:35
these AI's run on don't go to China,
7:37
but they end up going to China anyway.
7:39
They're smuggled left and right. And
7:42
if there were actually better constraints and we
7:44
had better export controls, then that
7:46
would make China substantially less competitive. Then we
7:48
would be out of this pernicious dynamic of
7:50
we all want safety, but you got to
7:52
do what you got to do and we
7:54
got to be competitive and keep racing forward.
7:56
So I think chips might be a way
7:58
of making it so we're not in that
8:01
desperate situation. Are those chips
8:03
made in Taiwan or here? The
8:06
chips are made in Taiwan. However, most
8:08
of the ingredients that go
8:10
into those chips are
8:14
made in the US and made among
8:16
NATO allies. So about 90% of
8:18
those are in the US and NATO allies.
8:20
So we have a lot of influence over
8:22
the chips, fortunately. OK. But
8:25
if Taiwan is taken by China, we
8:29
lose all the, I
8:31
mean, we can't make those chips.
8:33
That's the highest-end chip manufacturer,
8:35
right? And China will have
8:37
that. So what does that mean for
8:39
us? It seems
8:42
plausible that actually if China
8:44
were invading Taiwan, that the
8:46
place that makes those chips would actually just
8:48
be destroyed before they would fully take it.
8:51
So that would put us on more of an even playing
8:53
field. OK. So I've
8:58
been talking about this for 25, 30 years. And
9:03
it's always been over the horizon.
9:05
And I could never get people
9:07
to understand, no, you've got to
9:09
think about ethical questions right now.
9:12
Like, what is life? What is
9:14
personhood? All of these things. And
9:17
now it's just kind of like
9:19
the iPhone. It just happened.
9:21
And it's going to change
9:23
us. And it
9:25
hasn't even started yet. And it's amazing.
9:28
I go online now. I
9:30
don't know what's real or not.
9:33
I mean, I found myself this
9:35
week being on X or on
9:37
Instagram and looking and saying, is
9:40
that a real person? Is that a real
9:42
video? Is that
9:44
a real photo? You have no idea. Yeah.
9:48
And we've just begun. Yeah,
9:51
yeah, yeah. I
9:53
think that's a concern where
9:55
we don't have really great
9:58
ways to reliably detect whether
10:00
something is fake or not. And this could
10:02
end up affecting our collective understanding of things.
10:05
I think another concern are AI companies
10:08
biasing their outputs. So people are wanting
10:10
to do things about safety, but it
10:12
creates a vacuum if we've got to
10:15
do something about it, and what
10:17
takes its place is some culture war type of things.
10:19
As I think we saw with Google
10:22
Gemini, when you'd ask it to generate an
10:24
image of George Washington, then it'll
10:26
make
10:28
him look black,
10:30
because its image outputs need to be
10:32
diverse. So that I
10:34
think is one reason
10:37
why Elon Musk, through
10:39
his company xAI, is getting into
10:41
the arena and now
10:44
has a pretty competitive
10:46
AI system. So as to try
10:48
and change the norms so that
10:50
other big tech companies, when they're
10:52
sort of biasing their outputs, there
10:55
are alternatives so that we're
10:57
not all locked into whatever some random people
10:59
in San Francisco decide are the values of
11:01
AI systems. Yeah, it's
11:03
really difficult because you can see the
11:06
bias. It's quite
11:08
clear, the bias, especially if you know
11:10
history or you follow the news as
11:12
closely as I do. But
11:15
the average person won't see that. I
11:19
look at AI as a, like any
11:23
technology, a
11:25
tremendous blessing and
11:28
a horrible curse. But
11:31
this one has the potential of
11:34
enslaving all of us. Doesn't
11:41
it? I
11:43
think at least, I want at least
11:46
distinguish between the systems right now. The systems
11:48
right now have- Yeah, yeah, yeah. I mean
11:50
in potential, yeah, what's coming? Sure,
11:52
I mean, when it's as capable
11:54
as humans and when they have robotic
11:56
bodies and things like that, I mean,
11:58
there are basically no limits to what
12:00
they could do. And it really matters how people
12:03
are using them, what instructions are given. Are
12:05
they given to cement a
12:07
particular government's power?
12:10
Are they used by non-state actors
12:12
for terrorism? All
12:14
of these things could lead to societal
12:16
scale risks, which
12:20
could include some sort of unshakable
12:22
totalitarian regime enabled by AI, or
12:26
unseen acts of terror.
12:28
So I think,
12:31
at the same time, the silver lining is
12:33
maybe if it all goes well, we
12:35
get automation
12:37
of things and we
12:39
don't have to work as much or at all.
12:43
So it's really divergent paths. Right.
12:47
Which do you think is more likely? I
12:52
think overall it's more likely that we end
12:54
up ceding
12:59
more and more control to AI systems and we can't
13:02
really make decisions without
13:04
them, becoming extremely dependent on them. I
13:07
would also guess that some people would
13:10
give them various
13:12
rights in the farther future. And this will make
13:14
it be the case that we
13:17
don't control them or all of
13:19
them. So I am
13:22
not too optimistic for us overall.
13:25
There's still a lot of ways this could go. If
13:27
we said we're on team human, we
13:29
need to come together as a species and handle it, we'd be in
13:31
a different situation. But for instance,
13:33
if there were a
13:36
catastrophe, then we might actually
13:38
take this much more seriously. Otherwise we might
13:40
just sleepwalk into something and have the frog boil.
13:43
What would be a catastrophe that could
13:45
happen in the relative near future
13:47
that would wake us up that wouldn't destroy us?
13:51
Yeah. So I think one
13:54
possibility maybe say two to three years from
13:56
now is somebody instructing the dollar things
36:00
like that. So if we deleted
36:02
that knowledge from
36:04
the AI systems or had them just
36:06
refuse questions about reverse genetics or made
36:08
them not use information about reverse genetics, then
36:11
we could still have
36:13
brain cancer research,
36:15
all these sorts of things, but
36:18
we're just bracketing off
36:21
virology, advanced expert level
36:23
virology. And maybe some
36:25
people could access that, like if they have
36:27
a clearance. Right now, we
36:29
have BSL-4 facilities. Like if you want to study
36:32
Ebola, you got to go to a BSL-4 facility.
36:34
So people can still do some research for it,
36:36
but it shouldn't necessarily be that everybody in the public
36:38
can ask questions about advanced
36:40
virology, like how to increase the transmissibility of
36:42
a virus. So I think we can
36:45
partly decouple some of the
36:47
good from the bad with
36:50
biological capabilities. But as
36:54
it stands, the AI
36:56
systems keep learning more and more. There
36:58
aren't really guardrails to make sure that
37:00
they aren't answering those sorts of
37:02
questions. There aren't clear laws
37:04
about this. For instance, the US Bioterrorism Act
37:07
does not necessarily apply to AIs because
37:09
it requires that they are knowingly aiding
37:12
terrorism and AIs don't necessarily knowingly do
37:14
anything. We can't ascribe intent to them.
37:17
So it doesn't necessarily apply. A
37:19
lot of our laws on these don't necessarily
37:21
apply to AIs, unfortunately. So
37:24
yeah, I think
37:27
if we get expert level virologist AIs
37:29
and if they're ubiquitous and
37:31
it's easy to break
37:34
their guardrails, then that's
37:37
also walking into quite a potential
37:40
disaster. Right now, the
37:42
AI systems can't particularly help with making
37:44
bioweapons. They are better than Google, but
37:46
not that much better than Google. So
37:48
that's a source of
37:50
comfort. But I'm
37:54
currently measuring this with some
37:56
Harvard MIT virology PhD
37:59
students, where we're taking a
38:01
picture of virologists in the lab
38:04
and asking the AI, what should the virologists do next?
38:06
Like here's a picture of their Petri dish, here's their
38:08
lab conditions. And can it fill in the steps? And
38:10
right now it looks like it can fill in like
38:12
20% or so of the steps.
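To make the kind of measurement described here concrete, this is a minimal, hypothetical sketch of how such a step-prediction eval could be scored, assuming a list of expert-annotated protocol steps and a callable that queries the model; the names, data structures, and grading rule are illustrative assumptions, not the actual Harvard/MIT benchmark.

```python
# Hypothetical sketch of scoring a step-prediction eval like the one described:
# show the model the lab context (image description, conditions, steps so far),
# ask what the expert would do next, and report the fraction it gets right.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProtocolStep:
    context: str        # e.g., description of the Petri dish image and lab conditions
    expert_answer: str  # the next step a trained virologist would actually take

def grade(prediction: str, expert_answer: str) -> bool:
    # Placeholder grader; in the study described, expert PhD students judge equivalence.
    return prediction.strip().lower() == expert_answer.strip().lower()

def score_protocol(steps: List[ProtocolStep], ask_model: Callable[[str], str]) -> float:
    """Return the fraction of protocol steps the model fills in correctly."""
    correct = sum(grade(ask_model(s.context), s.expert_answer) for s in steps)
    return correct / len(steps)
```

A score around 0.2 would correspond to the "about 20% of the steps" figure mentioned above; the concern raised next is what happens as that number approaches 0.9.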
38:14
If that gets to 90%, then
38:17
we're in a very dangerous situation where
38:19
non-state actors are just randomly- How long
38:21
will that take? Yeah, so
38:23
I think progress is very surprising
38:25
in this space. Just last year,
38:27
the AIs could barely do basic
38:30
arithmetic where you're
38:32
adding two-digit numbers together. They would fail
38:34
at that. And then just last month,
38:36
or just this month, excuse me, now
38:38
they're getting a silver medal at the
38:40
International Mathematical Olympiad, which is the greatest
38:42
math competition. So it
38:46
could go from basically ineffective to
38:48
expert level, possibly within a year.
38:51
There's a bit of uncertainty about it, but it
38:54
wouldn't surprise me. So people,
38:56
it was a big debate whether
38:59
AGI and ASI could ever
39:01
happen. And
39:05
the point of singularity, I've
39:10
always felt like it, and I know nothing
39:12
about it, but I've always felt that it's
39:15
a little arrogant. We're
39:18
building something and we
39:20
look at it as a tool, but
39:23
it's not a tool. It's like
39:25
an alien, it's
39:27
like an alien coming down. We think they're gonna
39:30
think like us. Well, they won't think like
39:32
us. They have completely different
39:34
experiences. We don't know how this
39:36
will think. Do
39:40
you believe in the singularity that we
39:42
will hit ASI at some
39:44
point? I
39:46
think we'll eventually build a superintelligence if
39:49
we don't have substantial disruption
39:51
along the way, such as a huge
39:53
bioweapon that harms
39:56
civilization, or TSMC gets blown up.
39:58
Those might be things that would really
40:00
extend the timeline. So by default it
40:02
seems pretty plausible to me, like more likely
40:05
than not, that we'd have a superintelligence
40:07
this decade. And
40:09
most people in the AI industry
40:11
think this as well. Like Elon thinks maybe
40:14
it's a few years away, Sam Altman does,
40:16
Dario, the head of Anthropic, does, and
40:19
one of the co-founders of Google DeepMind
40:22
thinks AGI is in 2026. So
40:25
yeah. But actually I
40:28
remember- Can you explain AGI? Can you
40:30
explain that to somebody who doesn't understand
40:32
what that means? Yeah,
40:35
so AGI has a constantly shifting
40:37
definition. For many people, it used
40:39
to mean an
40:41
AI that could basically talk like a human
40:44
and pass as a human. That
40:47
was the Turing test as it was called, but looks
40:49
like they're already able to do that. It
40:51
also was in contrast to narrow AI.
40:53
AIs just a few years ago could
40:55
only do a specific task. And
40:58
if you slightly change the specification of the task,
41:00
they just fall apart. But now they can
41:02
do arbitrary tasks, they can write poetry, they
41:04
can do calculus, they can generate images, whatever
41:07
you want. So
41:10
by some definitions, we have AGI. And
41:12
so there's been a moving goalpost where
41:14
people are now using it to mean
41:17
something like expert level in all domains.
41:20
Right. And able
41:22
to automate basically anything. So people
41:24
are like, we'll know when there's
41:26
AGI, when the AI lab stop
41:28
hiring people. Which
41:31
some of them have in their forecasts for
41:35
spending on labor, some of them are
41:37
expecting to stop hiring in a few
41:39
years, assuming there's automation. Wow. So
41:44
it varies quite a bit,
41:46
but you don't need AGI for a lot
41:48
of these malicious use
41:51
risks. It just needs to be very good at doing
41:53
like a cyber attack or it just needs to have
41:55
some expert level of virology knowledge and skills to
41:58
cause a lot of damage. So. big
56:01
AI upgrade maybe in the next six
56:03
months, late this year, early next year.
56:06
And that'll make the public go, what's going on?
56:10
And start having some demands of something to be
56:12
done about AI. How's that
56:14
gonna manifest itself in the next six months? So,
56:18
I make
56:20
this prediction largely just based on the
56:22
fact that it took them a long
56:24
time to build their 10X larger supercomputer
56:26
to train these AI systems. And now
56:28
they're basically built. And so now
56:30
they're training them. And they'll finish training
56:33
around the end of this year or early next year
56:35
and be released then. So, the
56:37
exact skills of them are unclear
56:39
each time. With each 10X
56:43
in the amount of power and data that we throw
56:46
into these systems, we
56:48
can't really anticipate their capabilities
56:51
because AI systems are not really designed
56:54
like old traditional computer programs, they're more
56:56
grown. We just let them stew for
56:58
some months and then we see what
57:00
comes out. Wow, and it's like magic.
57:04
Kind of, we have extremely huge sources of energy, or
57:11
we have substantial sources of energy just
57:13
flowing directly into them for months. Right,
57:16
right. To create them. It's alive. Yeah,
57:19
yeah. So,
57:21
I think they should probably get a
57:26
lot more expert level reasoning, whereas right
57:28
now they're a bit shakier and this
57:30
could potentially improve their reliability for doing
57:32
a lot of these agent tasks. Right
57:34
now, they are closer to tools than
57:36
they are agents. But-
57:39
What's the difference between an agent and a tool?
57:43
Yeah, so a tool is like a
57:45
hammer. Meanwhile,
57:47
an agent would be like an executive
57:50
assistant, a
57:52
secretary. You say, go do this for me, go book this for me,
57:55
arrange these sorts of plans, make me a PowerPoint,
57:58
write up this document and
58:00
submit it, email it, and then handle the back and
58:02
forth in the email. I
58:05
think those capabilities could potentially turn on
58:07
with this next generation of AI systems.
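As a rough illustration of the tool-versus-agent distinction described here, this is a minimal, hypothetical sketch (the function and tool names are made up): a tool answers one prompt and stops, while an agent loops, picks actions, observes the results, and keeps going until the goal is done.

```python
# Hypothetical sketch: "tool" vs. "agent" use of the same underlying model.

def use_as_tool(model, prompt: str) -> str:
    # One prompt in, one answer out -- like swinging a hammer once.
    return model(prompt)

def run_as_agent(model, goal: str, tools: dict, max_steps: int = 10) -> str:
    # The model repeatedly picks an action (e.g., "send_email: draft ..."),
    # sees the result, and continues the back-and-forth on its own.
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = model("\n".join(history) + "\nNext action?")
        if decision.startswith("DONE"):
            return decision
        name, _, arg = decision.partition(":")
        result = tools.get(name.strip(), lambda a: "unknown tool")(arg.strip())
        history.append(f"{decision} -> {result}")
    return "Stopped after max_steps without finishing."
```

The only real difference is the loop: the agent decides what to do next and acts on the results, which is why the reliability improvements mentioned here matter so much more for agents than for tools.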
58:09
We're already seeing signs of it, but
58:12
I think there could be
58:14
a substantial jump when we have
58:16
these 10X larger models. Wow.
58:18
I mean, think of it this way. Just in terms
58:20
of brain size, imagine like a
58:22
10X larger brain. Yeah. You'd expect
58:24
that to be a lot more
58:26
capable. Well, at some point, it's
58:30
going to snap the neck, that
58:33
larger brain. So I hope that doesn't happen
58:35
soon. Dan,
58:37
you're fascinating. Thank you. 15
58:42
years ago, I was looking
58:44
for the people who had some
58:47
ethics that were saying, wait,
58:49
let's slow down. We should ask
58:51
these questions first. And I didn't
58:54
find a lot
58:56
of philosophy behind the progress
58:58
seekers. And
59:05
it really frightened me
59:08
because at some point,
59:12
they're going to say, you can live forever, but
59:15
it's just a downloaded you. And if
59:17
we haven't decided what life is, we
59:22
can easily be taught that, no,
59:24
that's grandma. And
59:27
then what value does
59:30
the actual human have, the body,
59:33
if it's just downloadable? So
59:38
I appreciate your look at safety
59:40
and what you're trying to do.
59:43
Thank you. And thank
59:46
you for bringing this topic to your
59:48
audience because it's important and it still
59:50
isn't discussed. Oh, yeah. Yeah.
59:52
Thank you. We'd love to have you back. Thank you.
59:55
Yeah. Have a good day. Bye. Thanks. Thank
59:58
you. Bye-bye. Just
1:00:04
a reminder, I'd love you
1:00:06
to rate and subscribe to the podcast and pass
1:00:08
this on to
1:00:16
a friend so it
1:00:19
can be
1:00:22
discovered by other people.