Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
Use Ctrl + F to search
0:00
This is Intelligence
0:02
Matters, sponsored by Ginkgo
0:04
Biosecurity, whose cutting -edge
0:07
biological intelligence platform
0:09
empowers national security decision
0:11
-makers to identify and
0:13
respond to emerging
0:15
biological threats. Alexander
0:20
Wang is the founder and CEO
0:22
of Scale AI, a leading
0:24
AI data foundry, driving
0:26
some of the most significant
0:28
advancements in artificial intelligence
0:31
today, spanning autonomous vehicles, defense
0:33
applications, and generative AI. Alex
0:36
started Scale AI in 2016
0:38
as a 19 -year -old MIT student.
0:40
Under his leadership, the company
0:43
has grown into a market leader,
0:45
serving clients from the U .S.
0:47
Department of Defense to Microsoft,
0:49
Meta, and Open AI. Alex
0:51
joins us today to discuss his journey as
0:53
a pioneer in AI, the
0:55
evolving role of data in driving
0:57
AI innovation and the implications
0:59
of AI for national security. We'll
1:01
be right back with Alexander
1:03
Wang. Biosecurity
1:12
matters. What's biosecurity? It's
1:14
the mission -critical industry safeguarding
1:16
against 21st -century biological
1:18
threats. Biology doesn't respect
1:20
borders. Ginkgo Biosecurity monitors
1:22
critical assets and produces actionable
1:24
biointelligence for national security
1:27
decision -makers confronting biological threats,
1:29
whether they are natural, accidental,
1:32
or malicious. Ginkgo's technology
1:34
tracks deadly pathogens worldwide, and
1:36
our globally -scaled platform, the first of
1:38
its kind, detects outbreaks in near
1:40
real -time, deploys AI -powered threat
1:42
assessments, and identifies the source
1:44
of a threat, mitigating outbreaks before
1:46
they become national emergencies. When
1:49
lives and livelihoods are on the
1:51
line, every second counts. Ginkgo Biosecurity,
1:53
the biosecurity partner for an
1:55
uncertain new era. Learn more
1:57
at biosecuritymatters .com. Welcome
2:00
to Intelligence Matters. Hey, listen, before
2:02
we start, let's sort of level
2:04
set with everybody. And I've heard
2:06
you in the past sort of
2:08
talk about the three pillars of
2:10
AI. And then of course, scale,
2:12
AI, your company fits into one
2:15
of those. So can you just
2:17
lay that out for everyone so
2:19
make sure we're all sort of
2:21
on the same sheet as we
2:23
go along? Yeah. So artificial intelligence
2:25
or AI has been a pretty
2:27
modern, well, this field itself exists
2:29
for a long time, but modern
2:32
AI using neural networks and sort
2:34
of the new tech trend has
2:36
been basically since 2011 or 2012.
2:38
And it relies on three fundamental
2:40
pillars. Data, compute, and algorithms. So
2:42
data is the raw material for
2:44
intelligence. Data is what the AI
2:46
needs to learn from. That's where
2:48
scale plays and that's where we
2:51
play. Compute are the large, all
2:54
the computational power, all the supercomputers,
2:56
all of the very advanced GPUs
2:58
and chips that need to go
3:00
into actually crunching all the data
3:02
as well as processing all the
3:05
information for AI systems. That's where
3:07
NVIDIA and others have been incredibly
3:09
pivotal. And then there's the models,
3:11
which are the algorithmic techniques used
3:13
to actually sort of take compute
3:15
and data together and turn them
3:17
into intelligence. And those are developed
3:20
by the major AI labs that
3:22
everybody knows, OpenAI, Anthropic, Google, DeepMind,
3:24
Meta, and other major companies. And
3:26
so these three ingredients are three
3:28
pillars, really are the basis for
3:30
everything in AI is built on
3:33
top of these three pillars, models,
3:35
data, and compute. And one of
3:37
the reasons why we've seen such
3:39
incredibly fast progress in AI over
3:41
the past few years is that
3:43
there's been really meaningful innovation across
3:45
every single one of these pillars.
3:48
So computational power that we've invested
3:50
far more, built much larger data
3:52
centers than we've ever built in
3:54
the past, as well as the
3:56
chips... themselves have gotten better due
3:58
to Moore's law. So computers advanced
4:01
pretty dramatically. On the algorithmic side,
4:03
there was breakthroughs. In 2017, we
4:05
had the transformer model, which is
4:07
a pretty meaningful breakthrough. And progress
4:09
basically has continued recently the 01
4:11
style of breakthroughs. So we've continued
4:14
to have sort of like algorithmic
4:16
breakthroughs. And then data has also
4:18
continued to sort of advance pretty
4:20
dramatically from a technological perspective. You
4:22
know, we started out by leveraging
4:24
all of the data off the
4:26
internet. We kind of ran out
4:29
of all that data. And now we're
4:31
sort of onto more complex and more
4:33
advanced data types, you know, complex reasoning
4:35
data, agent data, other forms of data.
4:37
And so all of the progress that
4:39
you see up here on AI is
4:41
built, is fundamentally driven by progress within
4:43
each of the pillars. So
4:45
Alex, let's sort of just talk broadly
4:47
here about the state of AI.
4:50
And just recently, I saw this chart,
4:52
this graph, it's by the tech
4:54
firm Gartner, and they named it after
4:56
themselves, the Gartner hype cycle. You
4:58
might have seen this graph. And for
5:00
our listeners, it sort of shows
5:02
a steep ramp up to what they
5:04
call peak of inflated expectations. Then
5:06
it leads to something called a trough
5:09
of disillusionment. It finally leads back
5:11
up to the plateau of productivity. So
5:13
I know it's a little tongue -in
5:15
-cheek, but if you're thinking of that
5:17
model, because I'm sure you've seen it,
5:19
so where are we today with AI?
5:21
Are we at inflated expectations or have
5:23
we not even gotten there yet? Or
5:26
are we past it? We
5:28
are probably somewhere in between,
5:30
yeah, we're in this early
5:32
phase. We're not quite at
5:34
the trough, because I think
5:37
expectations are obviously still quite
5:39
high. And there's an argument
5:41
that AI as a technology
5:43
may skip the trough of
5:46
disillusionment because the progress within
5:48
the field is just so
5:50
fast. I mean, it's
5:53
very, usually new disruptive
5:55
technology sort of toil
5:57
in relative obscurity for some time.
6:00
but AI has become the number
6:02
one focus for the largest companies
6:04
in the world, largest countries in
6:06
the world. I mean, it is
6:08
by many metrics the most critical
6:10
technology of today. And so there's
6:12
a theory that the sort of
6:14
just the sheer amount of investment
6:17
smart people, and then by force
6:19
of will, will kind of skip
6:21
a trough of disillusionment. But it's
6:23
undeniable that the expectations are very
6:25
high right now. Given that as
6:27
background, Alex, let's take a look
6:29
at, we're looking at the AI
6:32
landscape, but let's sort of look
6:34
at it in particular at how
6:36
it may reshape national security, right?
6:38
Because that's typically what we talk
6:40
about in the podcast here. And
6:42
I realize, you know, looking out
6:44
10 years is sort of crazy
6:47
because it's hard to predict what's
6:49
going to happen in, you know,
6:51
five years. But if you look
6:53
out into that sort of five
6:55
year time period, and you're thinking
6:57
about the national security challenges that,
6:59
you know, all of us can
7:02
sort of come up with, you
7:04
know, how do you broadly see
7:06
AI reshaping that? So I grew
7:08
up in Los Alamos, New Mexico,
7:10
the birthplace of the atomic bomb,
7:12
the home of the Manhattan Project,
7:14
both my parents worked as physicists
7:17
at the national lab. And so
7:19
I grew up, you know, really
7:21
deeply well studied on the sort
7:23
of story of, of the atomic
7:25
bomb, the sort of core scientific
7:27
discovery, and then how that played
7:29
out in national security in the
7:32
world thereafter. And nuclear fission is
7:34
a very interesting parallel because nuclear
7:36
fission as a, as a technology,
7:38
or as a scientific discovery is
7:40
obviously this incredibly brilliant scientific discovery.
7:42
And there's, there were sort of
7:44
two, there was a fork in
7:46
the road of two applications. One
7:49
is, you know, cheap energy for
7:51
the world. And the other, the
7:53
other direction was for, for weapons,
7:55
for bombs. And I think what
7:57
we've seen over the past, you
7:59
know, 80 years or so is
8:01
that the, the path to, to
8:04
commercialization of of nuclear energy has
8:06
been somewhat embroiled due to there
8:08
being various sorts of risks and
8:10
the regulatory path's not been very
8:12
clear and environmental concerns, all of
8:14
that. But the application towards weaponry
8:16
and national security has been immense.
8:19
Both the United States and Russia
8:21
and other countries, we all have
8:23
large nuclear arsenals. There were, we
8:26
certainly have more than enough
8:28
nuclear weapons in the
8:30
world to sort of create
8:32
a nuclear armageddon and
8:34
make the earth inhospitable. And
8:36
so I look at
8:38
this, what had
8:40
happened and then obviously
8:43
the sort of the world,
8:46
the national security community had
8:48
to sort of, had basically been
8:50
working quite hard to ensure that we
8:52
never to use those nuclear weapons
8:54
for fear of sort of the escalation
8:56
and the sort of damage to
8:58
the world and to humanity. And I
9:00
look at that sort of arc or
9:03
that story and I think
9:05
you have to take the lesson
9:07
and think about what's gonna
9:09
happen to AI. So artificial intelligence
9:11
is dual use technology. There's
9:14
an application towards the economy and
9:16
towards commercial applications of the
9:18
technology and then there's undeniably military
9:20
and national security applications of
9:22
the technology. And it's very clear
9:24
that the United States, we
9:27
have our eyes on these use
9:29
cases, but our adversaries do
9:31
as well. The application of AI
9:33
and autonomy in Ukraine, for
9:35
example, is undeniable by both
9:38
sides, by both the Ukrainians
9:40
and the Russians. And
9:43
so it's this technology
9:45
that I expect undeniably is
9:47
going to be quite
9:49
central to the national security
9:51
strategies of every country globally.
9:53
Like it's clearly going to
9:56
be where most countries and most
9:58
militaries and intelligence communities sort
10:00
of look to gain advantage over
10:02
competitors or over adversaries. And
10:04
the applications are going to be
10:06
pretty widespread. Like as a
10:08
technology, it's much more general purpose
10:11
than nuclear fission was. So
10:13
nuclear fission, you build warheads or
10:15
you build bombs, but AI,
10:17
you can use it for offensive
10:19
cyber hacking or offensive and
10:21
defensive cyber. You can use it
10:24
for autonomous drones and autonomous
10:26
weaponry. You can use it as
10:28
a tool for bio weaponry.
10:30
That's certainly like a concerning potential
10:32
vector. You can use it
10:34
for all your back office functions
10:36
and make it, you know,
10:39
so you're dramatically better planning. The
10:41
application towards intelligence is pretty
10:43
clear, like this process of converting
10:45
data to intelligence is something
10:47
that AI is potentially going to
10:49
be very good at. And
10:51
so the applications are endless. But
10:54
what I think is undeniable
10:56
is that as a new technology
10:58
with so much potential, it
11:00
is going to be a cornerstone
11:02
element to national security strategies
11:04
globally and will be one of
11:07
the major elements that will
11:09
dictate which militaries or which countries
11:11
have advantages over other countries.
11:13
So this is an interesting parallel
11:15
that you drew on here
11:17
a minute ago, Alex. And let
11:19
me ask, you know, maybe
11:22
what do you see as some
11:24
of the most pressing risks
11:26
posed by our increased reliance on
11:28
AI? And if I think
11:30
about your parallel, you know, one
11:32
of the key themes of
11:35
the last Cold War was this
11:37
mutual assured destruction, you know,
11:39
which, you know, today, maybe people
11:41
think it's crazy, but, you
11:43
know, it was a pillar, let's
11:45
say of that time period.
11:47
Is, do you see any parallel
11:50
there? Maybe not that extreme,
11:52
of course, but can we draw
11:54
that out a little more?
11:56
Yeah, totally. I think that deterrence
11:58
theory in an AI paradigm
12:00
is, you know, this is, this
12:02
is, the central thing to look at. And one of
12:05
the things that I, you know, I've always thought about that
12:07
as important is like, how would you, the
12:09
issue is we actually, we don't know,
12:11
we don't have all the answers about
12:14
AI yet. We don't know what
12:16
are the exact properties of
12:18
the technology when we get towards
12:20
super intelligence or very advanced AI
12:22
systems. So things that we don't
12:24
know. One, we don't know if
12:27
AI will require. a huge amount
12:29
of computational power to build and
12:31
even to use, or if you're
12:33
going to be able to use
12:35
a relatively small amount of computing
12:38
power to leverage the capabilities. Maybe
12:40
you need a lot of computational
12:42
power to produce advanced AI, but
12:44
then you need relatively little to
12:46
actually utilize it. That is very
12:48
wide-ranging implications, because if AI is
12:50
something that only the biggest countries can
12:53
utilize, then that is a much
12:55
safer world than if anybody
12:57
in the world can get
12:59
a handful of GPUs and
13:01
utilize the technology. Another big
13:04
question that we don't know
13:06
is, is it going to
13:08
be possible to build safeguards
13:10
into the models themselves to
13:12
prevent misuse or use for
13:14
harmful areas like, you know,
13:16
bio weaponry or building nuclear
13:18
capabilities or building other forms
13:20
of weapons? Or is that
13:22
something that is like technically
13:24
impossible to safeguard and therefore
13:26
adversaries are going to be
13:28
able to remove those safeguards
13:30
very easily? Terrorists are going
13:32
to remove those safeguards very
13:34
easily and then utilize it for
13:36
those dangerous capabilities. So
13:38
there's all these questions about what
13:41
are the end-state facts about AI as
13:43
a technology? And depending on those end-state
13:45
facts, you have very different dynamics.
13:48
So one version of the world is
13:50
that there's something very close to
13:52
mutually sure destruction. You know, this
13:55
version of the world looks something
13:57
like, you know, let's say advanced
13:59
AI. both takes huge amounts of
14:01
computational power to produce, to
14:04
develop, as well as huge
14:06
amounts of computational power to
14:08
utilize. In this scenario, you
14:10
know, the United States and
14:12
China, potentially Russia, and other
14:14
countries will be leaders, and
14:16
I think there will be
14:18
a sort of natural... there
14:20
will be some level of
14:22
natural stability because you know
14:24
the leaders all know the
14:26
the power of the technology
14:29
by and large they're not
14:31
really trying they know the
14:33
other leaders are are are
14:35
quite powerful they're you know
14:37
and so there's there's some
14:39
level sort of mutually assured
14:41
destruction or some level of
14:43
of stability. And and every
14:46
country has access to powerful
14:48
AI, every terrorist organization has
14:50
access to powerful AI, and
14:52
in that case, it's a
14:54
much more complicated world, because even
14:56
if you have, even if you're the
14:59
United States or you're one of these
15:01
major countries, it's hard to stamp out
15:03
all the potential misuse or all the
15:06
potential sort of chaos ensued by the
15:08
technology. The last point I'll bring up
15:10
is this question of first strike. So.
15:13
Mutually sure destruction is a good
15:15
paradigm because if you launch a
15:17
nuke, as soon as other people
15:19
know that you're launching a leader,
15:22
they can launch they can launch
15:24
their nukes. And so it's very,
15:26
very, very difficult to have first
15:28
strike to be able to prevent
15:30
sort of this mutually acceptable, or
15:32
unattributable first strike. Yeah, yeah, sorry,
15:34
undetectable first strike. The issue is AI
15:37
as a technology doesn't seem
15:39
to have those characteristics
15:41
characteristics. Like it seems. potentially possible
15:43
you can utilize artificial intelligence and in
15:45
a you know as a as a
15:48
weapon in a negative way without it
15:50
being as attributable or detectable in the
15:52
same way and then two there's an
15:54
argument that oh if my AI is
15:57
a hundred times smarter than my adversary's
15:59
AI they won't be able to
16:01
retaliate. They won't even be able
16:03
to retaliate. Like my AI will
16:06
just sort of, you know, swiftly
16:08
dominate my adversary's AI. So long
16:10
story short, I think that it's
16:12
a very complicated picture. And it
16:14
is not obvious to me that
16:17
we necessarily have the same sort
16:19
of deterrence theory of mutually sure
16:21
destruction or whatnot, and we may
16:23
not, we can't know. I think
16:26
until we learn more about the
16:28
technology. It's really interesting parallels, Alex.
16:30
And even in your first parallel, the,
16:32
you know, it's the purview of, you
16:34
know, great powers kind of thing. You
16:36
know, even the nuclear weapons, that monopoly
16:39
didn't last all that long. If you
16:41
look at it on a, you know,
16:43
time scale, right? Yeah. And so, you
16:45
know, these kind of things tend to,
16:47
tend to leak out. So what do
16:49
you, as we just started to talk
16:51
about some of the risks? How should
16:54
we think about mitigating some
16:56
of the risks of adversarial
16:58
AI? You know, deep fakes, some of the
17:00
malicious use you talked about. So it
17:02
seems like it's this constant
17:04
trying to stay ahead of somebody else's
17:06
AI, right? So you make your AI
17:09
better, theirs becomes better, you know, and
17:11
so on and so forth. So it's less
17:13
of a race than it is a, I guess because
17:15
a race has an end, and this
17:17
doesn't at least appear to me to
17:20
have an end. Yeah, it
17:22
is a, it is a, it
17:24
is an industry with constant one-opsmanship.
17:26
I mean, it's very relevant. So
17:28
today, literally today, the, so Open
17:31
AI two months ago released their
17:33
O1 model, which was the sort
17:35
of like this very advanced reasoning
17:38
model. And then two months later,
17:40
the very first replication of the
17:42
technology came out of Deep Seek,
17:44
which is a Chinese startup. and
17:47
Deep Seek is choosing to open
17:49
source it globally or open source
17:51
the technology. So this is
17:53
notable for two main reasons.
17:56
The first is that one
17:58
China is very clearly competent
18:00
and competitive, and was the
18:02
first public replication of the
18:04
most advanced technology in the
18:07
United States, which is, you don't
18:09
have to take that data point
18:11
very seriously. And the second is,
18:14
this is now open-sourced. And just
18:16
as we're talking about before, like,
18:18
you know, the proliferation environment, it
18:21
seems like there's a real proliferation
18:23
sort of prerogative or mandate driven
18:25
by the sort of private market
18:27
incentives. And so, On the answer
18:30
of like risks, well, I think
18:32
that my framework on this is
18:34
so they're clearly in the
18:36
abstract are ways in which AI can
18:39
be incredibly damaging and
18:41
incredibly harmful to humanity. And
18:44
so the key is that
18:46
we have very accurate measurement,
18:48
you know. AI is actually pretty
18:50
close to being very dangerous for
18:53
bi- weaponry, but maybe it's nowhere
18:55
close to being useful for nuclear
18:57
weapons. For example, or what are
18:59
the areas in which from a
19:01
practical standpoint, AI is actually poses
19:03
any real risk? And we need to
19:06
be extremely good at this measurement process,
19:08
this test and evaluation process for all
19:10
of the AIs that we see pop
19:12
up globally so that we can... we
19:15
can properly tackle the risks. Because what's
19:17
been clear is that, you know, the
19:19
sort of very generic concern that like,
19:22
you know, powerful AIs are really dangerous,
19:24
and so we need to be really
19:26
careful. Like the general argument is pretty
19:28
uncompelling because I think people can use
19:30
these systems and see how flawed they
19:33
are, but we need specific conversations. We
19:35
need a specific conversation that, you know,
19:37
these AI models are pretty close to,
19:40
you know, some dangerous capability, and so
19:42
we need to manage it. We've been
19:44
working at scale. We're proud to work
19:47
pretty closely with the Pentagon and the
19:49
DOD within their chief digital and artificial
19:51
intelligence office to actually produce sort of
19:54
a test and evaluation framework for the
19:56
use of elements in national security, in
19:58
defense, and military. to be able
20:00
to have accurate and reasonable measurements
20:03
of the sort of of the
20:05
risks associated with the technology. We're
20:07
in the process of scaling industrial
20:10
commercial use as well so we've
20:12
produced our seal evaluations where we'll
20:14
evaluate in kind of the same
20:17
way as consumer reports maybe does
20:19
for most other other products or
20:21
technologies, a public evaluation of all
20:24
the models, and there's sort of
20:26
various capabilities, including in areas like
20:28
agent safety or other dangerous capabilities.
20:30
But I think that we need to
20:33
keep going on this, and I certainly
20:35
encourage other people aid us in this
20:37
process of actually having very, very
20:40
accurate measurement to mitigate risks. So,
20:42
you know, this sort of leads
20:44
into this point about how the
20:46
US can maintain a competitive advantage.
20:48
in AI, we're the world leaders today,
20:50
right? And you can tell me I'm
20:53
wrong about that, but I think
20:55
we are. So developing AI, but
20:57
also ensuring responsible use, right? It's
20:59
not a Wild West scenario, which,
21:02
you know, maybe the case in
21:04
other countries. So how do we
21:06
think we strike that balance?
21:08
We are lucky that, so yeah,
21:10
the US is the leader. indisputably,
21:12
and we have, and this is
21:15
due to the benefit of our
21:17
frankly our incredible system, a lot
21:19
of AI research was was funded
21:21
by the private sector, has been
21:23
funded for decades by the public
21:25
sector, and that industry has attracted
21:27
huge amounts of commercial capital as
21:30
well as the best and brightest
21:32
people in the world, and we've
21:34
been able to produce and create,
21:36
you know, the largest data centers
21:38
of the world are being built in
21:40
the United States, the smartest people in
21:43
the United States, the most advanced data
21:45
for these systems is being produced in
21:47
the United States. So we have every
21:49
reason to maintain the advantage, but competitors
21:52
are quite close on the chase. So,
21:54
you know, as I mentioned, Deep Sea
21:56
replicated opening eyes models, Ali Baba, a
21:58
Chinese company released a new. model Quinn
22:01
which by some metrics is
22:03
actually the top LLLM in
22:06
the world so according to
22:08
Hugging Faces Open LLL leaderboard
22:11
is actually outperforms many
22:13
Western counterparts and so
22:15
fundamentally I think we the United
22:17
States is in a we have
22:19
a very tight-type to walk which
22:22
is that on the one hand
22:24
we have global adversaries who are
22:26
catching up and we need to
22:28
stay ahead of. That's undeniable. The
22:30
United States needs to stay ahead.
22:33
But on the other hand, because
22:35
we are the leaders, it is
22:37
up to a certain extent to
22:39
really set the right guard rails
22:41
and set the right standards for
22:43
how this technology should develop in
22:45
a safe manner. And luckily, because
22:47
so much of the progress in
22:49
other countries is due to almost
22:52
pure replication of what the United
22:54
States does, we develop in a
22:56
direction and build regulation or are
22:58
thoughtful about building in a direction
23:00
that mitigate some of the risks,
23:02
then we actually can be in a
23:04
pretty good spot as a country. and
23:06
set the world up for this new
23:08
technology in a good way. You know,
23:11
no specific answer there, but I think
23:13
it's an incredibly hard question, and I
23:15
don't think the answer is necessarily accelerate
23:17
at all costs, but it's probably to
23:20
leverage our leadership position to thoughtfully set
23:22
the standards for the world. So that's
23:24
a great lead into the next question
23:26
I was going to raise, Alex, and
23:29
that's the role of government regulation,
23:31
for lack of a better way to put
23:33
it. So what role should the
23:35
US government play in play in?
23:38
in regulating AI development, but also
23:40
making sure that we stay
23:42
ahead as you just described.
23:45
Clearly there's a role here. Where
23:47
do you think the left and
23:49
right boundaries are? I think
23:51
the most critical piece, there's
23:54
a few pieces that I view as
23:56
like, you know, hands down, absolutely
23:58
critical. which are one,
24:01
ensuring integration into our national
24:03
security community. So we have
24:05
all this incredible commercial technology.
24:08
It needs to get integrated
24:10
into the DOD and IC very, very
24:12
rapidly. That's like, you know, kind
24:14
of immensely, immensely important. And I
24:16
think the second is also ensuring
24:19
that we have, kind of as I
24:21
mentioned, we have the right measurement
24:23
mechanisms and schemes to ensure that
24:25
we. are cognizant of when AI
24:28
capabilities are at a point where
24:30
they create real societal level risks
24:32
or around, you know, buy a
24:34
weapon re, nuclear weapon re, you
24:36
know, stuff like that. I think
24:38
those are relatively, those are very
24:40
clear. I think that the, the
24:42
additional, you know, if you get further from
24:45
that, I mean, I think there's. You
24:47
can have a lot of debate
24:49
about what regulations are needed or
24:51
important or what's important after that.
24:53
I think one perspective, which I
24:55
really I admire and I appreciate,
24:58
is, hey, we have to ensure
25:00
that we stay ahead. So, you
25:02
know, there's the use of regulation
25:04
to accelerate the ability to build
25:06
out large data centers to enable
25:09
sort of a Manhattan project like.
25:11
build out of capacity in the
25:13
United States, that certainly seems important.
25:15
Other things are in ensuring that,
25:17
you know, I do think the guardrails
25:19
that we set around the technology
25:21
are going to be important as
25:24
it continues becoming more powerful,
25:26
so we need a thoughtful
25:28
approach to both. But the
25:30
national security stuff and the
25:32
measurement of national security related
25:34
capabilities are table stakes in
25:36
my view. We're going to take
25:38
a quick break and we'll be back with
25:40
Alex Wang. Beacon Global Strategies is
25:43
the premier national security advisory
25:45
firm. Beacon works side by
25:47
side with leading companies to
25:50
help them understand national security
25:52
policy, geopolitical risk, global technology
25:54
policy, and federal procurement trends.
25:57
Beacon's insight gives business leaders
25:59
the... Decision Advantage, founded in
26:01
2013, Beacon develops and supports
26:04
the execution of bespoke strategies
26:06
to mitigate business risk, drive
26:09
growth, and navigate a complex
26:11
geopolitical environment. With a bipartisan
26:13
team and decades of experience,
26:16
Beacon provides a global perspective
26:18
to help clients tackle their
26:20
toughest challenges. Alex, what do you
26:23
see as the role here for
26:25
international cooperation in order to
26:27
help sort of develop some of
26:29
these guidelines? there seems to be
26:31
a place here for like-minded nations,
26:33
right? It's not everyone's
26:36
going to participate, but
26:38
how do you see that
26:40
developing? Absolutely critical. You know,
26:42
I think that the entire
26:44
world I think is going
26:46
to be safer, stronger, friendlier
26:48
if, you know, the United
26:50
States is in the driver's
26:52
seat in ensuring that there's
26:54
international cooperation and that
26:56
we, you know, this is... The
26:58
stakes here are so high fundamentally.
27:00
I think I was listening, I
27:02
was reading, listening to a podcast
27:05
by a sort of war historian who
27:07
said that, you know, wars are won
27:09
by great alliances more than anything else.
27:11
I don't know specifically that's true, but
27:13
I think in this case it's pretty
27:15
true, which is that the AI is a large
27:18
scale, human level, humanity level, infrastructure,
27:20
and scientific project, and as we
27:22
embark on it, we want to
27:25
ensure that we have the... the
27:27
right levels of international cooperation. So,
27:29
relevantly, today in San Francisco, there
27:32
is a convening of the global
27:34
AI safety institutes. So, many countries
27:36
now, you know, the United States,
27:38
the UK, Japan, many countries have
27:41
established AI safety institutes and are
27:43
convening and sort of collaborating on
27:45
ensuring that each of these countries
27:48
is sort of the right guardrails in place
27:50
to ensure that. we mitigate the sort of
27:52
like negative risks. I think that's a great
27:55
step in the right direction. I think that
27:57
there's probably more to do. There's probably deeper
27:59
collaboration. at Iron Heart levels of
28:01
government coalitions, and so I
28:04
think it's absolutely necessary.
28:06
So Alex, something triggered when you so
28:08
said humanity level, and so you don't
28:10
see a AI winter, I guess, coming
28:12
again, right? I mean, that's been the
28:14
history, look, when I was getting my
28:17
graduate degree in computer science in the
28:19
Dark Ages, AI was just around the
28:21
corner. There was claim was that you
28:23
could just almost touch and feel it,
28:26
and of course. you know, nothing happened
28:28
for a long time. So you don't
28:30
see this win, you know, AI winter,
28:32
you just see this progressing, you know,
28:35
the slope of the curve, you know,
28:37
we could probably debate what that would
28:39
look like, but you don't really see
28:41
a downturn, right? It's a complicated
28:43
question because the answer is yes
28:45
and no. I think pure scaling,
28:48
which is what has been the
28:50
sort of technical calling card of
28:52
AI for the past few years,
28:54
which is all we need is more
28:56
and bigger data centers and more data
28:58
and you know the thing will just
29:00
get smarter. I think this sort of
29:03
like pure scaling approach is hitting
29:05
limits. There's some winter associated with
29:07
that approach but at the same
29:09
time there's totally new approaches like
29:11
post training, test time compute and
29:14
and other new technical approaches which
29:16
seem to have very high ceilings
29:18
as a technology. So there's... One
29:20
way to put this is going
29:22
back to the pillar analogy at
29:24
the start. One of the beauties
29:27
of AI is because there's these
29:29
three pillars, compute data and algorithms.
29:31
If you hit roadblocks in some
29:33
of them, let's say we hit
29:35
a roadblock in computer, a roadblock
29:37
in data, then you still can
29:39
work on the others. And so
29:41
as long as one of these
29:43
pillars is humming and driving progress,
29:46
we're going to see continual progress
29:48
in AI systems. One of the
29:50
big issues, especially when you think, well,
29:52
I guess in a lot of applications,
29:54
but when you think about national security
29:56
for AI, that sort of three things,
29:59
transparency, accountability, explainability. Right and I can
30:01
we get to I mean right now you've
30:03
got you know, and maybe you believe
30:05
this too There AI does things that people
30:07
can't quite explain You know
30:09
how did it do that? You know how
30:11
did it come to that conclusion which you
30:13
know creates it? You know challenging a lot
30:15
of fields, but in particular national security So
30:18
how do you see that shaping? Yeah
30:22
So what one of the things that
30:24
I think is is really important
30:26
for national security use cases is is
30:28
a fundamental level of You know
30:30
transparency and explainability so in all
30:32
of the Systems that
30:35
we've built for the national security
30:37
community like the DoD itself and
30:39
others we have had an approach
30:41
which Which all
30:43
claims need to be backed up
30:45
by Grounded and verifiable information and those
30:47
need to be those sources need
30:49
to be sort of cited and sourced
30:51
and Tied with all the claims
30:53
because obviously if you don't have this
30:56
verifiability then you know you we
30:58
have You know we have bigger we
31:00
have bigger problems And so so
31:02
that's been pretty critical and core to
31:04
our approach I think has enabled
31:06
our system scaled on of into to
31:08
have much greater impact than some
31:10
other AI systems Where these guarantees weren't
31:12
there looking forward. I think
31:15
the technology is actually moving
31:17
in direction of Greater transparency and
31:19
and explainability, you know
31:21
with the new Thinking models
31:23
or the new reasoning models you can
31:25
actually just you know The model tells
31:27
you what it's thinking and there have
31:29
been cases where we've found the models
31:31
will even say when it's trying to
31:33
be deceptive Like it'll tell you when
31:35
it's trying to be deceptive Yeah, it
31:38
admits it so so the industry is
31:40
moving in a direction of like greater
31:42
transparency or greater explainability which is good,
31:44
but I think many people's greatest fear
31:46
and I probably share this is We
31:48
hook AI systems up to very powerful
31:50
weapon systems and they make decisions that
31:52
are totally unexplainable Inexplicable and I think we're
31:54
gonna avoid that for what it's worth. I
31:56
don't think that's like a real near -term
31:58
risk, but think we need
32:01
more progress in this direction,
32:03
for sure. Yeah, that's probably not
32:05
a near -term risk for the
32:07
US, but others may be
32:09
more willing to allow that to
32:11
happen, right? 100%, yeah. So
32:15
since you sit on top of
32:17
this big data pillar and
32:20
the fact that these
32:22
huge amounts of data have
32:25
now been ingested, how
32:27
do we ensure security and
32:29
privacy of sensitive information, particularly
32:33
when it has, I mean, there's personal privacy,
32:36
but particularly when it has national security implications,
32:38
right? You need to have as much data
32:40
as possible, the more the better. Something like
32:42
you just explained a few moments ago. Is
32:45
there a way we can ensure the security
32:47
of that information? Yes,
32:50
this is possible. I think
32:52
that there's, the AI industry has
32:54
gotten potentially a bad reputation
32:56
because in some cases, some of
32:58
the most prominent actors are
33:00
a little bit loose with how
33:02
they work with data. But
33:04
I think there's no fundamental reason
33:07
why AI and rigorous standards
33:09
around data security and privacy are
33:11
fundamentally at odds. So for
33:13
example, in our work with government
33:15
work, we're authorized at Fedoramp
33:17
High. We comply with all the
33:19
necessary data security authorizations, and
33:21
then we employ these pretty deeply
33:23
in our methods for how
33:25
we've developed this technology for national
33:28
security use cases. So I
33:30
think it's important, and I think
33:32
probably what's more important than
33:34
anything is that we ensure that
33:36
for all national security purposes,
33:38
we're doing things right and properly.
33:40
And I think that's very
33:42
much feasible and possible. So
33:45
Alex, as we sort of
33:48
just, again, sort of look a
33:50
little bit forward and the
33:52
implications of AI for geopolitics sort
33:54
of call it that. What
33:56
are the emerging, we just talked
33:58
about this inference capability. But what
34:00
are the emerging technologies that
34:02
you believe will have the AI
34:04
emerging technologies that you think will have
34:07
the greatest impact on national security? I
34:10
think that there's a few
34:12
tranches. I think the first
34:14
is obviously AI and then
34:16
AI applied to every field.
34:18
So AI applied to cyber
34:20
capabilities, AI applied to greater levels
34:23
of autonomy, AI applied to
34:25
biology and chemistry and medicine,
34:27
AI applied to advanced manufacturing.
34:29
So you can peer down
34:31
each of these threads and
34:33
see just sort of like, you
34:35
know, I expect sort of
34:37
10x or greater improvements
34:39
in each of these
34:41
areas, which is going to
34:43
be pretty incredible. But
34:45
I would say specifically, if
34:48
you think from a
34:50
military standpoint, the use
34:52
of AI that is
34:54
quite fundamentally disruptive
34:56
is in autonomy.
35:00
And, you know,
35:02
the replacement
35:04
of manned systems to
35:06
unmanned systems to autonomous systems
35:08
to a treatable autonomous systems,
35:10
like that entire chain or
35:13
sort of like progression pattern,
35:15
we're seeing basically play out
35:17
in Ukraine. Or like, you
35:19
know, some folks have mentioned
35:21
to me that basically the level
35:23
of iteration and the fully
35:25
autonomous systems are going to be
35:28
deployed either by the Ukrainians or
35:30
the Russians are moving just incredibly
35:32
quickly on the drone level.
35:34
This flips military doctrine on its
35:36
head a little bit because you
35:38
have a very different calculus,
35:40
you have a very different reality,
35:42
and all of a sudden you
35:44
get to a point where
35:46
the, you know, warfare becomes to
35:48
some extent somewhat definitely difficult to
35:50
fully perceive and comprehend
35:53
and control. And so,
35:55
autonomy is clearly one
35:58
major theme, which I think is of... And
36:00
then I think from an
36:02
intelligence standpoint, I think that the
36:04
world has to grapple with
36:06
this sort of like new reality,
36:08
certainly, of open source intel.
36:10
And there's just so much information
36:12
out there. And how do
36:14
we think about all that? I
36:16
mean, I think these clearly
36:18
seem like the big themes that
36:20
seem quite important. So that's
36:22
a great point, Alex. And if
36:24
you think about it, not
36:26
only from an intelligence, but if
36:28
you're an analyst almost of
36:30
any kind looking at data, whether
36:32
it's geopolitical data or whatever,
36:34
it's becoming impossible to think about
36:36
doing that without AI right
36:38
next to you helping you. The
36:41
amount of information is so vast
36:43
that there's just no way for a
36:46
human being to go through it
36:48
anymore. All
36:51
right, let me ask you
36:53
for a prediction. How far
36:56
away are we from artificial
36:58
general intelligence? It's
37:03
the challenge is so hard to
37:05
define exactly what it is. But
37:08
I do think we're sort of
37:10
some small number of years, let's
37:12
say under five years from very
37:14
powerful AI systems that will vastly
37:16
outperform humans in many areas and
37:18
still be somehow worse than humans
37:21
in a host of other areas.
37:23
I mean, in some sense, we
37:25
already have them today, but I
37:27
think that trend will continue and
37:29
over the next few years, we're
37:31
going to see progressively, basically we'll
37:33
see systems that are better than
37:36
humans at more things while still
37:38
being worse than humans at others.
37:40
Sorry, Alex. Last question. Are you
37:42
as worried as some AI luminaries
37:44
who've worn against a
37:48
almost apocalyptic future where AI takes over?
37:50
I mean, if you listen to them,
37:52
that's sort of the path that they're
37:54
going down. Where do you come out
37:56
on that? I
37:59
think that Instead, if you look at
38:01
it, I think that there's a, you
38:06
know, with any discontinuous
38:08
technology, there's obviously always a
38:10
fair amount of risk, but
38:12
the most complicated question is
38:14
whether or not AI develops
38:16
the ability to self -improve. And
38:19
this, I think,
38:21
is the core, you
38:23
know, I think when
38:25
people have, you know,
38:27
some called doomer or, you know,
38:29
very concerned about AI risks and
38:31
others aren't, I think the core
38:33
question is, do they believe whether
38:35
or not AI can self -improve? Because
38:38
in a world where AI can't
38:40
really self -improve, then I think risks
38:42
are very manageable, right? Like, you
38:44
know, we're developing a technology, we'll
38:46
know what the limits are. If
38:48
we get to any point that's
38:50
concerning, we'll be able to, you
38:52
know, stop it. That's a governable
38:54
technology. If instead you get to
38:56
a point where AI is meaningfully
38:58
self -improving, which is clearly plausible,
39:00
but we have not seen
39:02
yet in a very in
39:05
a very real way.
39:07
But if you get that,
39:09
then you have this technology, this,
39:11
effectively this form of like digital
39:13
life that evolves at a faster rate
39:15
than humans do, and, you know,
39:17
then you might get worried that natural
39:20
selection might benefit the sort of
39:22
AI systems versus the human systems. So
39:24
I think that really is like
39:26
where the question boils down to. That's
39:28
kind of how I frame it.
39:30
If we see real self -improvement out
39:33
of AI models, I
39:35
think we ought to really take a
39:37
step back and think quite hard about the
39:39
technology. In a paradigm where we still
39:41
don't, then I think we need to be
39:43
more worried about misuse and be more
39:45
worried about, okay, how will that actually utilize
39:47
this technology and what does that mean
39:50
for humans? Alex,
39:52
that was a great discussion. Thanks so much. I mean,
39:54
there was a lot to think about there, and
39:56
thanks for taking the time to be on the podcast.
39:59
Can't thank you so much for me. me. I I
40:01
agree, Intelligence Matters. Matters. That was Alex Wang.
40:03
I'm was Alex Wang. I'm
40:05
Andy join us Please join us
40:07
next week for another episode
40:09
of Intelligence Matters. And you
40:12
can always reach us at always
40:14
reach us at gmail .com. pod@gmail.com.
40:21
Intelligence Matters is produced
40:23
by Steve Dorsey assistance from
40:25
from Ashley Barry. Matters is
40:27
a production of of Beacon Global
40:30
Strategies.
Podchaser is the ultimate destination for podcast data, search, and discovery. Learn More