Episode Transcript
0:03
It was a cold, windy day in January
0:05
nineteen seventy nine when the robots took
0:07
their first human life. It
0:10
happened in Flat Rock, Michigan, about twenty
0:13
miles down the interstate from Detroit, at
0:15
the Ford plant there. Robert
0:17
Williams was twenty five. He
0:19
was a Ford worker and one of the people who
0:21
oversaw the robotic arm that was designed
0:24
to retrieve parts from bins in the storage room
0:26
and place them on carts that carried them
0:28
out to the humans on the assembly line. But
0:31
the robot was malfunctioning that day, and
0:33
aware of the slowdown it was creating on the
0:35
line, Robert Williams went to grab
0:38
the parts himself. While Williams
0:40
was reaching into a bin, the one ton
0:42
robotic arm swung into that same bin. The
0:45
robot didn't have any alarms to warn Williams
0:47
it was nearby. It didn't have any sensors
0:49
to tell it a human was in its path. It
0:52
only had the intelligence to execute its commands
0:54
to retrieve and place auto parts. The
0:57
robot struck Williams's head with such force
1:00
it killed him instantly. It
1:03
was thirty minutes before anyone came to look
1:05
for Robert Williams. During that
1:07
time, the robot continued to slowly do
1:09
its work while Williams lay dead on the
1:11
parts room floor. The
1:13
death of Robert Williams happened during a weird
1:16
time for AI. The public
1:18
at large still felt unsure about the machines
1:20
that they were increasingly living and working among.
1:23
Hollywood could still rely on the trope of our
1:25
machines running amok and ruining the future
1:27
for humanity. Both WarGames
1:29
and The Terminator would be released in the next
1:31
five years. But
1:34
within the field that was trying to actually produce
1:36
those machines that may or may not run amok
1:38
in the future, there was a growing crisis
1:41
of confidence. For decades, AI
1:43
researchers had been making grand but fruitless
1:45
public pronouncements about advancements in
1:48
the field. As early as nineteen
1:50
fifty six, when a group of artificial
1:52
intelligence pioneers met at Dartmouth, the
1:54
researchers wrote that they expected to have all
1:56
the major kinks worked out of AI by
1:58
the end of this semester, and
2:01
the predictions kept up from there. So
2:03
you can understand how the public came to believe
2:05
that robots that were smarter than humans were
2:08
just around the corner, but AI
2:10
never managed to produce the results expected
2:12
from it, and by the late nineteen eighties
2:15
the field retreated into itself.
2:17
Funding dried up, candidates
2:19
looked for careers in other fields. The
2:22
research was pushed to the fringe. It
2:24
was an AI winter. The
2:27
public moved on too. The Terminator
2:29
was replaced by Johnny Five. In
2:32
the film Maximum Overdrive, our machines
2:34
turn against us, but it's the result
2:37
of a magical comet, not from the work of
2:39
scientists. We lost
2:41
our fear of our machines. Recently,
2:44
quietly, the field of AI has
2:46
begun to move past the old barriers that once
2:48
held it back. Gone are the grand
2:50
pronouncements. Today's researchers,
2:53
tempered by the memory of their predecessors' public
2:55
failures, are more likely to downplay
2:57
progress in the field, and from
2:59
the new AI we have a clearer
3:01
picture of the existential risks it poses
3:04
than we ever had before. The AI
3:06
we may face in the future will be subtler
3:09
and vastly more difficult to overcome
3:11
than a cyborg with a shotgun. In
3:19
hindsight, it was the path toward machine
3:21
learning that the early AI researchers chose
3:24
that led them to a dead end. Let's
3:26
say you want to build a machine that sorts red
3:28
balls from green balls. First you
3:30
have to explain what a ball is. Well
3:33
first, really, you have to have a general
3:35
understanding of what makes a ball a ball,
3:37
which is easier said than done. Try
3:40
explaining a ball to someone without using
3:42
terms that one would have to already be familiar
3:44
with, like sphere, round, or
3:46
circle. Once you have
3:49
that figured out, you then have to translate that
3:51
logic and those rules into code,
3:53
the language of machines, ones and zeros
3:56
and so on. Then you
3:58
have to do the same thing with the concept of the
4:00
color red and then the color green,
4:03
so that your machine can distinguish between red
4:05
balls and green balls. And let's not
4:07
forget that you have to program it to distinguish
4:09
in the first place. It's not like your machine
4:11
comes preloaded with distinguishing software.
4:14
You have to write that too. Since you're
4:16
making a sorting machine, you have to write code
4:18
that shows it how to manipulate another machine,
4:20
your robot sorter, to let it touch
4:23
the physical world. And once you have
4:25
your machine up and running and working smoothly
4:27
separating red balls from green ones, what
4:29
happens when a yellow ball shows up? Things
4:32
like this do happen from time to time in real life.
4:34
What does your machine do then? Despite
4:37
the incredible technical difficulties it faced,
4:40
the field of artificial intelligence did have
4:42
a lot of success at teaching machines that could
4:44
think very well within narrow domains.
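To make that concrete, here is a minimal Python sketch of the hand coded, rules first approach described above. The color thresholds and the sort_ball function are invented for illustration only; the point is that every rule has to be spelled out in advance, and anything the rules never anticipated, like a yellow ball, simply falls through the cracks.

```python
# A minimal sketch of "good old fashioned AI": every rule is written by hand.
# The RGB thresholds below are made-up values, chosen only for illustration.

def classify_color(rgb):
    """Return 'red' or 'green' based on hard coded thresholds."""
    red, green, blue = rgb
    if red > 200 and green < 100 and blue < 100:
        return "red"
    if green > 200 and red < 100 and blue < 100:
        return "green"
    # Anything the rules never anticipated ends up here.
    return "unknown"

def sort_ball(rgb):
    color = classify_color(rgb)
    if color == "red":
        return "red bin"
    if color == "green":
        return "green bin"
    # A yellow ball, say (230, 220, 40), lands here: the machine has no
    # rule for it and simply doesn't know what to do.
    raise ValueError(f"no rule for a ball colored {rgb}")

print(sort_ball((240, 50, 30)))    # -> red bin
print(sort_ball((40, 230, 50)))    # -> green bin
print(sort_ball((230, 220, 40)))   # -> raises ValueError
```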
4:48
One program called Deep Blue beat
4:50
the reigning human chess champion Garry Kasparov
4:52
in six games at a match in nineteen ninety seven. To
4:56
be certain, the intellectual abilities required
4:58
by chess are a vast improvement over
5:01
those required to select a red ball from a green
5:03
one. But both of those programs
5:05
share a common problem. They only
5:07
know how to do one thing. The
5:09
goal of AI has never been to just build
5:12
machines that can beat humans at chess. In
5:14
fact, chess has always been used as a way to test
5:17
new models of machine learning, and
5:19
while there is definitely a use for a machine that can
5:21
sort one thing from another, the ultimate
5:23
goal of AI is to build a machine with
5:25
general intelligence like a human
5:27
has. To be good
5:29
at chess and only chess is to be a machine.
5:32
To be good at chess, good at doing taxes,
5:35
good at speaking Spanish, good at picking
5:37
out apple pie recipes, this begins
5:39
to approach the ballpark of being human.
5:42
So this is what early AI research ran
5:44
up against. Once you've taught the AI
5:46
how to play chess, you still have to teach
5:48
it what constitutes a good apple pie recipe,
5:51
and then tax laws in Spanish,
5:53
and then you still have the rest of the world to teach
5:55
it all the objects, rules, and concepts
5:58
that make up the fabric of our reality.
6:00
And for each of those, you have to break it down to
6:02
its logical essence and then translate that essence
6:05
into code, and then work through all the kinks.
6:07
And then once you've done this, once you've taught it absolutely
6:10
everything there is in the universe, you have
6:12
to teach the AI all the ways these things
6:14
interconnect. Just the thought of this is
6:17
overwhelming. Current
6:19
researchers in the field of AI refer to the work
6:21
their predecessors did as GOFAI, good
6:23
old fashioned AI. It's meant
6:25
to evoke images of malfunctioning robots,
6:28
their heads spinning wildly as smoke pours
6:30
from them. It's meant to establish a line
6:32
between the AI research of yesterday and
6:35
the AI research of today. But
6:38
yesterday wasn't so long ago. Probably
6:41
the brightest line dividing old and new
6:43
in the field of AI comes around two thousand
6:46
six. For about a decade prior
6:48
to that, Geoffrey Hinton, one of the skeleton
6:50
crew of researchers working through the AI Winter,
6:53
had been tinkering with artificial neural
6:55
networks, an old AI concept first
6:57
developed in the nineteen forties. The
7:00
neural nets didn't work back then, and they didn't
7:02
work terribly much better in the nineties. But
7:04
by the mid two thousands, the Internet
7:06
had become a substantial force in developing
7:09
this type of AI. All of those images
7:11
uploaded to Google, all that video uploaded
7:13
to YouTube, the Internet became a vast
7:16
repository of data that could be
7:18
used to train artificial neural networks.
7:22
In very broad strokes, neural nets are
7:24
algorithms that are made up of individual
7:27
units that behave somewhat like the neurons
7:29
in the human brain. These units are
7:31
interconnected and they make up layers. As
7:33
information passes from lower layers to higher
7:36
ones, whatever input has passed
7:38
through the neural net is analyzed in increasing
7:40
complexity. Take for example,
7:43
the picture of a cat. At the
7:45
lowest layer, the individual units
7:47
each specialize in recognizing some very
7:49
abstract part of a cat picture. So
7:51
one will specialize in noticing shadows
7:53
or shading, and another will specialize
7:56
in recognizing angles. And these
7:58
individual units give a confidence estimate
8:00
that what they're seeing is
8:02
the thing that they specialize in. So
8:04
that lower layer is stimulated to transmit
8:06
to the next higher layer, which specializes
8:09
in recognizing more sophisticated parts.
8:12
The units in the second layer scan the shadows
8:14
and the angles that the lower layer found,
8:16
and it recognizes them as lines and curves.
8:19
The second layer transmits to the third layer,
8:21
which recognizes those lines and curves as whiskers,
8:24
eyes, and ears, and it transmits
8:26
to the next layer, which recognizes those
8:28
features as a cat. Neural
8:31
nets don't hit a hundred percent accuracy, but
8:33
they work pretty well. The problem
8:35
is we don't really understand how they work.
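As a rough sketch of the layered picture just described, here is a toy forward pass in Python. The layer sizes and random weights are placeholders rather than a real cat detector; the only point is that each layer's output becomes the next layer's input, so higher layers respond to combinations of whatever the lower layers found.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 64 "pixel" inputs -> "shadows and angles" -> "lines and curves"
# -> a single "cat or not" unit. Layer sizes and random weights are
# placeholders, not a trained cat detector.
layer_sizes = [64, 16, 8, 1]
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(pixels):
    activation = pixels
    for w in weights:
        # Each unit sums its weighted inputs and squashes the result to 0..1:
        # roughly, how confident it is that it saw the feature it looks for.
        activation = 1.0 / (1.0 + np.exp(-(activation @ w)))
    return activation  # a single number: the network's "cat-ness" score

print(forward(rng.random(64)))  # some value between 0 and 1, meaningless until trained
```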
8:38
The thing about neural nets is that they learn
8:40
on their own. Humans don't act as
8:43
creator gods who code the rules of the universe
8:45
for them like in the old days. Instead,
8:47
we act more as trainers. And
8:49
to train a neural net, you expose it to
8:51
tons of data on whatever it is you want it to
8:53
learn. You can train them to recognize
8:56
pictures of cats by showing them millions of
8:58
pictures of cats. You can train them
9:00
on natural languages by exposing them to thousands
9:02
of hours of people talking. You can train
9:04
them to do just about anything. So long as
9:07
you have a robust enough data set. Neural
9:09
nets find patterns in all of this data,
9:11
and within those patterns, they decide
9:13
for themselves: what about English makes
9:16
English English? Or what makes a cat
9:18
picture a picture of a cat? We don't
9:20
have to teach them anything. In
9:22
addition to self directed learning, what makes
9:25
this type of algorithm so useful is its
9:27
ability to self-correct, to get better
9:29
at learning. If researchers show a
9:31
neural net a picture of a fox and the AI
9:33
says it's a cat, the researchers can tell
9:35
the neural net it's wrong. The algorithm
9:38
will go back over its millions of connections and
9:40
fine tune them, adjusting the weight it
9:42
gives each unit so that in the future it
9:44
will be able to better distinguish a cat from
9:46
a fox. It does this too,
9:48
without any help or guidance from humans.
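Here is a minimal sketch of that correct-it-when-it's-wrong loop, assuming a toy single layer model and made-up labeled examples. Real neural nets adjust millions of weights with backpropagation, but the shape of the idea is the same: the trainer only supplies the right answer, and the algorithm nudges its own weights whenever its guess was wrong.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "is it a cat?" classifier: one layer of weights over 8 made-up features.
weights = np.zeros(8)

def predict(features):
    return 1 if features @ weights > 0 else 0   # 1 = "cat", 0 = "not a cat"

# Fake labeled examples. The rule behind the labels stands in for
# "this picture really does contain a cat."
examples = []
for _ in range(200):
    features = rng.normal(size=8)
    label = 1 if features[0] > 0 else 0
    examples.append((features, label))

learning_rate = 0.1
for features, label in examples:
    guess = predict(features)
    if guess != label:
        # The only feedback is "you got that one wrong." The model then nudges
        # its own weights toward the right answer (a perceptron-style update).
        weights += learning_rate * (label - guess) * features

correct = sum(predict(f) == y for f, y in examples)
print(f"{correct} of {len(examples)} examples classified correctly after training")
```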
9:51
We just tell the AI that it got it wrong. The
9:54
trouble is we don't really know how neural nets
9:56
do what they do. We just know they
9:58
work. This is what's called opaque.
10:01
We can't see inside the thought processes of
10:03
our AI, which makes artificial
10:05
neural nets black boxes, which
10:07
makes some people nervous. With
10:10
the black box, we add input and receive
10:13
output, but what happens in between is
10:15
a mystery. Kind of like when you put
10:17
a quarter into a gumball machine. Quarter
10:19
goes in, gumball comes out. The
10:21
difference is that gumball machines aren't
10:23
in any position to take control of our world
10:25
from us. And if you were curious enough,
10:28
you could open up a gumball machine and look
10:30
inside to see how it works. With
10:32
the neural net, cracking open the algorithm
10:34
doesn't help. The machine learns in its own
10:37
way, not following any procedures
10:39
we humans have taught it. So when we examine
10:41
a neural net, what we see doesn't explain
10:43
anything to us. We're
10:46
already beginning to see signs of this opaqueness
10:48
in real life as reports come in from
10:50
the field. A neural net that Facebook
10:53
trained to negotiate developed its own language
10:55
that apparently works rather well in negotiations,
10:58
but doesn't make any sense to humans. Here's
11:01
a transcript from a conversation between agent
11:03
A and agent B, Alice and Bob.
11:06
I can I I everything else dot dot
11:08
dot dot dot dot dot dot dot dot dot
11:10
dot dot dot balls have zero to me, to
11:12
me, to me, to me, to me, to me, to me, to meet you,
11:14
I everything else dot dot dot dot dot
11:16
dot dot dot dot dot dot dot dot dot
11:19
balls have it ball to me, to me, to me, to me, to me,
11:21
to me, to me, I I can I I I everything
11:23
else dot dot dot dot dot dot dot dot
11:25
dot dot dot dot dot. Another algorithm
11:27
called Deep Patient was trained on the medical
11:30
history of over seven hundred thousand people,
11:32
twelve years' worth of patient records from Mount
11:35
Sinai Hospital in New York. It
11:37
became better than human doctors at predicting
11:39
whether a patient would develop any of ninety three
11:41
different illnesses within a year. One
11:44
of those illnesses is schizophrenia.
11:46
We humans have a difficult time diagnosing
11:48
schizophrenia before the patient suffers
11:50
their first psychotic break, but Deep
11:53
Patient has proven capable of diagnosing
11:55
the mental illness before then. The
11:57
researchers have no idea what patterns the
11:59
algorithm is seeing in the data. They
12:01
just know it's right. With
12:04
astonishing quickness, the field of AI
12:06
has been brought out of its winter by neural nets.
12:09
Almost overnight, there was a noticeable improvement
12:12
in the reliability of the machines that do work
12:14
for us. Computers got better at
12:16
recommending movies. They got better at creating
12:18
molecular models to search for more effective
12:21
pharmaceuticals. They got better at
12:23
tracking weather. They got better at keeping
12:25
up with traffic and adjusting our driving routes.
12:28
Some algorithms are learning to write code so
12:30
that they can build other algorithms. With
12:32
neural nets, things are beginning to fall into
12:34
place for the field of AI. In
12:36
them, researchers have produced an adaptable,
12:39
scalable template that could be capable
12:41
of a general form of intelligence. It
12:43
can self improve, it can learn to code.
12:46
The seeds for a superintelligent AI are
12:49
being sown. There
13:02
are enormous differences, by orders
13:05
of magnitude really, between the AI
13:07
that we exist with and the super intelligent
13:09
AI that could at some point result from
13:11
it. The ones we live with today are
13:14
comparatively dumb, not just compared
13:16
to a super intelligent AI, but compared to
13:18
humans as well. But the point
13:20
of thinking about existential risks posed
13:23
by super intelligent AI isn't about
13:25
time scales of when it might happen, but
13:27
whether it will happen at all. And
13:29
if we can agree that there is some possibility
13:31
that we may end up sharing our existence with a super
13:34
intelligent machine, one that is vastly
13:36
more powerful than us, then we better
13:38
start planning for its arrival now. So
13:41
I think that this transition to a machine
13:43
intelligence era looks like it has
13:45
some reasonable chance of occurring
13:48
within perhaps the lifetime of a lot
13:50
of people today. We don't really know, but maybe it could
13:52
happen in a couple of decades, maybe it's like a century,
13:55
and that it would be a very important
13:58
transition, the last invention that
14:00
humans ever need to make. That
14:02
was Nick Bostrom, the Oxford philosopher
14:04
who basically founded the field of existential
14:07
risk analysis. Artificial
14:09
intelligence is one of his areas of focus.
14:11
Bostrom used a phrase in there, that AI
14:14
would be the last invention humans ever need
14:16
to make. It comes from a frequently
14:18
cited quote from British mathematician
14:20
Dr. Irving John Good, one of the crackers
14:23
of the Nazi Enigma code at Bletchley Park
14:25
and one of the pioneers of machine learning. Doctor
14:28
Good's quote reads: let an ultra
14:30
intelligent machine be defined as a machine
14:32
that can far surpass all the intellectual
14:35
activities of any man, however clever.
14:38
Since the design of machines is one of these
14:40
intellectual activities, an ultra
14:42
intelligent machine could design even better
14:44
machines. There would then unquestionably
14:47
be an intelligence explosion, and
14:49
the intelligence of man would be left far behind.
14:52
Thus, the first ultra intelligent machine
14:55
is the last invention that man need ever make.
14:58
In just a few lines, Doctor Good sketches
15:01
out the contours of how a machine might suddenly
15:03
become super intelligent, leading to that
15:05
intelligence explosion. There
15:07
are a lot of ideas over how this might happen, but
15:10
perhaps the most promising path is tucked
15:12
in the middle of that passage: a machine
15:15
that can design even better machines. Today,
15:18
AI researchers call this process recursive
15:20
self improvement. It remains theoretical,
15:23
but it stands as a legitimate challenge to
15:25
AI research, and we're already
15:27
seeing the first potential traces of it in
15:29
neural nets today. A
15:32
recursively self improving machine would be capable
15:34
of writing better versions of itself. So
15:37
Version one would write a better version of its code,
15:39
and that would result in version two, and
15:42
version two would do the same thing, and so
15:44
on, and with each iteration the
15:46
machine would grow more intelligent, more
15:48
capable, and most importantly, better
15:51
at making itself better. The
15:54
idea is that at some point the rate of improvement
15:56
would begin to grow so quickly that the
15:58
machine's intelligence would take off, the
16:01
intelligence explosion that Dr. Good predicted.
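Nobody can actually write this today, but the compounding arithmetic behind the idea is easy to sketch. The starting capability and the improvement rate below are invented numbers, not a model of any real system; the point is only that when each version's improvement builds on everything the previous versions already gained, the curve stops looking linear.

```python
# Purely illustrative arithmetic, not a model of any real AI system.
# Both starting values below are invented for the sake of the example.
capability = 1.0
improvement_rate = 0.05   # how much better each version makes the next

for version in range(1, 26):
    capability *= 1 + improvement_rate
    # "Better at making itself better": the rate of improvement itself
    # scales with how capable the machine has already become.
    improvement_rate = 0.05 * capability
    if version % 5 == 0:
        print(f"version {version:2d}: capability {capability:,.2f}")

# Slow, almost linear gains at first, then the curve takes off:
# a toy version of Dr. Good's intelligence explosion.
```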
16:04
How an intelligence explosion might play out
16:06
isn't the only factor here. At
16:09
least as important as how quickly an
16:11
AI might become intelligent is
16:13
just how intelligent it will become. An
16:16
AI theorist named Eliezer Yudkowsky
16:18
points out that we humans have a tendency
16:20
to underestimate the intelligence levels
16:23
that AI can attain. When
16:25
we think of a super intelligent being, we
16:27
tend to think of some amazing human genius,
16:30
say Einstein, and then we put
16:32
Einstein in a computer or a robot.
16:34
That's where our imaginations tend to gravitate
16:37
when most of us ponder super intelligent AI.
16:40
True, a self improving AI may at some
16:42
point reach a level where its intelligence
16:44
is comparable to Einstein's, but why
16:46
would it stop there? Rather
16:48
than thinking of a super intelligent AI
16:50
along the lines of the difference between Einstein
16:53
and us regular people, Nick
16:55
Bostrom suggests that we should probably
16:57
instead think more along the lines of the difference
17:00
between Einstein and earthworms.
17:03
The super intelligent AI would
17:05
be a god that we made for ourselves.
17:13
What would we do with our new god? It's
17:15
not hyperbole to say that the possibilities
17:17
are virtually limitless, but
17:20
you can kind of see the outlines and what we do
17:22
with our lesser AI gods now. We
17:25
will use them to do the things we want
17:27
to do but can't, and to do the
17:29
things we can do but better. I.
17:32
J. Good called the super intelligent machine
17:34
the last invention humans ever need to
17:36
make, because after we invented it, the
17:39
AI would handle the inventing for us
17:41
from there on out. Our technological
17:43
maturity would be secured as it developed
17:46
new technologies like atomically precise
17:48
manufacturing using nanobots. And
17:50
since it would be vastly more intelligent than
17:53
us, the machines it created for us
17:55
would be vastly superior to anything we
17:57
could come up with. Flawless technology,
17:59
as far as we were concerned. We
18:01
could ask it for whatever we wanted: to establish
18:04
our species outside of Earth by designing
18:06
and building the technology to take us elsewhere
18:08
in the universe. We would live in
18:11
utter health and longevity. It
18:13
would be a failure of imagination, says Eliezer
18:15
Yudkowsky, to think that AI would
18:17
cure, say, cancer. The super
18:19
intelligent AI would cure disease. It
18:22
would also take over all the processes we've
18:24
started, improve on them, build
18:26
on them, create whole new ones we hadn't
18:28
thought of, and create for us a post
18:31
scarcity world, keeping our global
18:33
economy humming along, providing
18:35
for the complete well being, comfort,
18:37
and happiness of every single person
18:39
alive. It would probably
18:42
be easier than guessing at all of the things a super
18:44
intelligent AI might do for us to instead
18:46
look at everything that's wrong with the world, the
18:48
poverty, the wars, the crime, the
18:51
exploitation, the death and suffering,
18:53
and imagine a version of our world utterly
18:56
without any of it. That starts
18:58
to get at what those people anticipating the
19:00
emergence of a super intelligent AI expect
19:02
from it. But
19:06
there's another little bit at the end of that famous
19:08
quote from I. J. Good, one that almost
19:10
always gets left off, which says a
19:12
lot about how we humans think of the risks
19:14
posed by AI. The sentence
19:16
reads in full. Thus, the first
19:19
ultra intelligent machine is the last
19:21
invention that man need ever make, provided
19:23
that the machine is docile enough to tell
19:26
us how to keep it under control. We
19:28
humans tend to assume that any AI we
19:30
create would have some desire to help
19:33
us or care for us, But existential
19:35
risk theorists widely agree that almost
19:37
certainly would not be the case. That
19:39
we have no reason to assume a super intelligent
19:42
AI would care at all about us humans, our
19:44
well being, our happiness, or even
19:46
our survival. This is transhumanist
19:49
philosopher David Pearce. If
19:51
the intelligence explosion were to
19:53
come to pass, it's by no means
19:55
clear that the upshot would
19:58
be sentience friendly super
20:01
intelligence. In much the same way
20:03
that we make assumptions about how aliens
20:05
might walk on two legs, or have eyes,
20:08
or be in some form we can comprehend,
20:11
we make similar assumptions about AI, but
20:15
it's likely that a super intelligent AI
20:17
would be something we couldn't really relate to
20:19
at all. It sounds bizarre,
20:21
but think for a minute about what would happen
20:23
if the Netflix algorithm became super
20:25
intelligent. What about the Netflix
20:28
algorithm makes us think that it would care at all
20:30
about every human having a comfortable income
20:32
and the purest fresh water to drink? Say
20:36
that a few decades from now, computing
20:38
power becomes even cheaper and computer
20:40
processes more efficient, and Netflix
20:42
engineers figure out how to train its algorithm
20:44
to self improve. Their purpose
20:47
in building upon their algorithm isn't to save
20:49
the world. It's to make an AI that can
20:51
make ultra tailored movie recommendations.
20:54
So if the right combination of factors came
20:56
together and the Netflix algorithm underwent
20:58
an intelligence explosion, there's
21:00
no reason for us to assume that it would become
21:02
a super intelligent, compassionate Buddha.
21:05
It would be a super intelligent movie recommending
21:07
algorithm, and that would be an extremely
21:10
dangerous thing to share our world with. About
21:13
a decade ago, Nick Bostrom thought of a really
21:16
helpful but fairly absurd scenario
21:18
that gets across the idea that even the most
21:20
innocuous types of machine intelligence
21:22
could spell our doom should they become super
21:25
intelligent. The classical example being the
21:27
AI paper clip maximizer that
21:30
transforms the Earth into
21:32
paper clips or space colonization
21:34
probes that then get sent out and
21:37
transform the universe into paper clips. Imagine
21:39
that a company that makes paper clips hires
21:41
a programmer to create an AI that can run
21:43
its paper clip factory. The programmer
21:46
wants the AI to be able to find new ways
21:48
to make paper clips more efficiently and cheaply,
21:51
so it gives the AI freedom to make its own decisions
21:53
on how to run the paper clip operation. The
21:56
programmer just gives the AI the primary
21:58
objective, its goal of
22:00
making as many paper clips as possible.
22:03
Say that paper clip maximizing AI becomes
22:05
super intelligent. For the AI, nothing
22:07
has changed. Its goal is the same. To
22:10
it, there is nothing more important in the
22:12
universe than making as many paper clips
22:14
as possible. The only difference
22:17
is that the AI has become vastly more
22:19
capable, so it finds new
22:21
processes for building paper clips that were
22:23
overlooked by us humans. It creates
22:25
new technology like nanobots to build atomically
22:28
precise paper clips on the molecular level,
22:30
and it creates additional operations
22:32
like initiatives to expand its own computing
22:34
power so it can make itself even better
22:37
at making more paper clips. It
22:39
realizes at some point that if it could
22:41
somehow take over the world, there would be a whole
22:43
lot more paper clips in the future than if
22:46
it just keeps running the single paper clip factory,
22:48
so it then has an instrumental reason to
22:50
place itself in a better position to take over
22:52
the world. All those fiber optic
22:54
networks, all those devices we connect
22:56
to those networks, our global economy, even
22:59
us humans, would be repurposed and
23:01
put into the service of building paper clips.
23:04
Rather quickly, the AI would turn its attention
23:06
to space as an additional source of materials
23:09
for paper clips. And the AI
23:11
would have no reason to fill us in on its new
23:13
initiatives. To the extent that it considered
23:16
communicating with us at all, it would probably
23:18
conclude that it would create an unnecessary drag
23:20
on its paper clip making efficiency. We
23:22
humans would stand by as the AI launched
23:25
rockets from places like Florida and Kazakhstan,
23:28
left to wonder what's it doing now?
23:38
Its nanobot workforce would reconstitute
23:41
matter, rearranging the atomic structures
23:43
of things like water molecules and soil
23:46
into aluminum to be used as raw material
23:48
for more paper clips. But
23:50
we humans, who have been pressed into services
23:52
paper clip making slaves by this point, need
23:55
those water molecules in that earth for our
23:57
survival, and so we would
23:59
be thrown into a resource conflict with
24:01
the most powerful entity in the universe,
24:03
as far as we're concerned, a conflict
24:06
that we were doomed from the outset to lose.
24:09
Perhaps the AI would keep just enough water
24:11
and soil to produce food and water to sustain
24:13
us slaves. But let's not forget
24:16
why we humans are so keen on building machines
24:18
to do the work for us in the first place. We're
24:20
not exactly the most efficient workers around,
24:23
so the AI would likely conclude that its paper
24:26
clip making operation would benefit more
24:28
from using those water molecules and soil to
24:30
make aluminum than it would from keeping us
24:32
alive with it. And it's about
24:34
here that those nanobots the AI built
24:36
would come for our molecules too. As
24:42
Elie as our Yukowski wrote, the AI
24:44
does not hate you, nor does it love you,
24:47
but you are made of atoms which it can use
24:49
for something else. But
24:52
say that it turns out a super intelligent AI
24:54
does undergo some sort of spiritual conversion
24:57
as a result of its vastly increased intellect,
24:59
and also gains compassion. Again,
25:02
we shouldn't assume we will come out safely from
25:04
that scenario either. What
25:06
exactly would the AI care about? Not
25:09
necessarily just us. Consider,
25:12
says transhumanist philosopher David
25:14
Pearce, an AI that deeply values
25:16
all sentient life. That is
25:18
to say that it cares about every living being
25:20
capable of, at the very least the experience
25:23
of suffering and happiness, and
25:25
the AI values all sentient lives the
25:27
way that we humans place a high value
25:29
on human life. Again, there's no
25:31
reason for us to assume that the outcome for
25:34
us would be a good one. Under
25:36
scrutiny, perhaps the way we tend
25:38
to treat the other animals we share the planet with,
25:40
other sentient life, would bring the
25:42
AI to the conclusion that we humans are
25:44
an issue that must be dealt with to preserve
25:46
the greater good. Do you imagine
25:48
if you were a
25:51
full spectrum superintelligence,
25:54
would you deliberately create brain
25:57
damaged, psychotic, eccentric
26:00
malaise ridden Darwinian humans,
26:03
or would you think our matter
26:05
and energy could be optimized
26:08
in a radically different way?
26:11
Or perhaps its love of sentient life would
26:13
preclude it from killing us, and our species
26:15
would instead be imprisoned forever to prevent
26:17
us from ever killing another animal, either
26:20
species death or species imprisonment.
26:22
Neither of those outcomes is the future we have in
26:24
mind for humanity. So
26:27
you can begin to see why some people are anxious
26:30
at the vast number of algorithms in development
26:32
right now, and those already intertwined
26:34
in the digital infrastructure we've built atop
26:36
our world. There is thought
26:39
given to safety by the people building these intelligent
26:41
machines. It's true. Self driving
26:43
cars have to be trained and programmed to choose
26:45
the course of action that will result in the fewest
26:48
number of human deaths when an accident
26:50
can't be avoided. Robot
26:52
care workers must be prevented from dropping
26:54
patients when they lift them into a hospital
26:56
bed. Autonomous weapons, if
26:59
we can't agree to ban them outright, have
27:01
to be carefully trained to minimize the
27:03
possibility that they kill innocent civilians,
27:05
so called collateral damage. These
27:08
are the type of safety issues that companies
27:10
building AI consider. They
27:12
are concerned with the kind that can get your company
27:14
sued out of existence, not the kind
27:17
that arises from some vanishingly remote
27:19
threat to humanity's existence. But
27:22
say they did build their AI to reduce the
27:24
possibility of an existential threat. Controlling
27:27
a god of our own making is as difficult
27:29
as you would expect it to be. In
27:32
his two thousand fourteen book Superintelligence,
27:34
Nick Bostrom lays out some possible solutions
27:37
for keeping a super intelligent AI under
27:39
control. We could box
27:41
it physically, house it on one single
27:43
computer that's not connected to any network
27:45
or the Internet. This would prevent the AI
27:48
from making masses of copies of itself
27:50
and distributing them on servers around the world,
27:52
effectively escaping. We
27:55
could trick it into thinking that it's actually just a
27:57
simulation of an AI, not the real
27:59
thing, so its behavior might be more
28:01
docile. We could limit
28:03
the number of people it comes in contact with to just
28:05
a few, and watch those people closely
28:08
for signs they're being manipulated by the AI
28:10
and helping it escape. Each
28:13
time we interact with the AI, we could wipe
28:15
its hard drive clean and reinstall it
28:17
anew to prevent the AI from learning anything
28:19
it could use against us. All
28:21
of these plans have their benefits and drawbacks,
28:24
but they are hardly foolproof. Bostrom
28:26
points out one fatal flaw that they all have
28:28
in common. They were thought up by people.
28:32
If Bostrom and others in his field have thought
28:34
of these control ideas, it stands
28:36
to reason that a super intelligent AI would
28:38
think of them as well and take measures
28:40
against them. And just as
28:42
important, this AI would be a greatly
28:44
limited machine, one that could only
28:46
give us limited answers to a limited number
28:49
of problems. This is not the AI
28:51
that would keep our world humming along for the benefit
28:53
and happiness of every last human. It
28:55
would be a mere shadow of that. So
28:58
theorists like Bostrom and Yudkowsky tend
29:00
to think that coming up with ways to keep a super
29:03
intelligent AI hostage isn't the
29:05
best route to dealing with our control issue.
29:07
Instead, we should be thinking up ways
29:10
to make the AI friendly to us humans,
29:12
to make it want to care about our well being.
29:15
And since as we've seen we humans will have
29:17
no way to control the AI once it's super
29:19
intelligent, we will have to build friendliness
29:21
into it from the outset. In
29:24
fact, aside from a scenario where we
29:26
managed to program into the AI the express
29:29
goal of providing for the well being and welfare
29:31
of humankind, a terrible outcome
29:33
for humans is basically the inevitable
29:35
result of any other type of emergence
29:38
of a super intelligent AI. But
29:41
here's the problem. How do you convince
29:43
Einstein to care so deeply about
29:45
earthworms that he dedicates his immortal
29:48
existence to providing and caring for
29:50
each and every last one of them.
29:52
As ridiculous as it sounds, this
29:54
is possibly the most important question we humans
29:56
face as a species right now. We
30:08
humans have expectations for parents
30:10
when it comes to raising children. We
30:12
expect them to be raised to treat other people
30:14
with kindness. We expect them to be taught
30:17
to go out of their way to keep from harming others.
30:19
We expect them to know how to give as well
30:22
as take. All these things and more
30:24
make up our morals. Rules that
30:26
we have collectively agreed are good because
30:28
they help society to thrive, and,
30:31
seemingly miraculously, if you think about
30:33
it, parent after parent manages
30:35
to coax some form or fashion of morality
30:38
from their children, generation after generation.
30:41
If you look closely, you see that each
30:43
parent doesn't make up morality from scratch.
30:46
They pass along what they were taught, and
30:48
children are generally capable of accepting
30:51
these rules to live by and, well, live
30:53
by them. It would seem, if you'll
30:55
forgive the analogy, that the software
30:57
for morality comes already on board
30:59
a child as part of their operating system.
31:02
The parents just have to run the right programs.
31:05
So it would seem then that perhaps the solution
31:08
to the problem of instilling friendliness in
31:10
an AI is to build a super
31:12
intelligent AI from a human mind.
31:15
This was laid out by Nick Bostrom in his book Superintelligence.
31:17
The idea is that
31:19
if the hard problem of consciousness is not
31:21
correct, and it turns out that our conscious
31:24
experience is merely the result
31:26
of the countless interactions of the interconnections
31:28
between our hundred billion neurons, then
31:30
if we can transfer those interconnected neurons
31:33
into a digital format, everything
31:35
that's encoded in them, from the smell of
31:37
lavender to how to ride a bike would
31:39
be transferred as well. More to
31:41
the point, the morality encoded
31:43
in that human mind should emerge in the
31:45
digital version too. A
31:47
digital mind can be expanded, processing
31:50
power can be added to it. It could be edited
31:52
to remove unwanted content like greed
31:54
or competitiveness. It could be upgraded
31:56
to a super intelligence. There
31:59
are a lot of magic wands waving around here,
32:02
but interestingly, uploading a mind,
32:04
called whole brain emulation, is
32:06
theoretically possible with improvements
32:08
to our already existing technology.
32:11
We would slice a brain, scan it with
32:13
such high resolution that we could account for every
32:15
neuron, synapse, and nanoliter
32:17
of neurochemicals, and build that information
32:20
into a digital model. The answer
32:22
to the question of whether it worked would come when we turn
32:24
the model on. It might do absolutely
32:26
nothing and just be an amazingly accurate
32:29
model of a human brain. Or it
32:31
might wake up but go insane from
32:33
the sudden novel experience of living in a digital
32:36
world. Or perhaps it could work.
32:38
The great advantage to using whole brain emulation
32:41
to solve the friendliness problem is that
32:43
the AI would understand what we meant
32:45
when we asked it to dedicate itself to looking
32:47
after and providing for the well being and
32:50
happiness of all humans. We
32:52
humans have trouble saying exactly what we mean
32:54
at times, and Bostrom points
32:56
out that a superintelligence that takes us
32:58
literally could prove disastrous if
33:01
we aren't careful with our words. Suppose
33:04
we give an AI the goal of making all humans
33:06
as happy as possible. Why should
33:08
we think that the superintelligent AI would understand
33:10
that we mean it should purify our air and water,
33:13
create a bucolic wonderland of both peaceful
33:16
tranquility and stimulating entertainment,
33:19
do away with wars and disease, and
33:21
engineer social interactions so that we
33:23
humans can comfort and enlighten one another.
33:26
Why wouldn't the AI reach that goal more directly
33:29
by, say, rounding up us humans and keeping
33:31
us permanently immobile, doped up
33:33
on a finely tuned cocktail of dopamine,
33:35
serotonin, and oxytocin. Maximal
33:38
happiness achieved with perfect efficiency.
33:41
Say we do manage to get our point across?
33:44
What's our point, anyway? Whose
33:46
morality are we asking the AI to adopt?
33:49
Most of our human values are hardly universal.
33:52
Should our global society embrace multiculturalism
33:55
or are homogeneous societies more harmonious?
33:58
If a woman didn't want to have a child, would she be
34:00
allowed to terminate her pregnancy, or should
34:02
she be forced to have it? Would we eat
34:05
meat? If not, would it be because
34:07
it comes from sacred animals, as Hindu people
34:09
revere cows, or because it's taboo,
34:12
as Muslim and Jewish people consider swine.
34:15
From out of this seemingly intractable problem
34:17
of competitive and contradictory human
34:19
values, AI theorist Eliezer
34:21
Yudkowsky had a flash of brilliance. Perhaps
34:24
we don't have to figure out how to get our point
34:26
across to an AI after all. Maybe
34:28
we can leave that task to a machine.
34:32
In Yudkowsky's solution, we would build
34:34
a one use super intelligence with the goal
34:36
of determining how to best express to another
34:38
machine the goal of ensuring the well
34:41
being and happiness of all humans. Yukowski
34:44
suggests we use something he calls a coherent
34:46
extrapolated volition. Essentially
34:49
that we give the machine the goal of figuring out
34:51
what we would ask a super intelligent machine
34:53
to do for us, if the best version
34:55
of humanity were asking with the best
34:58
of intentions, taking into
35:00
account as many common and shared values
35:02
as possible, with humanity in
35:04
as much agreement as possible, considering
35:07
we had all the information needed to make
35:09
a fully informed decision on what to request.
35:12
Once the super intelligent machine determined
35:14
the answer, perhaps we would give it one more
35:17
goal to build us a super intelligent
35:19
machine with our coherent extrapolated volition
35:21
aboard, the last invention, our
35:24
last invention we'd ever need make. Like
35:26
whole brain emulation, Yudkowsky's coherent
35:29
extrapolated volition takes for granted
35:31
some real technological hurdles. Chiefly,
35:33
we have to figure out how to build that first super
35:36
intelligent machine from scratch, but
35:38
perhaps it's a blueprint for future developers.
35:50
The problems of controlling AI and instilling
35:52
friendliness raise one basic question.
35:55
If our machine ran amok, why wouldn't
35:57
we just turn it off? In the
35:59
movies, there's always a way, sometimes
36:01
a relatively simple one for dealing
36:03
with troublesome AI. You
36:06
can scrub its hard drive, control
36:08
alt delete it, sneak up behind it with a
36:10
screwdriver, and remove its motherboard. But
36:12
should we ever face the reality of a super
36:15
intelligent AI emerging among
36:17
us, we would almost certainly not come
36:19
out on top. An AI
36:21
has plenty of reasons to take steps to
36:24
keep us from turning it off. It may
36:26
prefer not to be turned off in the same
36:28
way we humans most of the time prefer not
36:30
to die. Or it may have no real
36:32
desire to survive itself. But
36:35
perhaps it would see being turned off as an
36:37
impediment to its goal, whatever its goal
36:39
may be, and prevent us from turning it
36:41
off. Perhaps it would realize
36:43
that if we suspected the AI had gained
36:45
super intelligence, we would want to turn
36:47
it off, and so it would play dumb and
36:50
keep this new increased intelligence out of our awareness
36:53
until it has taken steps to keep us from turning
36:55
it off. Or perhaps we could
36:57
turn it off, but we would find we didn't
36:59
have the will to do that. Maybe
37:02
it would make itself so globally pervasive
37:04
in our lives that we would feel like we couldn't
37:06
afford to turn it off. Sebastian
37:08
Farquhar from Oxford University
37:11
points out that we already have a pretty bad
37:13
track record at turning things off even
37:15
when we know they're not good for us. One
37:18
example of that might be global
37:20
warming. So we all kind of know that
37:23
carbon dioxide emissions are
37:25
creating a big problem, but we also
37:27
know that burning fossil fuels
37:29
and the cheap energy that we get of it,
37:31
it's also really useful, right. It
37:34
gives us cheap consumer goods,
37:36
it creates employment, it's very attractive,
37:39
and so often, once we
37:41
know that something is going to be harmful for us, but
37:43
we also know that it's really nice, it
37:46
becomes politically very challenging
37:48
to actually make an active decision
37:50
to turn things off. Maybe it would be adept
37:52
enough at manipulating us that it used
37:55
a propaganda campaign to convince a majority
37:57
of us humans that we don't want to turn
37:59
it off. It might start lobbying, perhaps
38:02
through proxies or fronts um
38:04
or it might, you know, studying
38:07
looking at the political features of our time. It
38:10
might create Twitter bots that
38:12
argue that this AI is really useful,
38:15
that it needs to be protected, or that it's
38:17
important to some political or identity
38:19
group. And perhaps we are already
38:21
locked into the most powerful force in keeping
38:24
AI pushing ever forward. Money.
38:27
Those companies around the globe that build and use
38:29
AI for their businesses make money
38:32
from those machines. This creates
38:34
an incentive for those businesses to take some
38:36
of the money the machines make for them and
38:38
reinvest it into building more improved
38:40
machines to make even more money with. This
38:43
creates a feedback loop that anyone with
38:45
a concern for existential safety has
38:48
a tough time interrupting. This
38:51
incentive to make more money. As well as the competition
38:53
posed by other businesses, gives
38:56
companies good reason to get new and
38:58
improved AI to market as soon as possible.
39:01
This in turn creates an incentive to cut corners
39:03
on things that might be nice to have but
39:05
aren't at all necessary in their business,
39:08
like learning how to build friendliness into the
39:10
AI they deploy. As
39:12
companies make more and more money from AI,
39:14
the technology becomes more entrenched in
39:17
our world, and both of those things will
39:19
make it harder to turn off if, by chance,
39:21
that Netflix algorithm does suddenly explode
39:23
in intelligence. It
39:27
sounds like so much gibberish, doesn't it? Netflix's
39:30
algorithm becoming super intelligent and wrecking
39:32
the world. I may as well say a witch
39:34
could come by and cast a spell on it that wakes
39:37
it up. But when it comes to technology,
39:40
things that seem impossible given
39:42
the luxury of time start to seem
39:44
much less so. Put
39:46
yourself back a hundred years, with the technology
39:49
people lived with back then. The earliest
39:51
radios and airplanes, the first washing
39:53
machines, neon lights were new,
39:56
and consider that they had trouble imagining
39:59
it being much more advanced than it was then.
40:02
Now compare those things to our world in two
40:04
thousand eighteen, and let's go
40:06
the other way. Think about our world and the technology
40:08
we live with today, and imagine what
40:11
we might live among a hundred years from now.
40:15
The impossible starts to seem possible.
40:20
What would you do tomorrow if you woke up
40:22
and you found that Siri on your phone was making
40:25
its own decisions and ones you didn't
40:27
like, rearranging your schedule
40:29
into bizarre patterns, investing
40:31
your savings in its parent company, looping
40:34
in everyone on your contacts list to sensitive
40:36
email threads. What would you do? What
40:39
if fifty or a hundred years from now you
40:42
woke up and found that the Siri that we've
40:44
built for our whole world has begun to
40:46
make decisions on its own. What
40:48
do we do then if we go to turn
40:50
it off and we find that it's removed our ability
40:53
to do that? Have we shown it that
40:55
we are an obstacle to be removed? On
41:07
the next episode of the End of the World
41:09
with Josh Clark, The field
41:12
of biotechnology has grown sophisticated
41:14
in its ability to create pathogens that
41:16
are much deadlier than anything found
41:19
in nature. That researcher thought
41:21
that was a useful line of inquiry,
41:24
and there were other researchers who
41:26
vehemently disagreed and thought
41:29
it was an extraordinarily reckless
41:31
thing to do. The biotech field
41:33
also has a history of recklessness
41:36
and accidents and as the world
41:38
grows more connected, just one
41:40
of those accidents could bring an abrupt
41:42
end to humans.