Episode Transcript
0:00
Hello, Experiment listeners. Julia Longoria here. It's been a while. I miss you guys. I wanted to give you a little update on what I've been working on. Today, we're giving you a preview of a four-part series that just came out on a show called Unexplainable. I hope you'll listen to this trailer and then go subscribe to Unexplainable, wherever you get podcasts.
0:27
Do you mind if I ask you a few questions?
0:29
Sure, if it's correct.
0:32
Yeah, cool. Over the last few months, I've been cornering some very smart people to ask what I thought was a simple question.
0:40
That's a big question. It's hard to know how to answer questions like that.
0:47
The question is one I think a lot of us have right now: should we be worried about artificial intelligence?
0:56
But the answers from the greatest minds in AI surprised me.
1:00
I mean, I'll tell you, I'll tell you a story. Basically, it's sort of a parable.
1:08
They replied with fantastical tales, as if I'd asked them about the great mysteries of the universe.
1:14
So, so what does this have to do with AI? I'm making it.
1:19
Oh, so very broadly. Like, it was like Aesop's fables, AI edition.
1:23
Okay, should I start from the top? Suppose in the future there's an artificial intelligence.
1:33
One guy told a parable of a future superintelligent AI that could cause an apocalypse.
1:43
Let's give this superintelligent AI a simple goal: produce paper clips.
1:49
Be a paper clip?
1:52
And then there was another woman.
1:54
So, the octopus thought experiment goes like this. She imagined AI underwater.
1:56
We posit this octopus to be mischievous as well.
2:01
Still, other people told a story that sounded like it was out of the Bible.
2:06
She seems likely to drown. What should you do?
2:08
Imagining AI as a savior.
2:10
People wanting to be told what to do by some abstract force that they can't interact with? Like a God?
2:20
Yeah, like a God.
2:25
I haven't been sure what to make of these new robots that have entered our lives. And all these amazing stories from the greatest minds in AI made me wonder: maybe even these people don't know what to think.
2:41
Do you ever worry that in testing this you're sort of teaching the models to do this kind of nefarious stuff?
2:48
Um... yes.
2:50
Humans are just suckers for anything that looks human. Robots just take advantage of that directly.
2:55
The disagreements in the field of AI over what we should worry about (like, you know, look over there, Terminator; don't, you know, don't look over here, racism) have ended up feeling like a meditation: less about how to build the best robots on Elon Musk's Mars, more about how to be a good human on Earth.
3:23
I'm Julia Longoria. Good Robot, a four-part series from Unexplainable. Listen in the Unexplainable feed wherever you get podcasts.
3:54
I'm gonna go to sleep.