Episode Transcript
0:01
Hi listeners, Dave here. We'll be
0:04
back with a whole new season of
0:06
How God Works, starting in early March.
0:08
But in the meantime, we wanted to
0:10
share a few shows from our archives
0:12
that speak to some of the most
0:14
pressing issues we're all facing today.
0:16
One of the biggest involves
0:18
technology. AI in particular. Just
0:20
this week, the new administration
0:22
announced a plan to invest
0:24
$500 billion in AI development
0:26
over the next few years.
0:28
That's a hefty sum. But you
0:31
might wonder, what does that have to
0:33
do with religion? Turns out, more
0:35
than you might think. The
0:37
links between technology and religion
0:40
are back in the news.
0:42
In fact, a recent New
0:44
York Times article profiled a
0:46
surge of new faith-based tech
0:48
startups and new ways in
0:51
which religious leaders and institutions
0:53
are experimenting with technology, including
0:55
AI alter egos like Rabbi
0:57
Bot and sermons generated entirely
0:59
by AI, which ultimately raised
1:02
the questions: Can God speak
1:04
through AI? Or is AI like God?
1:06
The ethics and surprising
1:09
history of tech in
1:11
religion is something
1:13
we explored in
1:16
a previous episode,
1:18
and we're excited to
1:20
share it with you again
1:22
now, given the times. We
1:24
hope you enjoy it. I
1:27
mean, if you can count sermons
1:29
by Zoom. But in reality, technology
1:31
has been creeping into
1:34
religions for longer than
1:36
you might think. In 2019,
1:38
a humanoid robot called Mindar
1:40
began offering Buddhist wisdom in
1:43
a 400-year-old temple in Japan.
1:45
And a robot named SanTO
1:47
is helping Catholics in Poland pray during the
1:49
pandemic. It is easier for a camel to
1:52
pass through the eye of a needle than
1:54
for someone rich to enter the kingdom of
1:56
God. I know, I know, this all might
1:58
sound a little... sci-fi. But it
2:00
is the year 2022. Tech
2:04
isn't just about robots
2:06
and algorithms that assemble your
2:08
car and tell you
2:10
what groceries you need. Robots
2:12
aren't just about efficiency
2:15
and utility anymore. They're about
2:17
relationships, trust, even emotion.
2:19
So now the question is,
2:21
can we build a
2:23
robot that actually seems alive?
2:33
It's been surprising to me how
2:35
willing and able people are to
2:37
engage them as social entities. And
2:40
I think it's happening on both
2:42
conscious but also subconscious levels. This
2:45
is religion's new frontier. If
2:47
you can believe, trust, or be
2:49
comforted by a robot, can
2:51
you worship with it? Or even
2:53
perhaps worship the technology itself?
2:55
Why can't God give a robot
2:58
a soul? I don't think
3:00
the limits of your imagination are
3:02
necessarily the limits of your
3:04
gods. On this
3:06
episode, we'll talk to MIT
3:08
professor Cynthia Breazeal about social robotics
3:10
and a friendly little robot
3:13
named Nexi. She was probably about
3:15
four feet tall, two eyes,
3:17
eyelids, eyebrows, you know, the whole
3:19
gamut. And we'll ask religion
3:21
professor Robert Geraci what these advances
3:23
mean for faith today. Maybe
3:25
we'll have AI that evokes feelings
3:27
of awe and can raise
3:29
people to a more spiritual kind
3:31
of level. I'm
3:33
Dave DeSteno and this is How God
3:36
Works. Cynthia
3:39
Breazeal is one of the world's
3:41
leading social roboticists. And like me,
3:44
her interest in robots began in
3:46
a galaxy far, far away. I
3:50
have to say I'm a Star
3:52
Wars child. Me too. In
3:58
many ways, it kind
4:00
of crystallized my idea of
4:02
what an autonomous robot
4:04
should be: socially, emotionally intelligent
4:07
entities who have rich personalities, who
4:09
are really there to be more
4:11
allies to us rather than just
4:13
tools that do what we say.
4:16
Of course in Star Wars they
4:18
go off and save the galaxy
4:20
together so like there was a
4:22
real value add to that as well.
4:24
So she started designing robots that
4:26
like R2-D2 and C-3PO,
4:29
appear to have personalities, thoughts, and
4:31
feelings all their own. In fact, if
4:33
you walk into her lab at MIT, you
4:35
kind of feel like you're in a
4:37
Star Wars movie. You're surrounded by
4:39
robots of all kinds. There are cute
4:41
and furry ones that look like teddy
4:44
bears and dragons that she uses to
4:46
help children learn. There are humanoid
4:48
ones with wires hanging out that
4:50
look like a cross between C-3PO
4:52
and Terminator. There are car dashboards with
4:54
little robotic screens that pop up
4:57
to chat with you. Get ready to
4:59
see those in your vehicles in a few
5:01
years. Some of these robots look like
5:03
they could actually be alive, and others
5:05
are very clearly a bunch of gears
5:07
and screens. But Cynthia says that when
5:09
it comes to people treating them like,
5:11
well, a friend, it doesn't actually matter
5:14
that much if they look human.
5:16
They clearly are designed explicitly to
5:18
look like robots, or maybe they're kind
5:20
of playful, like kind of a Sesame
5:23
Street character sort of brought to life
5:25
in a robotic form, but they're clearly
5:27
not human. It's been surprising to
5:29
me how willing and able people
5:31
are to engage them as social
5:34
entities, and I think it's happening
5:36
on both conscious but also subconscious
5:38
levels. You might be thinking,
5:40
not me. I wouldn't respond to
5:43
a robot like that. I would know it was
5:45
just a machine. But did you
5:47
cry when the robot WALL-E looked
5:49
sad in that Pixar movie? Or
5:51
did you laugh when the flying
5:54
carpet in Aladdin wildly gesticulated? Cynthia
5:57
says it's not so different.
6:00
Robots are like mechanical cartoons, you know, in
6:02
many ways I say it really does not
6:04
have to look human at all, but there's
6:06
qualities of movements that make you want to
6:08
interpret its behavior as a social entity. The
6:11
thing that's fascinating to me is even
6:13
the lowest-fidelity mechanical thing seemed to be
6:15
enough that people do want to anthropomorphize
6:17
them. I remember watching it,
6:19
the magic carpet in Aladdin, and thinking,
6:22
it's annoyed. It's happy. It's excited, but
6:24
you know, I think about some of
6:26
the cute robots you've designed to work
6:28
with kids and it looks like they're
6:30
breathing. They have kind of a respiration
6:32
mode in them or the gaze and
6:34
the eyes change. And I think we
6:36
really want to see intent
6:39
in things. And the easier you make that
6:41
with the design, the more it grabs us. You
6:43
know, once you start dealing with something
6:45
like speech, it's like game over. You know,
6:47
it can be just text and that's enough for
6:49
people to treat a computer as a social entity. There's
6:52
an implicit assumption that the more
6:54
human, the better. And
6:56
a lot of my work is
6:58
basically saying, no, not necessarily.
7:00
You know, it's like, what is
7:02
the interaction you're trying to
7:04
achieve? And in a world where
7:07
we've got plenty of people,
7:09
why do we need robots to
7:11
try to pretend they're people?
7:13
Why not design robots to
7:15
be what they are, to
7:17
be great partners and collaborators
7:19
to complement us and our
7:22
human abilities? It's
7:24
not about looking more human. It's
7:26
about acting more human. And
7:28
that's where cues come in. The
7:31
nonverbal things that show what we're thinking
7:33
about and what we intend to do. We
7:36
evolved to be social beings,
7:38
to collaborate, to interact with one
7:40
another. And so when another
7:42
entity exhibits these cues, I think
7:44
our brain at a deep
7:46
level is like, oh, social entity,
7:48
you know, not a thing
7:50
or object that's governed by the
7:52
laws of physics, but some
7:54
kind of being now that is
7:56
governed by the laws of
7:58
having a mind, right, of social
8:00
interaction, of having goals and
8:02
intents. Gaze is a profoundly important
8:04
social cue, right? Is it attending
8:06
to you, is its attention on me? Across all
8:08
species, gaze is extremely salient. So even a camera, a raw
8:10
camera, if it turns just to look at you, people associate
8:12
that with gaze. You know, is it a friendly thing or
8:14
is it potentially a threat, right? Okay, I understand
8:17
if some people might be a
8:19
little skeptical about this.
8:21
How could people really think a
8:23
robot has feelings? That it
8:25
actually wants to be nice or mean?
8:28
Well... It's not a futuristic
8:30
thing. We have the technology
8:32
right now to do this, to
8:34
build robots that can make us
8:36
trust them, or make us think they're
8:39
shifty. And that doesn't mean
8:41
people are suckers or
8:44
gullible. Responding this way to
8:46
a robot is actually what
8:48
makes us human. Don't believe
8:50
me? Well, let me tell
8:52
you about an experiment Cynthia
8:54
and I did using a
8:56
robot called Nexi. So Nexi kind
8:59
of had very much an anthropomorphic,
9:01
more human-like face. Two eyes, eyelids,
9:03
eyebrows, a jaw that could move
9:05
in ways that could suggest different
9:07
kind of jaw, mouth movements to
9:09
express. She's smaller than a person. So
9:11
like she was probably about, I think, maybe
9:14
four feet tall, kind of from floor to
9:16
top of the head. So it's about the
9:18
same height that, if you're going to sit
9:20
in a chair, Nexi could be making
9:22
like eye-to-eye contact with you. She had two
9:25
arms, fingers, you know, the whole gamut.
9:27
So it was essentially like an upper
9:29
torso humanoid robot, but it was on
9:32
a wheeled base. Okay, I have to admit,
9:34
Nexi was the coolest robot I had
9:36
ever seen at the time. I mean, I'm
9:38
a total sci-fi geek. But that wasn't
9:41
the only reason I wanted to work
9:43
with Cynthia. I actually wanted to
9:45
use this robot to learn something
9:47
about humans. The goal of the
9:49
experiment Cynthia and I designed was to
9:52
see if people would unconsciously react to
9:54
the robot's nonverbal cues. You know, things
9:56
like fidgeting, touching its face, opening its
9:58
arms in a welcoming gesture or crossing them
10:01
in a blocking one, in the
10:03
same way that we respond to
10:05
a human's. I can communicate in
10:07
many of the ways that
10:09
people do. I can tell you that
10:11
I'm sad, mad, confused, excited,
10:14
or even bored just by
10:16
moving my face. But I hope you
10:18
can see that I am very happy
10:20
to have met you. Thank you for
10:23
visiting me, and I hope to see
10:25
you again soon. Before
10:29
we ever brought Nexi into the picture,
10:31
we had filmed many pairs of people
10:34
talking with each other, and then playing
10:36
a game where they could either cooperate
10:38
with their partner and make a little
10:40
money, or cheat their partner and make
10:42
a lot. To make a long story
10:45
short, we found that certain cues help
10:47
people predict their partner's plans to be
10:49
fair or to cheat. That's when Nexi
10:51
came in. We wanted to see if
10:53
people would respond to the same cues if
10:55
they came from a robot. Of
10:57
course, this was 15 years ago,
10:59
and the technology at the time
11:02
wasn't up to letting Nexi run
11:04
all on her own. So when
11:06
people were chatting with her before
11:08
playing the trust game, Nexi's brain
11:11
was really being run by three,
11:13
well, wizards. Not the Gandalf
11:15
type. Think Wizard of Oz. It's
11:17
really being puppeted by a human
11:19
behind the scenes. So in the
11:21
case of Nexi, we... we actually
11:24
had a team of graduate students.
11:26
One of the graduate students was
11:28
kind of controlling Nexi's gestures and
11:30
the other graduate student was literally
11:32
having the conversation. So you could
11:35
feel like you're actually having this
11:37
kind of chit-chat interaction with this
11:39
robot. Yeah, the illusion was amazing.
11:41
And the thing that was fascinating to
11:44
me, I don't remember if I ever told you
11:46
this, but during the conversation, where people were
11:48
kind of doing the get-to-know-you chat,
11:50
they started asking Nexi really
11:52
interesting questions,
11:55
like, Nexi, what is the meaning
11:57
of life? Nexi, is there a
11:59
God? It was very, it was
12:01
very fascinating, because even at that
12:03
point, people were willing to ask
12:06
the robot in a way that
12:08
they were actually not, they weren't
12:10
trying to mess with it. They
12:12
were actually seeking those kinds of
12:14
existential questions from it. It was, it
12:16
was fascinating to me. The first few
12:18
minutes, people were like, holy cow, I'm
12:21
talking with a robot. But they really
12:23
did quickly feel comfortable enough
12:25
to offer all kinds of
12:28
self-disclosures: how they spent their summer
12:30
vacation, what their life goals were,
12:32
you know, the kind of stuff
12:34
you tell someone you're comfortable
12:36
chatting with. But the main point
12:39
wasn't what Nexi was saying, it was
12:41
what she was doing. So during
12:43
the chit-chat we had Nexi show one
12:45
of the two sets of cues, a
12:47
sneaky set or a more neutral one.
12:49
The sneaky cues were the ones we
12:51
found people showed when they were going
12:54
to cheat you. They'd fidget with
12:56
their hands, repeatedly touch their faces,
12:58
and cross their arms to kind
13:00
of signal a social distance
13:02
or block. The neutral ones were
13:05
more innocuous, like raising an arm
13:07
or tilting the head. And then it was
13:09
time to put some money on the line.
13:11
We had people play a game where they
13:13
had to decide if they thought Nexi
13:16
was going to cheat and try and
13:18
take money from them. What we found
13:20
was nothing short of startling.
13:22
People who saw Nexi give the
13:24
untrustworthy cues reported believing that the robot
13:27
was going to cheat them in the
13:29
game. And so, they tried to cheat
13:31
her back when it was their turn.
13:33
Those who saw the other cues thought
13:36
Nexi was great and that they could
13:38
count on her. They had no idea why.
13:40
They just said they had a gut
13:42
feeling. The real reason, though, was that
13:44
we had cracked the mind's unconscious
13:46
code for reading whether someone cares
13:49
about you or not. And best
13:51
of all... We showed we could
13:53
use it perfectly in a
13:55
robot. We learned so much from
13:58
that project. And one
14:00
of the things that really fascinated
14:03
me was, we used Nexi
14:05
as a scientific instrument, right? So
14:07
it was acknowledging that it's
14:09
literally impossible to ask a
14:11
human to control their nonverbal
14:13
cues with such precision, but
14:16
with the robot, you can
14:18
very precisely control these cues.
14:20
And that's great, right? You can
14:22
make sure that if you're designing
14:24
a robotic partner, it can make
14:27
people feel comfortable. Like... They
14:29
can rely on it. Tell it
14:31
anything. Trust it. Well, maybe.
14:33
But it depends on if
14:36
that trust is well placed. Or
14:38
put another way, it depends
14:40
on who designed that robot,
14:43
and for what purpose. Here's
14:45
where tech's role in religion
14:48
comes into question.
14:50
If we can make a robot
14:52
people will trust, will
14:54
believe. That raises a host
14:56
of questions when we're talking
14:58
about things like morality, sin,
15:01
and existential concerns. I
15:03
think when you're talking about
15:05
something that's so deeply human,
15:07
like spirituality, I think
15:10
you really do have to think
15:12
about what's the appropriate role of
15:14
a technology in that scenario?
15:16
Just because we can make
15:18
a technology seem more trustworthy,
15:20
more wise, more caring, does that
15:22
mean we should? For
15:26
thousands of years people have
15:28
looked to priests, rabbis, and
15:30
imams for spiritual guidance.
15:33
Is it okay for robots or
15:35
other tech to fill those roles?
15:37
Or put another way, can tech
15:39
have a spark of the spiritual
15:41
in it? I
15:53
think people finding ways to see
15:55
magic and enchantment and wonder in
15:57
the world is a good thing.
16:00
That's Robert Geraci, a professor of
16:02
religion at Manhattan College, who has
16:04
spent a good deal of his career exploring
16:06
how technology and religion interact. And
16:09
one thing he likes to point out
16:11
is that the question of religion using technology
16:13
is nothing new. There's
16:15
a long history, for example, in Christianity
16:17
of believing technology was kind of God's
16:19
way of making the world better that
16:21
goes back at least as far as
16:23
Francis Bacon. Bacon
16:26
was a 17th-century philosopher and
16:28
devout Christian who, as Robert says,
16:30
believed God gave us the smarts
16:32
to develop technology as a gift
16:34
to improve the world. And
16:36
you even have people in
16:38
early 20th century Russia arguing
16:40
that the resurrection promised by
16:42
Jesus was a resurrection that
16:44
human beings were actively supposed
16:46
to create through technology. So
16:48
there are people within religious
16:50
communities who have had that
16:52
kind of commitment to technology. So
16:55
when it comes to religion, Robert, like
16:58
Cynthia, thinks the main question for
17:00
tech isn't if it will be used, but
17:03
why it should be used, either
17:05
in current religions or to
17:07
develop entirely new ones. So
17:09
what our motivation is for
17:12
building these kinds of new
17:14
religions really matters to me
17:16
because religion engages contemporary
17:18
technology. You can
17:20
imagine how new architectural and engineering
17:22
techniques produce new kinds of
17:24
church experiences for medieval Europeans, right?
17:27
So there's kind of nothing
17:29
new about thinking about how does
17:31
this technology change the way
17:33
I'm going to do religion and
17:35
seeing divine activity through AI
17:37
or robotics seems like that could
17:40
be reasonable to me. That's
17:43
true. You know, I never thought
17:45
about that, but when you walk into
17:47
medieval churches and you have those
17:49
beautiful high arches, they evoke this feeling
17:51
of awe in you. And as
17:53
a psychologist who studies emotion,
17:55
we know how those feelings of
17:57
awe affect people and can actually lead
17:59
to spiritual experiences. So you're right,
18:01
that in and of itself
18:03
was a technology to kind
18:05
of ping parts of our minds. Yeah,
18:07
and maybe we'll have AI that evokes
18:10
similar feelings of awe and can raise
18:12
people to a more spiritual
18:14
kind of level. Robots that
18:16
talk to us are one thing, but
18:19
an artificial intelligence that
18:21
could truly think would be another.
18:23
You could ask it anything, and it
18:25
would know. It would seem wise, all
18:27
knowing. Perhaps even godlike?
18:29
Worthy of following? Or maybe
18:32
even worshipping? This might seem a
18:34
bit strange at first. An idea
18:36
that only a modern tech guru
18:38
might come up with. But believe
18:40
it or not, the idea has a basis
18:42
in some traditional faiths. You
18:44
might be familiar with the idea
18:47
of an avatar from that
18:49
James Cameron flick, where it
18:51
referred to a remote-controlled, genetically
18:54
engineered body sent to the
18:56
people of Pandora. Many
18:58
religions have the idea
19:00
of avatars too. Physical forms that
19:03
gods use when coming to
19:05
earth to talk to us. In
19:07
Hinduism or Buddhism, they're pretty
19:09
common. So at a time like
19:12
ours, where AIs are about to
19:14
become sentient or self-aware, could
19:16
a God inhabit a computer
19:19
or manifest in the internet?
19:21
Yeah, there's so much there. I've
19:23
already seen... in some of my
19:25
research in India, people referring to
19:27
the 10th avatar of Vishnu, Kalki,
19:29
coming as an AI. So for
19:31
those who don't know much about
19:33
Hinduism, Vishnu is the preserver god
19:35
and he has shown up on
19:37
earth so far in nine different
19:39
forms and he's supposed to come
19:41
a tenth time as this god
19:43
Kalki to end the world and
19:45
begin the new world to restore
19:47
the world to its original state
19:49
of grace right because that's a
19:51
cyclical process in Hinduism. Once I was
19:53
giving a talk at a design institute
19:56
and I had a student who said
19:58
to me, what is God today?
20:00
It could be AI.
20:02
Couldn't Kalki come as
20:04
AI? Wouldn't that make sense?
20:06
And I've had other interview conversation partners
20:08
who have said, hey, there are lots
20:10
of people who think that maybe
20:12
Kalki will come as AI because if
20:14
AI gets really, really, really powerful
20:16
and has lots of knowledge and ability
20:18
to make things manifest in the
20:20
world, then it kind of makes sense
20:22
that the God would come that
20:24
way. I haven't heard anyone personally –
20:26
I've never had anyone suggest to
20:28
me that the return of Jesus might
20:30
come in a robotic form, but I
20:32
honestly don't suppose it'll be too
20:34
much longer before someone does. In Buddhism,
20:37
you have this all-pervading presence of
20:39
Buddha, and sometimes that involves integrating
20:41
technological objects like funeral practices for
20:43
dolls or printing blocks. 40 years
20:45
ago, the roboticist in Japan, Masahiro
20:47
Mori, suggested that a robot
20:49
could become a Buddha. It's like
20:52
a one-line, almost a throwaway,
20:54
in his book, just this one line:
20:56
Robots have Buddha nature because everything
20:58
has Buddha nature, so someday a
21:00
robot could be a Buddha. And
21:02
in 40 years, nobody has come
21:04
out to say, absolutely not,
21:06
that's unreasonable. Having Buddha
21:08
nature is kind of what it sounds like.
21:11
It means having the seed of Buddha's wisdom
21:13
within you, a seed that can
21:15
grow with learning and practice, and that
21:17
when fully matured will clear all
21:20
illusions from your mind and lead to
21:22
enlightenment. That's kind of
21:24
the idea behind Mindar, the Buddhist robot in
21:26
Japan. It's a robotic
21:28
torso covered in silicone meant to look
21:30
and move like a real person. And
21:32
unlike Nexi, the wisdom
21:34
it offers comes from its Buddhist-influenced
21:37
algorithms. It can live
21:39
forever, and so it can
21:41
keep becoming wiser. But
21:43
what do most people really think about this?
21:46
Do they think it's great or
21:48
creepy? I want to
21:50
start with biotechnology. If you go back to
21:52
the 1970s when in vitro fertilization was
21:54
being created, Leon Kass, the bioethicist, said this
21:56
was a terrible idea. It was going
21:58
to do all these terrible things and he
22:01
knew it was a terrible idea because
22:03
when you told people we're going to
22:05
fertilize eggs functionally in a test tube
22:07
(which isn't really how
22:09
it's done, but we were going to do
22:11
it that way), people were going, oh, that's gross
22:13
and unnatural. And so he
22:15
coined this phrase wisdom of repugnance
22:17
and the common term was the
22:19
yuck factor and he said anything
22:21
we feel yucky about, we just
22:23
shouldn't do. Now the
22:25
reality is in vitro fertilization is
22:27
now a technology that's enormously comfortable
22:29
to people, including Leon Kass, and
22:31
so we get really used to
22:33
things. So I've been
22:35
asking my students about this for many,
22:38
many years now for the decades I've
22:40
been teaching and I
22:42
started asking them when I first taught a
22:44
religion and science class, let's
22:46
say you have a domestic robot that
22:48
helps wash and keep things working around
22:50
your house and it can talk with
22:52
you, it converses with you and one
22:54
day it says, may I come to church with you?
22:59
Will you let it come to
23:01
church with you? And 20
23:03
years ago I would get maybe
23:06
one out of 25 students,
23:08
two out of 25 students kind
23:10
of hesitantly going maybe and
23:12
all the others were kind of
23:14
uniformly no. And when
23:16
I ask them now, and it hasn't
23:18
been that long, but when I ask
23:20
them now as I continue to every
23:22
single year, now about half of a
23:24
class will just say, sure, why not?
23:26
And so there's been a massive change
23:28
at least among young people attending
23:30
a small Catholic liberal arts college
23:32
in the Bronx, right? So I
23:34
mean, that's my sample set for
23:36
that particular question is my students
23:38
at Manhattan College, but I've seen
23:40
really, really significant changes and I
23:42
think what has happened is that
23:44
people are used to their technologies
23:47
in a way they weren't. It's
23:50
true. Sometimes now we
23:52
light candles in church by pushing a
23:54
button that turns on an LED flame. We
23:57
have robotic pets. There are
23:59
even smart refrigerators that know when we're
24:01
running low on groceries and automatically add
24:03
them to an Instacart account to
24:05
remind us to go to the store. It's
24:08
going to be all those little baby steps
24:10
right? I mean, already people name robot vacuum
24:12
cleaners. That's right. And it was well known
24:14
when the Roomba vacuum cleaner
24:16
was kind of first made that people were
24:18
mailing them back broken and saying, don't send
24:20
me a new one, you have to fix
24:23
this one. Like, this one's a part of
24:25
my family, you have to fix it. You
24:27
have to fix it. It's not
24:29
clear if Roomba honored those requests,
24:31
or if like some desperate parents
24:33
who dropped a new goldfish in
24:35
the fish tank to replace a
24:38
belly-up one before their kids got
24:40
home, they just hoped no one
24:42
would notice a swap. But either
24:44
way, people wanted their robot. After
24:46
all, it usually had a name.
24:49
People will name objects, they
24:51
form relationships with objects, even
24:53
objects that are terribly hard
24:55
to form relationships with, like
24:57
a frisbee-shaped robot vacuum. So
24:59
by the time it gets
25:01
much more able to engage
25:03
with us, then it will be a
25:05
natural progression for people to start
25:07
seeing it. And I think when
25:09
I think of my students 20
25:12
years ago saying absolutely not, a
25:14
robot can't come to church with
25:16
me, they didn't have interactions with...
25:18
computers that were
25:20
sufficiently sophisticated, that they could
25:22
see that as a plausible kind
25:24
of question. Whereas now my students
25:27
are talking to their phones, they
25:29
might have something like an Alexa
25:31
spying on them at home. You
25:33
know, they've got technologies that they're
25:35
talking to and that sometimes talk
25:37
back to them, and so they
25:39
have a whole lot more comfort
25:42
and familiarity with that conceptually.
25:44
And one little bit at a time, right?
25:47
But what if robots get too good?
25:50
From an engineering and design
25:52
perspective, Cynthia says it is
25:54
possible for AI to actually
25:56
be smarter than humans. But
25:58
what does that mean for... religion
26:00
and really for all of humanity? I
26:03
think this is kind of this
26:05
provocative story about AI right now.
26:07
Let's call it narrow AI, right?
26:09
So once you kind of understand
26:11
what the thing is to optimize,
26:13
you can run these algorithms to
26:16
do that potentially in a way
26:18
that far exceeds what human capacity
26:20
is and to outperform people, right?
26:22
I'm wondering what you think about
26:24
that, because in the right hands,
26:26
those can be amazing tools, right?
26:28
I know you design robots to
26:30
keep kids comfortable in the hospital
26:32
and make them feel at ease,
26:35
but you can build this sense
26:37
of trust and empathy and camaraderie
26:39
with a robot by doing these things.
26:41
But if you're a marketer or
26:43
you have some other intent of
26:46
trying to convince people of
26:48
something, doesn't that also give
26:50
you tremendous power? Here
26:55
I am at the Media Lab building
26:58
these robots that through social interaction as
27:00
you're saying, yes, I mean, we're talking
27:02
about helping someone learn or helping someone
27:04
adhere to a diet and exercise program
27:06
or to be more emotionally resilient. I
27:09
mean, these are AI systems that are
27:11
shaping our thoughts, our beliefs, our behaviors,
27:13
I mean, all of these things. Now
27:15
we're doing it to try to empower
27:17
people to make better decisions for
27:19
themselves, but there's a whole other side
27:22
to that coin, right, where it could
27:24
be used to manipulate. Systems are
27:26
capable of persuading you in potentially
27:28
really profound ways. And you have
27:30
to ask the question, so who's
27:33
persuading me for what reason? And
27:35
of course as human beings, we
27:37
try to persuade each other all
27:39
the time. It's just part of
27:42
social interaction. But now you're
27:44
talking about an entity where
27:46
there's layers of that, who do you
27:48
trust? That's a good question. When
27:50
it comes to asking Siri or Alexa what's
27:53
on the shopping list, or what the
27:55
weather's going to be tomorrow, the stakes
27:57
aren't that high. But when it comes to
27:59
religion... Questions about
28:01
morality, about meaning. The stakes
28:04
can get pretty big, pretty fast.
28:06
The thought of a robot becoming
28:08
a charismatic leader, like a
28:11
politician or a Pope, is kind of
28:13
chilling. I think at this present
28:15
stage here in the early 21st
28:17
century, there are lots of people
28:19
watching these technologies develop and feeling
28:21
fear. Right, but we human beings,
28:23
part of the existential state of
28:25
humanity is the terror of history,
28:28
right? Like, we're always scared of
28:30
the things we did in the
28:32
past and the things we might
28:34
do in the future. Those warnings
28:36
we get from pop culture that
28:38
say, we're going the wrong way,
28:40
our technologies are going to... overmaster
28:42
us and destroy us, that's really
28:44
a fear about us. That's not
28:47
really a fear about our machines.
28:49
That's the recognition that we're not
28:51
always doing right in the world
28:53
and that our obligation is to
28:55
do right. That's a critical point.
28:57
Yes, someday soon, AIs might have
28:59
an almost god-like intelligence, and
29:02
given the number of cameras
29:04
we have around, a god-like
29:06
omniscience. But it's we who are
29:08
creating those AIs. It's we who, in
29:10
a sense, are their gods, at least
29:12
right now. And so if we want
29:15
our creations to be good,
29:17
to be virtuous, we have to teach
29:19
them how. We have to imbue them
29:21
with a moral sense. If we
29:24
don't, if we rely on letting
29:26
them learn just by conversing
29:28
with people on the web,
29:30
they're going to end up
29:33
like Microsoft's infamous AI chatbot
29:35
Tay, which within days turned
29:37
into a misogynistic racist. And
29:39
so, in a sense, robots need religion
29:41
or rules of ethics as much as
29:44
the rest of us, which is why
29:46
we scientists and designers and academics
29:48
are all putting our heads
29:50
together. There's room for philosophical
29:52
ethics, there's room for
29:55
bioethicists, for religious ethicists to
29:57
come together and talk about...
30:00
What ought we be doing around this,
30:02
and how do we fix the problems?
30:04
And then if the folks in
30:06
the industry say, yeah, we want
30:08
to draw on that. We want
30:11
to build machines that are
30:13
actually actively helping, then I think
30:15
we'll get there. But as with any
30:17
religion, you have to be
30:19
careful. We have to be clear and
30:22
honest that religions are not
30:24
always good for people. Sometimes
30:26
people do terrible things in the
30:28
name of their religions or even
30:31
provoked by their religions. But people
30:33
also do beautiful, wonderful, amazing things
30:35
in the name of their religions
30:38
and provoked by their religions. And
30:40
so an honest appraisal might say,
30:42
okay, if I'm looking at, say,
30:45
in Hinduism, the dharma, which
30:47
is usually translated as duty, if I'm
30:49
looking at duty in Hinduism and thinking,
30:51
what does that mean for... human to
30:53
human and human to machine relationships and
30:56
how do I want to design AI?
30:58
I don't want to take dharma and
31:00
go, okay, that means that some people
31:02
are born with particular duties that they're
31:05
stuck with forever. That's a terrible use
31:07
of it, right? But I do want
31:09
to think about what is our sense
31:11
of mutual obligation. So what are my
31:14
duties that I owe to others, my duties
31:16
to the environment, and if we draw
31:18
on our religious traditions and practices
31:20
to do that, that would be
31:22
a wonderful thing. And so I
31:24
do think there's a lot of
31:26
space for conversation in
31:28
the AI ethics field that it's not
31:30
just about... philosophical positions on ethics, and
31:33
of course there's a place for that
31:35
too, but also religious ethics and religious
31:37
actions. What is a good Christian, a
31:39
good Jew, a good Hindu? What ought
31:42
they to be doing in the
31:44
world? And once we look at what
31:46
they ought to be doing in the
31:48
world, we're going to start looking at
31:50
kind of overlaps that produce probably a
31:53
good ethos for what we want our
31:55
machines to do in the world also. So
31:57
in some ways, it's not just us
31:59
learning to worship an AI
32:01
or a robot, it
32:03
is spiritually developing that AI itself,
32:05
giving it that spiritual, in
32:07
some sense, principled
32:10
grounding so that,
32:12
as it grows and develops, it's
32:14
going to have the right
32:16
kind of telos, or purpose, of what
32:18
it should do. Yeah, no matter how
32:20
complex it ever gets, whether it gets
32:22
to human equivalence or greater, if we're
32:25
building that in, if we're leveraging those
32:27
religious values to benefit the most marginalized
32:29
to create a better world around us,
32:31
if we can do that, we will
32:33
have served the future well, regardless
32:35
of how good the robots get. So Robert, as you
32:37
think about maybe 20 years, 30 years down the
32:39
line from where we are now, and I know it's
32:41
all speculation and we're not sure how it's going
32:43
to come out, but what ways
32:46
do you see maybe
32:48
technology, AI, robotics influencing the
32:50
average person's worship? So
32:53
my hope is that people would find
32:55
that the technology, that you get these
32:57
initial hype waves, right, and everybody jumps on
32:59
the technology, and then they come to
33:01
kind of realize maybe I don't want to
33:04
be on Facebook all the time or
33:06
whatever. And so we get these up and
33:08
down kind of waves of use and
33:10
it may be that there are
33:12
better and worse times for
33:14
people to engage in something
33:16
like a robot at home
33:18
that could get quite interactive. I
33:20
mean, you could imagine it getting
33:22
quite sociable in that fashion,
33:24
but you could also imagine it suggesting
33:26
you go to church with other
33:28
people once in a while. And
33:30
that might be a good kind
33:32
of programming element in all of
33:34
this. If it's coming
33:36
from a spirit of genuine religiosity,
33:38
maybe the robots, the apps, whatever
33:41
could say, hey, you're doing great.
33:45
At heart, much of religion is about
33:48
connection, a caring
33:50
connection with the divine, but also
33:52
with our fellow humans. And
33:54
it's here that AI-driven
33:56
robots could be a big
33:58
help. They can be a
34:00
spiritual companion or even guide
34:02
for people who can't go to church or temple because
34:04
of being homebound, or in areas where there's a shortage
34:06
of clergy. Or, like an advanced Mindar,
34:09
they can offer inspiration to
34:11
whole communities of worshippers that
34:13
come together to see them. In some
34:16
ways, they might even prove to be
34:18
superior preachers, less vulnerable to motives
34:20
for power or bias than human
34:22
religious leaders can be. In the
34:25
best world, they'll help us
34:27
grow, spiritually and otherwise. If,
34:29
and here's the big if, we design
34:31
them to be good partners. To
34:33
me, that's really the question of,
34:35
it's the partnership. That's what I
34:38
care about, and how do you need
34:40
to design these entities that are engaging
34:42
and natural for people to interact
34:45
with so that they can empower
34:47
us and complement us? What do we
34:49
as people need that's going to
34:51
help us be who we aspire to be
34:53
to live in a society we want to
34:55
live in? How
35:01
God Works is hosted by me, Dave
35:04
DeSteno. This episode was written by
35:06
Josie Holtzman and me. Our senior producer
35:08
is Josie Holtzman. Our producer
35:10
is Sophie Eisenberg. Our associate
35:12
producer is Emmanuel Desarmé. Executive
35:15
producer is Genevieve Sponsler. Merritt
35:17
Jacob is our mix engineer
35:19
and composed our theme, which
35:21
was arranged by Chloe DeSteno.
35:23
The executive producer of PRX
35:26
Productions is Jocelyn Gonzales. This
35:28
podcast was also made possible
35:30
with support from the John
35:32
Templeton Foundation. To learn more
35:34
about the show and access
35:37
episode transcripts, you can find our
35:39
website at How God Works, all one
35:41
word, dot org. And for news and
35:43
peeks at what's coming, feel free to
35:45
follow us on Instagram at How God
35:47
Works pod, or me on X or
35:50
Bluesky at David DeSteno.