Episode Transcript
0:00
Happy Monday! I'm Michael Kovnat, host of The Next Big Idea spinoff, The Next Big Idea Daily. Every weekday, I invite leading nonfiction authors to share the big ideas from their books about psychology, business, relationships, and just about everything else. It's a mini master class you can listen to in the time it takes to walk your dog. You're about to hear the episode we published this morning. To get the rest of this week's episodes, follow The Next Big Idea Daily wherever you get your podcasts. LinkedIn Presents.

0:30
Good morning. I'm your host, Michael Kovnat, and this is The Next Big Idea Daily. Now, lately there's been a lot of debate about artificial intelligence. Is it going to destroy humanity or save it? Is it really capable of thinking? Of consciousness? But perhaps the most provocative question we can ask is: does AI have rights? What responsibilities do we have to these increasingly sophisticated systems? These questions might sound futuristic, but Jeff Sebo, a leading thinker in environmental ethics, argues we've been avoiding them for too long, not just for AI, but for animals, plants, and even entire ecosystems. In his new book, The Moral Circle: Who Matters, What Matters, and Why, Jeff makes the case that we need to expand our ethical concern beyond humans. If there's any chance that AI or a fish or an octopus or even a fungus has sentience, agency, or moral significance, then we should act as if they do, because the way we treat them today will shape the kind of world we live in tomorrow. Here's Jeff to share five big ideas from his book, right after this quick break.
1:52
If you're interested in the story behind the business headlines, check out Big Technology Podcast, my weekly show that features in-depth interviews with CEOs, researchers, and reformers in business and technology. Hi, I'm Alex Kantrowitz. I'm a longtime journalist, CNBC contributor, and the host of the show. I empty my Rolodex every Wednesday to bring you awesome episodes, so go check out Big Technology Podcast. It's available on all podcast apps. I'd love to have you as a listener.
2:26
Hi, my name is Jeff Sebo, and I direct the Center for Environmental and Animal Protection and the Center for Mind, Ethics, and Policy at New York University. And I'm here today to tell you about my book, The Moral Circle: Who Matters, What Matters, and Why. We share the world with a vast number and wide range of non-humans. That includes other animals, vertebrates and invertebrates. It includes other living beings, plants and fungi. And in all these cases, we have disagreement and uncertainty about which beings matter and what we owe them. So how can we make decisions together about how to interact with such a vast number and wide range of non-humans, despite substantial ongoing disagreement and uncertainty about these ethical and scientific issues? That is the question this book addresses. So here, we can focus on five key ideas that come out in the book, starting with idea number one: if you might matter, we should assume that you do.
3:28
In ethics, we have a lot of disagreement and uncertainty about what it takes to matter. Some people think you need to be sentient, able to consciously experience pleasure and pain. Other people think you need to be agentic, able to set and pursue your own goals based on your own beliefs and desires. Other people think you need to be alive, able to perform basic life functions associated with survival and reproduction. Now, in science, we also have disagreement and uncertainty about which beings have these features. With other mammals and birds, we can be confident, based on the information currently available, that they are not only alive but also sentient and agentic and morally significant. They can feel, they can think, and they matter for their own sakes. But what about the other vertebrates: reptiles, amphibians, fishes? What about invertebrates with more distributed cognitive systems? What about plants and fungi with radically different kinds of cognitive systems? What about chatbots and robots that are made out of silicon instead of being made out of carbon? In these cases, we might genuinely be uncertain about whether it feels like anything to be them, perhaps for a long time. Well, I argue that we should not wait for certainty before taking basic steps to treat these non-humans well. If there is at least a realistic, non-negligible chance that they matter for their own sakes, based on the best information and arguments currently available, then we should take reasonable, proportionate steps to consider and mitigate the risks that our actions and policies might be imposing on them.
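As a rough illustration of this precautionary principle, here is a minimal Python sketch. It is not from the book; the probability floor and the idea of scaling mitigation effort to probability times stakes are illustrative assumptions.

# Toy model of the "realistic, non-negligible chance" principle (assumed
# numbers throughout): below a small probability floor the principle is
# silent; above it, mitigation effort scales with expected stakes.

def proportionate_effort(p_matters: float, stakes: float,
                         floor: float = 0.001) -> float:
    """Expected stakes as a rough guide to how much mitigation is due.

    p_matters: estimated probability the being matters for its own sake.
    stakes: rough magnitude of the harm our actions might impose on it.
    """
    if p_matters < floor:
        return 0.0                # negligible chance: no duty triggered
    return p_matters * stakes     # effort scales with probability x stakes

# Example: even a 5% chance of sentience, combined with large-scale harm,
# yields substantial expected stakes, warranting proportionate steps.
print(proportionate_effort(p_matters=0.05, stakes=1_000_000))  # 50000.0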
5:05
Idea number two: many beings might matter. How can we tell which non-humans are sentient or agentic or otherwise morally significant? Well, when proof and certainty are unavailable, we can at least collect evidence and estimate probabilities. And in particular, we can use a marker or indicator method to assess non-humans for behavioral or anatomical evidence associated with capacities like sentience and agency. With animals, we can use behavioral tests. We can ask, for example: Do they nurse their own wounds? Do they respond to analgesics and antidepressants in the same ways as us? Do they make behavioral tradeoffs between the avoidance of pain and the pursuit of other valuable goals? To the extent that the answer is yes, it increases the probability of moral significance.
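As a rough illustration of how a marker method can aggregate evidence, here is a Python sketch; the marker names and weights are hypothetical, not taken from the book.

# Toy marker method: score a species by the sentience markers it shows.
# Marker names and weights are assumptions for illustration only.

SENTIENCE_MARKERS = {
    "nurses_own_wounds": 0.2,
    "responds_to_analgesics": 0.3,
    "makes_pain_tradeoffs": 0.3,
    "integrated_nociceptors": 0.2,
}

def marker_score(observed):
    """Sum the weights of observed markers. Higher means stronger
    evidence; this is an evidence score, not a literal probability."""
    return sum(w for m, w in SENTIENCE_MARKERS.items() if m in observed)

# Example: an octopus-like profile showing three of the four markers.
print(marker_score({"nurses_own_wounds",
                    "responds_to_analgesics",
                    "makes_pain_tradeoffs"}))  # 0.8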
5:51
And when we ask these questions about animals, the answer is often yes. In fact, at this point, many experts in many fields are prepared to say that there is at least a realistic, non-negligible chance of moral significance in all vertebrates (mammals, birds, reptiles, amphibians, and fishes) and many invertebrates: cephalopod mollusks like octopuses, decapod crustaceans like lobsters, even insects like ants and bees. Now, with AI systems, we might not be able to trust behavioral evidence in the same kinds of ways at present, but we can look past potentially misleading behaviors at underlying architectures, and we can ask: Do AI systems have computational features that we associate with capacities like sentience and agency? For example, do they have their own forms of perception, attention, learning, memory, self-awareness, social awareness, language, and reason? To the extent that the answer is yes, that once again increases the probability of moral significance. And while current AI systems might not have many of these capacities at all, we can expect that near-future AI systems will have advanced and integrated versions of many of these capacities; they will just happen to be built out of silicon instead of carbon. And so I argue that we should give at least minimal moral consideration to all vertebrates, many invertebrates, and many near-future AI systems, in the spirit of caution and humility, based on the best information and arguments currently available.
7:24
Idea number three: if we might be affecting you, we should assume that we are. In addition to having disagreement and uncertainty about which beings matter, we also have disagreement and uncertainty about what we owe everyone who matters. In ethics, we still debate whether we have a general responsibility to help each other. Some people think that we should: if I can prevent something very bad from happening without sacrificing anything comparably significant, then I should help. Other people think, no, we do not have a general responsibility to help others: yes, I should consider the risks that I might be imposing on others, and I should reduce and repair the harms that I cause to others, but beyond that, helping is optional, not required. And then in science, we often have disagreement and uncertainty about whether our actions and policies are in fact imposing risks and harms on vulnerable others. Suppose you dump a bunch of toxic, untreated waste in a lake, and then suppose you walk by that lake the next day, and you see a rabbit drowning in the middle of the lake. Did you play a role in this predicament? Well, you might not have directly imperiled the rabbit. You might not have picked her up and plopped her in the middle of the lake. But you might have indirectly imperiled her.
8:45
Your toxic, untreated waste might have played a role in her getting stuck in the middle of the lake. So I argue that we should cultivate caution and humility in the face of disagreement and uncertainty about these ethical and scientific issues as well. In this case, for example, you should help the rabbit, either because you might have a responsibility to help others where possible, or at least because your own actions might have indirectly imperiled this rabbit, and so helping her is a way of reducing and repairing the harms that you are personally causing in the world. But if we do have these responsibilities, then we have to ask: how often do they arise? How often are we in a position of, if not helping, then at least reducing and repairing the harms that we cause to vulnerable others?
9:39
Idea number four: we might be affecting many beings. We now live in the Anthropocene, a geological epoch in which humanity is a dominant influence on the planet. We are now affecting non-humans all over the world, whether we like it or not, both directly and indirectly, both individually and collectively. Consider industrial animal agriculture. This food system kills hundreds of billions of captive vertebrates and trillions of captive invertebrates for food every year, to say nothing of all the wild animals killed for food every year, and to say nothing of all the animals killed for other purposes every year. And this food system also very significantly increases global health and environmental threats, including threats associated with the spread of diseases, antimicrobial resistance, pollution, biodiversity loss, and human-caused climate change. And then when these threats occur, they imperil humans and non-humans alike, both directly and indirectly. They imperil us directly by exposing us to diseases and fires and floods, and they imperil us indirectly by amplifying ordinary threats that we already face, like threats associated with hunger, thirst, illness, and injury. For animals, they amplify threats associated with human violence and neglect. And in the future, we can expect similar dynamics to occur with emerging technologies, like the development and deployment of advanced AI systems. If and when AI systems are sentient, agentic, or otherwise morally significant, we could be using them at even greater scales than we do with non-human animals right now.
11:06
And then when we do that, we could also be creating and amplifying a wide range of threats. We could lose control of AI, and AI could harm us. We could retain control of AI, and we could use AI to harm each other. And AI might amplify ordinary threats that we already face, like threats associated with bias and disinformation, and even graver threats in the future. In all of these cases, I argue that we have a responsibility to consider all affected stakeholders equitably when making decisions about our effects in the world. That includes humans, animals, and eventually, AI systems.
11:45
Idea number five: we should question human exceptionalism, the presumption that humanity always matters most and takes priority. If such a vast number and wide range of non-humans might matter, and if our actions and policies might be affecting them at global scales, then we owe them a lot.
12:01
Now, many humans assume that we should nevertheless prioritize fellow humans because we have higher capacities for welfare. I can suffer more than a mouse, for example. But first of all, we might not always have higher capacities for welfare: I might not be able to suffer more than an elephant or a whale, or, in the future, a very sophisticated AI system. And second of all, even if we have higher capacities for welfare than non-humans individually, we might not have higher capacities for welfare than them in the aggregate, because the non-human population is, and will be, much larger and much more diverse than the human population. They have more at stake than we do overall, even if we have more at stake than they do individually.
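The aggregate point is easy to check with toy numbers. In this Python sketch, only the "trillions of captive invertebrates" order of magnitude comes from the episode; the per-individual welfare capacities are hypothetical.

# Hypothetical capacities; population figures rounded for illustration.
human_population = 8e9            # roughly eight billion humans
human_capacity_each = 100         # assume 100x the welfare capacity

invertebrate_population = 1e12    # "trillions" of captive invertebrates
invertebrate_capacity_each = 1    # assume far lower capacity each

print(human_population * human_capacity_each)                # 8e11
print(invertebrate_population * invertebrate_capacity_each)  # 1e12
# Even with a 100x per-individual advantage, the human aggregate (8e11)
# falls short of the invertebrate aggregate (1e12).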
12:50
Now, some humans also assume that we should prioritize fellow humans because we have closer bonds with fellow humans. But that might not always be true either. If we really are affecting non-humans everywhere, whether we like it or not, then we have morally significant bonds with them too. We are impacting them, and we have a responsibility to reduce and repair the harms that we are causing them. So I think that we might not be able to sustain our assumption that we always take priority.
13:22
Now, there might be a limit to how much we can support non-humans right now, because we lack the knowledge and power and political will that we need to help them. But we can still do more than we are at present, and we can also make an effort to build knowledge, build capacity, and build political will toward helping them in the future. And if and when we have the ability to prioritize them in an effective and sustainable way, perhaps we should prioritize them at that point. Now, I know that some of these ideas might seem implausible. They might even seem like a distraction from the other very important issues that we face right now. But I think that we should take them seriously anyway, alongside those other very important issues. As the dominant species, we have a responsibility to ask how we might be affecting all stakeholders, including humans and animals and eventually, potentially, AI systems. And if that leads us to uncomfortable conclusions about our treatment of non-humans, so be it. We can accept those conclusions, and we can try to build a better world for humans and non-humans alike.
14:34
Thank you, Jeff. Okay, listeners, you can get a copy of The Moral Circle wherever you get your books. And make sure you're subscribed to The Next Big Idea Daily feed, because this week we'll be talking about what climate change is doing to your health, how soon we could be mining on Mars, and a whole lot more. I'm Michael Kovnat. I hope to see you tomorrow.