Episode Transcript
0:02
This is a Technikon podcast.
0:08
Humankind has been talking about ethics
0:10
as long as we have been talking, but
0:13
ethics and technology now, that's
0:15
a relatively new conversation. After
0:18
all, when we think of ethics, technology
0:20
is probably not the first thing that comes
0:22
to mind. I'm Peter Balint
0:24
from Technikon, and I'm happy to bring
0:26
you this special podcast series entitled
0:29
Ethics and Technology - A
0:31
Prerequisite for European Research.
0:34
We can't tackle the entire and often
0:37
fluid topic of ethics, but
0:39
we can show you that things are being done to avoid
0:41
unintended ethical consequences
0:43
and inequities. We can
0:45
explain some of the more common ethical
0:47
issues that become apparent in technology
0:50
projects. And we can tell you
0:52
how these issues are abated.
0:54
And that is exactly what we will do in this
0:56
series. We will look at ethics
0:59
in the context of technology from
1:01
many disciplines, including cyber
1:03
security, infrastructure
1:05
and smart mobility, artificial
1:07
intelligence in personalized health care,
1:09
and forensic software and hardware
1:12
engineering, to name a few. We
1:14
will talk about attitudes towards ethics,
1:16
and we will examine how ELSA is increasingly
1:18
becoming part of the framework in EU-funded
1:21
projects. And in case
1:23
you don't know, ELSA is
1:25
ethical, legal and societal
1:27
aspects. And if you're running
1:29
a project without regard to ELSA,
1:31
your propensity for complications will
1:33
grow exponentially. To
1:36
kick things off, we delve into the idea
1:38
of ethics in the world of personalized
1:40
medicine, more specifically
1:43
using artificial intelligence to harmonize
1:45
tons of data in order to guide doctors
1:47
towards prescribing personalized treatment
1:50
plans for children with cancer. These
1:52
data consist of things like medical
1:54
publications or large molecular data
1:57
sets. As you
1:59
can imagine, balancing the ethical issues
2:01
with technological advancements is quite
2:03
a task. Today, we are lucky
2:05
enough to speak with Nikola Biller-Andorno,
2:08
who directs the Institute of Biomedical
2:10
Ethics and History of Medicine at
2:12
the University of Zurich in Switzerland.
2:15
She's an ethics advisor in the EU-funded
2:18
iPC project, which stands for
2:20
Individualized Paediatric Cure.
2:23
Welcome, Nikola. Thanks so much for coming
2:25
on.
2:26
You're welcome.
2:27
iPC is a project that relies
2:29
on artificial intelligence to inform
2:32
medical decisions. What
2:35
potential conflicts could arise when
2:37
introducing ethics into a project like
2:39
this and how are these conflicts dealt with?
2:41
In my experience, scientists are quite
2:44
receptive to ethical thinking and
2:46
very willing to comply with ethical standards.
2:48
They just at times tend to underestimate
2:51
the complexity of ethical assessments as compared
2:53
to the science they are doing. The
2:56
High-Level Expert Group on Artificial Intelligence
2:58
of the European Commission has issued guidance
3:01
that outlines key requirements for trustworthy
3:03
A.I. These requirements include
3:05
human agency and oversight,
3:07
technical robustness and safety, privacy
3:10
and adequate data governance, transparency,
3:13
diversity, non-discrimination and fairness,
3:15
societal and environmental well-being
3:18
and accountability. So that's a lot
3:20
of stuff to address. And projects
3:22
can fall short. For instance, time
3:24
pressure might lead to the temptation to make
3:26
compromises regarding safety tests or
3:29
commercial interests may tempt developers to deprioritise
3:32
privacy concerns. It's
3:34
therefore helpful if researchers note that
3:37
the ethical standards are part
3:39
of what they are expected to adhere to
3:41
and that they are morally accountable for their actions.
3:44
And in highly innovative fields such as
3:46
A.I., scientists even have to go beyond
3:48
compliance and in fact help interpret
3:51
and operationalise ethical principles
3:53
in light of the work they are doing. And
3:56
for me as an ethicist, this is where the fun part
3:58
starts.
3:59
And if we look at perhaps in the
4:01
context of iPC or...
4:04
actually any EU-funded project
4:06
for that matter, what gains can
4:08
be realized by adhering to these ethical
4:10
principles other than achievements
4:13
on moral grounds?
4:15
Well, I may be biased, but let me say
4:17
the gains are tremendous. Having a clear
4:19
process for involving ethics early on
4:21
can pave the way towards public trust and
4:23
acceptance, which is what you need if
4:25
you want to sell a product based on your research
4:27
later on. And even more so, involving
4:30
ethics can lead to tools that are indeed trustworthy,
4:32
which is, I guess, what we all want on
4:35
the user side of things. This,
4:37
however, requires a close collaboration between
4:39
ethicists and scientists, which
4:41
presumes a willingness to dedicate resources
4:44
to ethical inquiry that goes beyond
4:46
a token contribution. I
4:48
think such a serious investment in ethics
4:50
is more than worthwhile.
4:52
Yes, and it seems this willingness
4:54
to dedicate resources is overtly
4:57
shared by the EU as they start to
4:59
build ethics awareness and
5:01
reporting into technology projects.
5:04
Some say the technology is moving
5:06
exponentially faster than ethics.
5:08
This means that ethics can
5:10
never catch up. How should we
5:13
deal with this in the future?
5:14
Yeah, the issue of ethics lagging
5:16
behind scientific innovation has been discussed
5:19
in bioethics for decades, at least
5:21
since the Human Genome Project, when
5:23
people were impressed by the speed with which
5:25
the genome was eventually deciphered. We
5:28
have since learned that ethics can contribute
5:30
both from outside and from
5:32
within such big projects. Such
5:35
embedded ethics components are certainly
5:38
helpful to ensure an almost simultaneous
5:40
transfer from scientific discovery
5:43
to ethical deliberation. And
5:45
discussing ethical issues together when
5:47
they emerge, or even trying to anticipate
5:50
them, is intellectually appealing
5:52
and provides scientists with an opportunity
5:54
to grapple with their social responsibility.
5:57
Once you've understood and accepted the ethical
5:59
dimension of the work you're doing, it's much
6:01
easier to communicate with the public and
6:03
venture into debates about your own work.
6:06
So it sounds like ethics definitely has a
6:08
place in research and
6:10
technology projects, but
6:13
generally researchers and engineers tend to
6:15
believe what they see. So
6:17
how do we position ethical
6:19
principles among formulas
6:21
and data and complex functions?
6:24
Well, scientists typically have no
6:26
problem at all understanding and using ethical principles.
6:29
Oftentimes, they are the ones who see most clearly
6:31
where questions might lie ahead or what consequences
6:34
the practical implementation of a tool may have.
6:36
It's important to encourage such open
6:38
discussions that are uncensored by roles,
6:41
hierarchies, or conflicts of interest. Excellent
6:44
scientists are reflective about their own work,
6:46
can be an inspiration to younger colleagues,
6:49
and can help them see that excellence should
6:51
not be limited to technical skills, but
6:53
also extends to ethical deliberation and
6:55
foresight.
6:57
OK, let's go back to this concept of
6:59
artificial intelligence, which is something that
7:01
is being explored in
7:03
your project, iPC. Stephen
7:06
Hawking once said that artificial
7:08
intelligence would be either
7:10
the best thing humans ever created
7:13
or the last. So what
7:16
do you make of that?
7:17
I think that's what's so fascinating about
7:19
ethics. We can use technologies in many
7:21
different ways and it's up to us to figure out in
7:23
what direction to take them, or where
7:26
we want to set limits
7:28
for ourselves. One
7:31
example of this malleability is
7:33
the so-called death algorithm.
7:36
AI is getting quite good at predicting
7:38
a person's remaining lifetime. This
7:40
information can obviously be used
7:42
against people, but it could also be helpful,
7:45
for instance, by making sure they receive palliative
7:47
care when the time has come. If
7:50
we opt to generate this information,
7:53
how we use it, who gets access,
7:55
how it is communicated, all of that
7:57
is up to us humans. So
8:00
I think Immanuel Kant
8:02
was spot on when he said
8:04
in his Critique of Practical Reason:
8:07
"Two things fill the mind with ever new
8:09
and increasing admiration and awe: the
8:12
starry heavens above me and the moral
8:14
law within me." Ethics is
8:16
hot stuff. And I think it will remain so as
8:18
long as humans continue to develop and
8:20
innovate.
8:21
And we will be forever innovating,
8:24
which means ethics will always have a place
8:26
in technology. Thank you, Nikola, for
8:28
coming on today and sharing your knowledge with
8:30
us.
8:30
You're very welcome. Thanks.
8:32
Next time around, we look at ethics in
8:35
the realm of infrastructure and
8:37
technology. Efforts
8:40
are currently underway to utilize your phone's
8:42
built-in security features to allow
8:44
you to access existing data infrastructures
8:47
to safely enable electronic voting
8:49
and smart mobility. Naturally,
8:52
a project of this magnitude requires a vigilant
8:54
adherence to ethics standards. But
8:57
we dig deeper to see why this project
8:59
goes beyond just data protection. See
9:02
you next time.
9:07
The iPC project has received funding
9:09
from the European Union's Horizon 2020
9:11
Research and Innovation Programme under
9:14
grant agreement number 826121