Episode Transcript
0:01
This podcast is brought to you by Knowledge at Wharton.
0:06
Welcome to Knowledge at Wharton. I'm Angie Basiouny. I'm here today with Kevin Werbach. He is professor and chair of the Department of Legal Studies and Business Ethics at Wharton. He's also the faculty director of our new Wharton Accountable AI Lab, which is dedicated to advancing responsible development of artificial intelligence. And that is what we're going to talk about today. Kevin, welcome aboard.

0:34
Thanks so much. Appreciate you having me here.

0:36
So let's just jump right into it. What is accountable AI? Why did you start this lab?
0:41
Accountable AI is about understanding the challenges that AI poses. The starting point is that AI is an incredible innovation that has tremendous potential to create value for businesses and to do a great deal of social good. But we can't realize that potential, we can't achieve the benefits of AI, without acknowledging and mitigating the risks, thinking about the potential dangers and harms and problems with AI.

So accountable AI is about not just thinking about what could happen, what the risks are, although that's part of it; not just asking, from an abstract perspective, what principles organizations should have about what they're doing with AI, again, that's part of it; not just saying generally that we should be responsible about AI or have well-governed AI, although that's part of it. It's asking, systematically: How do we put into place the kinds of practices and understandings that it takes to ensure that AI systems are developed and deployed in ways that maximize their benefits and appropriately mitigate and address or redress the problems and harms?

And accountability is a term I chose intentionally. It's about making those connections: the connection between the risks and the potential or real harms, and what actually happens to prevent them, to mitigate them, to understand them, to address them; having all those practices in place and doing it in a thoughtful, systematic, structured, rigorous way, which of course is very consistent with how we think about things at Wharton.
2:19
So let me ask you: is the lab going to develop, sort of, best practices or prescriptive information for business leaders, for tech companies, about how to use AI, how to deploy it?
2:32
One of the things that I have found in speaking with companies, in the research that I do in this area, and as we were putting together the plans for the lab, is that most of them are really struggling to get on top of these issues. They don't understand what other organizations are doing. There are a few companies that are very far advanced; especially some of the big technology companies have invested significantly in responsible AI or AI governance. But even they have questions: What should they be doing? What are other companies doing? Are they appropriately addressing all of the issues? What does the data show about what kinds of governance mechanisms are effective? And most companies are not even at that point.

So we are certainly not going to say we'll tell companies what the best practices are. AI is so diverse, and there are so many different kinds of AI. There are machine learning systems, there's generative AI. It's a different thing if we're talking about a company that is doing hardcore technical development of AI models, versus a company that may be a very large enterprise but is deploying a system that it procures from elsewhere, versus a small startup that is involved in this area. And it depends on what industry you're in, and so forth. So we are first going to try to understand what organizations are actually doing, what's successful, what's not successful, and what the gaps are, to try and synthesize some of that to help organizations understand what the possibilities are. And it's a moving target. It's going to be an ongoing process of understanding what can be done, what the problems are that are most concerning, and how they can be overcome.
4:11
That is a tall task, but I know that you're up for it. I want to tell people a little bit about your background. You have a law degree from Harvard. You came to Wharton in 2004, so going on 21 years now. But you also worked in the Clinton administration and the Obama administration, and you worked with the FCC on emerging technology. You've been at this for a long time. You have four books about technology, including blockchain. You've seen the emerging technology. You've worked on the business implications, the ethical implications. Here's AI. Is it different? How is it different from the concerns that we've dealt with in the past? Or is it the same?
4:50
Some of both. As you note, I have been working on emerging technologies my whole career. When I started in the 1990s, that was the internet, and I wrote a paper on internet policy at the Federal Communications Commission. This was early on, before I was an academic. At that point, there were something like fewer than 50 million people on the internet in the entire world, and the vast majority of them were people dialing up on their telephones to the proprietary America Online service. There was not a single person in all of China who had a private internet connection at that point. And yet we could see the issues that were coming up. We could see that this was a technology that had the potential to change the world, and we needed to understand what the issues were.

And so all throughout my career, I've tried to get engaged on major, important technology developments early enough to identify the issues, to work on helping to develop the regulatory strategies, to work with government, to identify and highlight what the problems are before it was too late. I did that with broadband technology. I did that with something called gamification, which is applying psychological techniques and other techniques from video games to motivate people in different contexts. I did it, as you mentioned, with blockchain, which was another field that I saw coming that had this diverse potential but was still poorly understood. And, frankly, it's still poorly understood today.

AI I put in a similar bucket. We are in some ways very far along with AI. AI, if you're talking about it in terms of machine learning technology, is decades old. In some ways, though, we're just at the beginning. We're just a couple of years after the kind of ChatGPT "shot heard around the world" announcement that kicked off this incredible race to exploit and understand the potential of generative AI. And we know there are all these problems. We know there are issues about privacy and bias and intellectual property and manipulation and so on and so forth. And yet we don't have good solutions.

So AI is similar to these earlier technologies in that it starts at a point where it has tremendous potential and generates a lot of excitement, but there is a broad lack of understanding about whether it will really realize this potential and what the impacts will be. But every technology is different, and with each of these waves we build on what came before. So AI leverages the fact that we have the internet, and we have these incredible networks and technical capabilities, which allow things to be deployed and scaled very fast around the world. And we see this tremendous amount of activity and investment going into this space. So it's different than it was back 30 years ago when I was looking at dial-up internet. But it's similar in that we have this period of uncertainty. And I think that is the point where it's most important to really dig in, think about the ethical issues, think about the governance issues, the regulatory issues. And so that's really the genesis of the Accountable AI Lab.
7:53
There are a number of issues. And in my experience interviewing people about AI, which I've been doing quite a bit of in the last year, I find that there are three camps, right? There are the people who fear it; the people who celebrate it and can't wait for more of it; and then there are folks who are just proceeding with caution, right? Yellow light, green light, red light. What camp do you fall into? Why? And, you know, what's your overall message about AI, especially heading up this lab?
8:29
It means that you believe AI has this incredible potential, that it's going to be deployed and going to have real impacts. And similarly, you can't celebrate it without recognizing these challenges, a whole range of challenges. Some of them are very speculative, but many of them are very real. I talk to lots of companies that say: our focus is not on regulation, not on whatever the government tells us; we know we're going to deploy systems that might have problems. And if we build and deploy something that breaks, that fails, the generative AI system hallucinates and gives false information, that could be a big problem for us with our customers in the marketplace. These are companies that are deploying, these are companies that are excited about it, but they realize they need to understand the problems.

And then the reality is, there are some aspects of this where speed is absolutely essential. Companies need to invest; things are developing so fast, there's so much potential, you don't want to get left behind. But you need to understand where there are points where care is warranted, where there is the opportunity and the need to slow down and ask and answer these questions. And even if the technology is moving really fast, there's going to be regulation. There are going to be laws passed. There are going to be court cases addressing these issues. And so you can't just ignore all of that. You have to appreciate the development of the legal process, and the development, frankly, of the kinds of deeper understandings that come out of research in lots of different fields, not just in law. What are the technical capabilities? What can we do to mitigate bias? What is the potential for explanation of generative AI systems? It's a fascinating area of advanced research. And what is the development of ethical and psychological and behavioral understandings of what's going on here? That is happening over time, not at the same speed as the technical development of AI, but all of those things are going to have a really big impact on being able to realize the full potential of the technology.
10:44
Yeah, the podcast is an interview show. I spend 30 to 40 minutes on each episode talking with a guest, and it's a range. I speak with senior government officials from multiple countries. I speak with technologists. I speak with academics. I speak with business executives who are leading the responsible AI groups or AI governance groups at some of the largest companies. And I speak with startups that are building tools to address some of these problems. It's an educational journey on how this broad area of accountable AI is developing, trying to help people understand what the state of the art is and also what the questions are that they should be thinking about.
11:29
It definitely goes deep. I appreciate it. Thanks for being here.

11:33
Absolutely. Real pleasure to do it, and thanks so much for the interest.

11:37
Kevin Werbach, everyone. Professor and chair of the Department of Legal Studies and Business Ethics here at Wharton. He's also the faculty director of our new Wharton Accountable AI Lab. If you'd like to learn more about that initiative, type Wharton Accountable AI Lab into your browser. I also invite you to check out his podcast, The Road to Accountable AI. For Knowledge at Wharton, I'm Angie Basiouny. Thanks for joining us.
12:06
For more insight from Knowledge at Wharton, please visit knowledge.wharton.upenn.edu.