Episode Transcript
0:00
This is the BBC. This
0:03
podcast is supported by advertising
0:05
outside the UK.
0:07
BBC Sounds,
0:10
music, radio, podcasts. Welcome
0:18
to Understand Tech and AI,
0:21
the podcast that takes you back to basics
0:24
to explain, explore, unpick
0:26
and demystify the technology that's becoming part
0:28
of our everyday lives. I'm
0:31
Spencer Kelly from BBC Click and
0:33
you can find all of these episodes on
0:35
BBC Sounds. The
0:42
Terminator, War Games,
0:45
Blade Runner, Terminator 2,
0:48
Westworld, Metropolis, Terminator 3, 4,
0:52
5, 6, 7, 8, 9, 10.
0:54
Science fiction has brought us so many stories
0:57
about futures where the machines take
1:00
over, where we, the oh-so-clever
1:02
humans, create intelligent computers
1:05
that can do everything that we can do,
1:07
but better. And in
1:09
most of these stories, artificial
1:12
intelligence comes to the inevitable conclusion
1:14
that the world would be
1:15
a better place without us. Well
1:18
now that AI has infused
1:21
our lives, have we already set
1:23
these wheels in motion?
1:25
Are we heading for a future where AI in
1:27
some way takes control?
1:30
And will there be any way to stop it if
1:32
it does?
1:33
Once again, Dr Michael Pound, Associate
1:36
Professor in Computer Vision at the University
1:38
of Nottingham is with me. Mike, hi,
1:40
welcome to the end of the world. Thank
1:42
you very much. So we've got all
1:45
of these famous films which evoke
1:47
this fear of an all-powerful AI
1:49
that we lose control of and which ultimately
1:52
tries to wipe us out.
1:54
Is that what we should be worried about
1:56
AI becoming? I don't
1:58
think so, no. First of all because
2:00
that happening is very implausible, particularly
2:03
in any kind of reasonable time frame. But
2:05
also because there are many other issues with
2:07
AI that we should be worried about right now that we might
2:10
ignore if we focus on these sort of hypothetical
2:12
future problems. Can you give us a kind of general
2:14
idea of the sort of threats that
2:16
AI poses to us? One is sort
2:19
of fake news and automatically generated
2:21
content that could be used to sway people
2:23
in, for example, elections. And that looks
2:25
convincing because it's written well. Exactly. So
2:28
in six months or 12 months, phishing emails
2:29
that you receive just suddenly get much more convincing
2:32
because they're now written by an AI that has read
2:34
all of the internet. And so that's
2:36
a huge problem where we now can't tell
2:39
from a piece of text whether it's true or false. And
2:41
so that fact checking needs to come from somewhere
2:44
because people might not do it necessarily.
2:46
And maybe even if they have fact checked something and it ends
2:49
up getting disproved, the damage might
2:51
have already been done. Enough people will have read it that
2:53
it's had its impact and that might be sufficient for the person
2:56
that was trying to do this. People listening are very well
2:58
informed and they're thinking, well, I wouldn't be taken in
3:00
by something. But I suppose the question is, if
3:02
you're bombarded with text and you just literally have
3:04
no idea whether that was written by a person or it wasn't
3:07
and it's a truth or a lie,
3:09
you're not always going to be able to fact check every single
3:11
thing you see, even briefly. So you
3:13
might discount truths as well as lies. That's right.
3:16
And people are predisposed to believe what they already believe and find evidence
3:18
that supports their own conclusions. And
3:21
so you end up with this kind of social media bubble,
3:23
but made even worse by the fact that AI
3:26
is being used to amplify this. Are
3:28
politicians worried about AI? Are
3:31
governments worried about AI? And what is
3:33
it that's worrying them if they are? That's
3:36
a very good question. They are worried about
3:38
things like fake news and image generation
3:41
being used in various ways and things like this that
3:43
are perhaps much more reasonable things to worry about. Recently,
3:46
many
3:46
big names in technology, including
3:49
a guy called Elon Musk, I don't know whether you've
3:51
heard of him, wrote an open
3:53
letter asking, I think, the world
3:56
to just pause development of AI for
3:58
six months while we worked out how
3:59
to deal with it, how to regulate it and stuff. What
4:02
did you make of that letter? I
4:04
like the idea that we should regulate
4:06
the use of AI because I think it can be used irresponsibly
4:09
to make decisions that affect people's lives or fake
4:11
news scams, we've been talking about these
4:13
things. On the other hand, I think that just
4:15
pausing
4:16
will mean that half the people don't pause and
4:19
just secretly carry on working anyway. Of course,
4:21
the other issue is that everyone wants to
4:23
be at the forefront of AI, so all governments are trying
4:25
to push it as much as possible to make sure we're not
4:27
missing out. And there is that kind of contradiction,
4:30
isn't there really, where we're all trying to push
4:32
this as fast and far as we can,
4:34
but at the same time, maybe even the same people
4:36
are going, whoa, I think we've gone too far there, this
4:39
is getting out of hand.
4:40
That's right, and I think there are gonna be huge economic
4:42
benefits to AI systems, both in terms
4:45
of efficiency and things that we couldn't do before that
4:47
we can now do, and no country wants to miss
4:49
out on those things if they've overregulated and
4:51
no one else has. So it is a difficult balancing
4:53
act. Do you think there is an AI
4:56
arms race going on between countries? I
4:58
think that there are lots of governments who would be
5:00
interested in using AI for lots of different reasons,
5:03
and would like their AI to be better than everyone else's.
5:06
So I think there is a bit of an arms race going on. And
5:08
we're talking about arms, and we mentioned
5:10
some of the famous films. AI
5:12
can be used in warfare, can't
5:15
it? We've certainly seen pretty
5:17
decent looking robots that can navigate the real
5:19
world, and it's not
5:21
a big leap to think you can strap a
5:23
weapon to the top of that and send it into
5:25
battle. Do you think there is this sense
5:28
that we shouldn't let machines make
5:31
decisions certainly when it comes to weaponry
5:34
and destroying things? We should always keep
5:36
a human in the loop. I
5:38
think it would be very reasonable to have a pause
5:40
on the development of fully autonomous AI-based
5:43
weapons because we don't know how well they will
5:45
work. There's huge conflicts of interest
5:47
there, and it's just a recipe for a huge
5:49
number of problems. Ultimately, AI that
5:52
is perfect and never makes a mistake does not exist
5:55
as of today. So putting it in a weapons system would
5:57
be a very bad idea. So I
5:59
think there are...
5:59
some places where we should be regulating and
6:02
I think it would be important to have those conversations.
6:05
Now, Mike, let's take a short
6:07
break because the idea
6:09
of all powerful robots has fueled
6:12
the imagination of science fiction writers for
6:14
a very long time. Here's
6:16
Dr. James Sumner with the stuff
6:19
of nightmares. The
6:22
scientifically inspired tragedy of Frankenstein
6:25
set the template for intelligent, uncontrollable
6:28
creations. The creature in
6:31
Mary Shelley's original novel of 1818 is resourceful
6:35
and self-educated and plans
6:37
an intricate revenge on his creator that strongly
6:40
suggests he is the smarter of the two.
6:42
In the 20th century, speculations
6:45
about self-teaching AI systems
6:47
naturally inspired similar visions.
6:50
The classic example is the HAL 9000 computer
6:52
in 2001: A Space Odyssey
6:56
telling the human crew its duties logically
6:59
require it to kill them. As
7:01
early as the 1940s, the sci-fi
7:03
writer Isaac Asimov had begun
7:05
to push back against the standard narrative.
7:08
Asimov started from the assumption that human
7:10
engineers would build in safeguards:
7:12
his three laws of robotics.
7:15
A robot may not harm a human being
7:18
or, through inaction, allow
7:20
a human being to come to harm. Number
7:22
two, a robot must obey
7:25
orders given it by qualified personnel
7:28
unless those orders violate
7:30
rule number one. In other words, a robot
7:32
can't be ordered to kill a human being. Rule number
7:34
three, a robot must protect its own existence
7:37
unless that violates rules one or
7:39
two. In practice, in
7:41
Asimov's fiction,
7:42
the three laws didn't quite work.
7:45
That was the point. If they worked perfectly,
7:48
there would be no story. At
7:50
the height of the Cold War, the scenario
7:53
that really stoked fear in the public imagination
7:55
was not super intelligent machines
7:58
but thermonuclear weapons
7:59
under all-too-human
8:02
control. Some thinkers even
8:04
speculated that the world might be safer with
8:07
AI in charge. They might find it necessary
8:09
to take some of our toys away, some of our hydrogen
8:12
bombs and things, but there's no reason
8:14
that they would want to go after
8:16
the same things we want, because
8:19
they won't be interested
8:20
in them.
8:22
That was Dr. James Sumner,
8:25
who has given us such a brilliant long view of
8:27
all of the topics that we've talked about in this series.
8:30
Thank you, James. Now,
8:32
Mike, there is quite a
8:34
well-known story in AI that demonstrates
8:36
how artificial intelligence might not
8:38
maliciously
8:40
cause us harm, but if we don't give it
8:42
the exact correct goal,
8:44
it might do us harm accidentally. This is
8:46
the paperclip-making
8:48
machine. Do you want to kind of summarise
8:50
that for us? So, yeah, this is a thought experiment that's
8:52
been proposed to highlight the risk
8:55
of what we would call an artificial general intelligence. So,
8:57
AI that could do everything and learn very, very quickly.
9:00
We create this AI that's going to manufacture paperclips,
9:03
and its only goal is to manufacture paperclips, and
9:06
it gets rewarded, or essentially is made to feel good,
9:08
or what have you, the more paperclips it makes.
9:11
So, it begins by just ordering a load of raw materials
9:13
and making a load of paperclips, and then it realises that
9:15
if it could take over the mine, it could get a
9:17
lot more raw
9:18
materials and make a great many more paperclips. And
9:20
in the end, it realises the only thing standing in its
9:22
way is that all these pesky
9:25
humans keep trying to eat and use land for wheat
9:27
and things like this. So, never
9:29
mind all that, we'll get rid of them, and then we can
9:31
just make all the paperclips all the time. And
9:34
it's this thought experiment that goes from a sort of
9:36
a very sensible AI that's running a factory
9:38
to an AI that's basically wiped
9:40
out the human race in favour of endless
9:42
supplies of paperclips.
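To make the thought experiment a little more concrete, here is a minimal, purely illustrative sketch of the underlying idea: an agent that greedily maximises a single reward signal (paperclips made) will happily pick actions with terrible side effects, because nothing in its objective ever mentions them. This is not from the episode; the action names and numbers are invented for illustration.

```python
# Toy illustration of a misspecified objective (the "paperclip" idea above).
# Entirely hypothetical: actions and numbers are made up.

# Each action: (name, paperclips gained, harm caused as a side effect)
ACTIONS = [
    ("buy raw materials",       10,    0),
    ("take over the mine",     500,   60),
    ("repurpose all farmland", 9000, 1000),
]

def reward(paperclips, harm):
    """A deliberately misspecified objective: counts paperclips, ignores harm."""
    return paperclips  # 'harm' is never looked at

def safer_reward(paperclips, harm, penalty=10):
    """One partial fix: fold the side effects into the objective as a penalty."""
    return paperclips - penalty * harm

def best_action(reward_fn):
    # Greedy choice: whichever action scores highest under the given objective.
    return max(ACTIONS, key=lambda a: reward_fn(a[1], a[2]))

if __name__ == "__main__":
    print("Misspecified objective picks:", best_action(reward)[0])   # farmland
    print("Penalised objective picks:", best_action(safer_reward)[0])  # raw materials
```

The paperclip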
9:44
machine is not going to be a reality,
9:47
but do you think there are real-world equivalents
9:49
that might happen on a different
9:51
level?
9:52
There is a risk that we will start putting autonomous
9:54
systems under the control of AI
9:57
under the assumption they'll act in a certain way. That
10:00
might not be as incredibly impressive as
10:02
the paperclip example, but they might be wrong.
10:04
It might make unethical decisions because
10:06
of implicit bias, or it might make
10:09
simple mistakes that cause a huge knock-on problem.
10:12
So I think that uncontrolled
10:14
and unregulated use of AI does give
10:16
a risk of it being used poorly or
10:19
being used by mistake in a way it shouldn't. I
10:21
wonder whether we will never trust AI,
10:23
because the mistakes it makes along the way
10:25
are just weird. They're not mistakes that
10:28
humans would make. I'm imagining a self-driving
10:30
car, which ultimately I think
10:32
that technology will reduce the number of accidents,
10:34
but the accidents that they still have
10:36
will be weird. And I can imagine the newspaper
10:38
saying, well, a human would have never done that. So
10:41
therefore we mustn't go any further with self-driving
10:43
cars. So maybe we will never trust AI because
10:46
it won't be able to leap that hurdle. That's
10:48
a really interesting question. I mean, self-driving cars
10:50
are a great example of this because yes, you only need
10:52
the AI to make one silly mistake and suddenly
10:54
you think that it can't be trusted. Another example
10:56
is medical imaging. Many, many studies have
10:58
shown that people are quite happy for AI
11:01
to be involved in their medical diagnosis
11:03
if it makes doctors more efficient
11:04
or it eases their burden. But very
11:06
few people are happy for AI to be the
11:08
thing that makes the ultimate decision with no doctor involved.
11:11
And I think there's a long way to go before as a culture
11:14
and a society, we're ready to accept that kind
11:16
of thing. If AI
11:18
does go badly wrong,
11:21
can we just switch it off?
11:24
Yes, we can pull the plug and actually it would save you a good deal
11:26
of electricity cost as well. Yeah,
11:28
it does use quite a lot, doesn't it? It does. Yes.
11:32
I mean, at the moment, AI is just, shall we say, large banks of numbers
11:34
sitting in data centres. And so
11:37
they don't interact with any other systems. They're usually
11:39
only deployed in the one specific place where
11:41
they're used. Over time,
11:43
we might find AI distributed more
11:45
broadly on your end devices in your house and things
11:47
like this. But so far, I've
11:49
not seen any evidence, really,
11:52
that AI is being deployed in a way where I would
11:54
say it was unconstrained and couldn't be turned
11:56
off. At the moment, just unplug
11:58
the device. We've talked about
12:00
the worst case scenarios. We've talked about
12:02
the dangers and things we have to watch out for. Do
12:05
you think AI is going
12:08
to be harmful or do you
12:10
think AI is actually going
12:12
to help us improve our world?
12:15
I think AI is going to make our lives much,
12:17
much better overall. And I think, you know, I
12:19
work in AI, I'm really excited
12:21
that I work in AI and that this is going to be
12:24
such an incredible time for everyone. Yes,
12:26
there are things that we have to discuss as a society over
12:28
the ethical use of AI
12:29
and things like this. But there are applications of AI
12:32
already in generating new proteins
12:34
and new antibiotics, better understanding
12:37
medical images so we can help radiographers
12:39
and doctors work more quickly and more efficiently,
12:42
analysing plants so that we can grow more robust
12:44
plants that work, you know, in the face of climate
12:46
change and give higher yields even in the case
12:48
of drought and things like this. There's AI being
12:51
used all over the globe to drive
12:53
science forward in
12:54
lots of other areas as well. And so I
12:57
think overall, the outlook is really, really
12:59
bright.
13:00
That feels like a good note to end on. Mike,
13:03
thank you so much for your time over this
13:05
series. Thank you very much indeed. As
13:07
I said all those episodes ago,
13:10
I am a geek. I love technology
13:13
and I love the fact that we are continuing to
13:16
innovate. But I'm also very
13:18
wise to the fact that tech can be used for good
13:21
and for bad things. It can be used
13:23
well and it can be used carelessly.
13:27
Even artificial intelligence is just a tool.
13:30
It will
13:30
be what we make of it and
13:33
what we allow it to become. I
13:36
hope that this series has helped you to understand
13:39
what's going on beneath the surface. And I hope it may help
13:42
you to make more informed decisions
13:44
about how you let tech and AI
13:47
into your life. If you missed any
13:49
of the series, don't forget all 10 episodes
13:51
are available on BBC Sounds.
13:54
Thanks for listening.