Episode Transcript
0:01
Jonah: Too Long, Didn't Read. Brought to you by the Alan Turing Institute.
0:05
The national institute for data science and AI.
0:10
Hello, welcome to a Too Long, Didn't Read special.
0:13
I'm Jonah, a content producer here at the Alan Turing Institute.
0:17
Smera: And I'm Smera, a Research Assistant in Data Justice
0:19
and Global Ethical Futures. Jonah: Ahead of our second season of Too Long, Didn't Read, we wanted to ease you
0:25
in with this special episode all about something close to our hearts, or at
0:28
least something that has been consuming our work for the last few months.
0:32
AI UK, the national showcase of data science and AI brought to
0:36
you by the Alan Turing Institute. On this episode, we're going to explore how drag shows can use
0:42
AI to explore society's biases.
0:44
How the UK could be the best place to further AI in healthcare and how children
0:49
are helping shape the AI of the future. Smera: AIUK is our annual two-day event and this year it took place in
0:56
March at the QEII Centre in London.
0:58
It was packed to the brim with interactive demonstrations, workshops, and talks from some of the
1:04
leading minds in data science and AI.
1:06
You can imagine the setting, Jonah: you're in Westminster, Houses of
1:10
Parliament across the street, British think tanks lining the streets of
1:13
Whitehall, the crisp spring weather, and most importantly, data science.
1:18
Jonah: You make it sound very romantic. But, uh, we were kind of running around like headless chickens trying
1:22
to get interviews and things like that. So AIUK is open to the public and the entire event was streamed online,
1:29
free for all, but the audience does tend to be predominantly researchers,
1:33
business and government people. So since there was so much good and relevant stuff, we decided
1:38
to dedicate an entire episode to what we learned and who we met.
1:46
Smera: So to kick us off, we are joined by the one and only Lily
1:49
Hughes, the producer behind AIUK.
1:53
Welcome, Lily. I imagine AIUK was a lot of work.
1:57
Have you recovered? Lilian: Hi, Smera. Thanks so much for having me.
2:00
I don't think anyone ever really recovers from AIUK, but I am doing much better now.
2:04
I've rested, the sun's shining, back to my normal work hours.
2:08
It's all good. Jonah: Well, congrats on your work, Lily.
2:11
Um, what is AIUK for our listeners and why is it relevant?
2:15
Lilian: What AIUK is, it's really an opportunity to bring all these
2:18
different people from across the AI ecosystem in the UK together.
2:22
So Turing works with academics, with researchers, with universities across the UK.
2:28
We work with industries, with corporate partners.
2:30
We work with government, policy makers, civil servants, and we
2:33
work with the third sector as well. And very rarely do events bring all of those different people together to talk
2:40
to each other and learn from each other. And that's what AIUK does.
2:44
Great. Yeah. Meeting of minds. A meeting of minds.
2:47
That's really cool. Thank you. Smera: Well, amongst the minds that were met, how is AIUK actually put together?
2:52
Who decides, you know, which minds get to showcase, to
2:56
talk, discuss, and demonstrate some of the work they're doing?
2:59
Lilian: AIUK is a work of so many people.
3:03
So there's a couple of different groups that I work with the most,
3:05
I would say the most important of which is the program advisory group.
3:09
So this is a group that volunteers their time from Turing.
3:12
It's both business leaders at Turing and researchers at Turing, uh,
3:15
from all different levels across, across our, across the Institute.
3:19
They come together. They think about what we should be platforming at AIUK.
3:23
And then we go from there. What's possible, what isn't possible, how do we want to shape it, working together
3:29
to build those ideas, develop them, and eventually showcase them at AIUK.
3:33
Smera: And a showcase it was. Jonah: Yes, it was, uh, a lot of people involved and
3:37
we were all some of them. Smera: So you've seen at least two AIUKs through.
3:42
So what's different about this year? What were your favorite bits?
3:45
Well, what are some main highlights that you came across and what
3:48
was different about AIUK 2024?
3:51
Lilian: I don't know if I can ever choose sort of favorite sessions.
3:54
They're all brilliant in their own way, but there were some really
3:57
fun things that we did this year. I worked, as I said, with the program advisory group to come up with
4:01
some content that's a little bit different from what we'd done before.
4:04
One of the things that I really enjoyed doing this year
4:07
was the opening provocation. This was a project spearheaded by Drew Hemment, uh, called The New Real.
4:13
So we worked with Jake Elwes and collaborators to present a kind
4:18
of positive utopia of AI, a bit different from the, the fear you
4:25
sometimes hear about existential risk.
4:27
We were really thinking about how humans and AI and society, all of
4:32
this is going to come together. And there is a, there's an opportunity here for positive change.
4:37
And I wanted to encourage everyone at the event to be curious about
4:40
AI, to think about it in a way they might not have approached it before.
4:43
So we did this as the opening provocation to really start off on
4:47
that foot, celebrating AI, celebrating artists and celebrating being curious.
4:51
Smera: That's brilliant. I think that was a phenomenal show.
4:54
We did manage to catch up with Me the Drag Queen and the project
4:58
creator, Jake Elwes, backstage at AIUK.
5:02
And we asked them why drag is a good vehicle to explore AI.
5:06
Me the Drag Queen: We're throwing rhinestones and wigs and big makeup
5:09
on scary tech so that people can understand that there are amazing ways
5:12
to use it as well as insidious ones. Jake: Yeah, and drag is a great way of doing that, because I guess AI systems
5:16
have a lot of bias towards normativity. So for us, if we're injecting drag kings and drag queens and drag things, gender
5:24
non-conformity into those systems, it's a wonderful way of exploring, kind
5:29
of, biasing the AI towards otherness rather than normativity, and breaking
5:33
it down, and seeing when it glitches, and finding poetry in when it fails.
5:37
Me the Drag Queen: I think any technology, anything that impacts society in
5:42
such a way has to represent and reflect all of society.
5:46
You can't just have it modeled on the majority because the rest of us do exist.
5:50
We're here. There may not be many of us, but we're here and we're kind of more
5:54
fabulous than you, so get us involved. I guess for me, it ain't a party if the queers aren't there, darling.
5:59
Smera: So Me the Drag Queen and Jake, they also told us about how
6:03
AI has influenced their own work. I think this is important because we see now why representation is
6:09
important, especially in places where queerness is often on the margins,
6:13
if it is even included at all. Jake: I've been working, yeah I guess I've been working with like AI machine learning
6:18
for like coming up to 10 years now. Um, so right in the early days, when AI was like in its infancy, the
6:25
earliest generative adversarial networks after Ian Goodfellow's paper.
6:29
I was in the basement of my art school like programming these systems and
6:32
hacking with them and kind of finding poetry in when they failed and broke
6:37
down, and back then it could, like, only generate these tiny little images.
6:41
And I guess my thinking around AI has changed a lot, like I used to very much be
6:45
interested in, in these kind of questions of agency, uh, and consciousness and
6:49
how, how much kind of agency can I give the computer as an art making machine.
6:55
But then, you know, as I kind of carried on researching in this field, I realized
6:59
that questions about who's building these systems and who they're building
7:02
them for came more to the forefront.
7:05
Looking at bias, looking at the far more pragmatic: what's in the
7:07
data set, whose data set is it? Is it my data set?
7:10
I think early on it was about appropriating large data sets and kind
7:13
of making them fail and pointing out sometimes political things in that sense.
7:17
But then it became far less about the sort of metaphysical questions around
7:21
AI and more about the kind of pragmatic: how is this affecting people right now?
7:25
And how can we, yeah, offer alternate visions of kind of AI futures?
7:32
Jonah: It's interesting, isn't it? This, it sort of shows how art is one of the first methods to take new
7:36
technologies apart and like question the ethical side of how they're used.
7:40
Maribeth Rauh, an artist and research engineer at Google DeepMind, also
7:44
fresh off the stage, told us about how AI relies so much on categorization
7:49
and how queerness can encourage us to think beyond categories.
7:52
Maribeth: Queer representation is important in AI because it's, um,
7:56
one of those spaces that breaks down the binaries that we have and the,
8:00
the fixed categories that we have. I think queerness pushes the boundaries of categorization
8:05
into spaces that are more fluid.
8:07
And in AI, we often are putting things into categories and boxes.
8:12
Classification was a, is still a very like common application of
8:15
AI and queerness challenges that. And it's important to be
8:19
critically interrogating our technical systems. And so queerness brings that perspective and that's really important for building
8:24
like societally responsible and just also better performing AI systems.
8:30
Jonah: Such a cool way to start AI UK, Lily.
8:33
Um, I imagine it challenged a lot of people's preconceptions
8:37
that opening provocation. Um, we'll link
8:41
Jake's work in the show notes, so be sure to check that out.
8:45
So that was something new for this year's AIUK.
8:47
Something else that was new for 2024 that wasn't there in 2023 were workshops.
8:52
Am I right with that? Lilian: We actually had some workshops in 2023, but they were only an hour long.
8:57
They were very limited in scope. This year we really expanded it.
9:01
We had seven two hour workshops throughout the two days, and
9:05
they were really designed to be these very interactive, very intensive sessions, uh, where we could, they
9:14
were designed to solve problems, if that makes sense. I didn't want the workshop to be another panel.
9:18
We had plenty of space for panels on the stages.
9:21
So what is a workshop? What are you doing there? Are you building a community?
9:24
Are you finding new projects? Are you solving a problem?
9:28
Are you getting into data, into research?
9:31
All these things, I was like, that's all on the table.
9:33
And there were some hugely creative workshops at AIUK.
9:36
I'm really, really proud of this part of the program, actually.
9:39
Jonah: Yeah. I thought they were really cool. Like I was running around with my camera and I spent quite a lot of time in,
9:43
um, the Lego play workshop where, uh, they were sort of building things to
9:48
discuss cyber defense and it was so good.
9:50
There was so much interaction. Lilian: And I think one of the things about events like this, and
9:54
actually this is, it calls back to what Jake was doing as well.
9:57
That, you know, these events, these showcases, they're opportunities for us to experiment
10:02
and play outside of the everyday.
10:05
We can try things we haven't done before. We can meet people we haven't met before.
10:08
And I think if you're all around a table, all playing with Lego or all brainstorming
10:12
ideas with, you know, with strangers, with new friends, if you're watching
10:17
a deepfake drag queen, you know, it's going to challenge what you do,
10:21
what you think and what you encounter in your every day.
10:24
And it's so important to do that work. It's so important to experiment and to play, especially in a field like AI.
10:30
So I think the workshops were really trying to encourage that.
10:33
Jonah: During another workshop, I spoke to Rebecca Cosgriff, who is the deputy
10:37
director for the data for research and development program at NHS England.
10:42
Um, Rebecca was a lead on a workshop about unlocking healthcare data for safe, transparent, and fair AI.
10:48
And she told me what she was learning from having this interactive sort of session.
10:51
Rebecca: Yeah, I think one of the key learnings for me today was that there
10:54
was actually really significant consensus across England, Scotland, and Wales, which
10:59
were represented on the panel, but also on some of the table discussions across
11:03
industry, academia, and the NHS on some of the key enablers of AI, including
11:08
things like the proactive curation of data before it's provided out to researchers.
11:13
Um, it's really important that organizations like the Alan Turing
11:16
Institute have rapid access to granular, multimodal healthcare
11:20
data generated by the NHS and other data sources to answer some of our
11:24
really key imperative questions on how we improve care for patients,
11:29
support the NHS and drive innovation.
11:31
Smera: Speaking about healthcare data, there was also another panel
11:35
on improving disease detection.
11:37
Chair of the Board of the Turing and Director of the Natural History
11:41
Museum, Doug Gurr was on this panel and he told us why the UK is not only the
11:45
best place, but probably the only place where such advances can happen.
11:49
Doug: So healthcare data is probably the most sensitive area.
11:52
You've got to be able to bring patients with you.
11:54
You've got to bring trust. And that's why you need a regulatory environment that can reassure
11:59
that we're going to do this in an ethically sound, sensitive, safe
12:02
way, but at the same time a way that doesn't constrain that innovation.
12:06
And the UK is, I would say probably, but actually I'm going to go for
12:10
certainly, the best place in the world to do this, because only really in the
12:14
UK do you have those amazing data sets. And so with the opportunity to bring together the data science talent, the
12:20
clinical talent, get the government involved and actually bring everybody
12:23
around the table so that, for the first time in the world, we can truly reap the
12:28
benefits of what AI can do for healthcare.
12:30
Smera: The element of trust raised by Doug is crucial with the
12:33
technology we are confronting. Almost daily we see reports on AI for good, especially in
12:38
healthcare, but we also see how AI has the capacity to disrupt labor
12:43
markets, influence the economy, giving a lot of people reasons to be skeptical.
12:47
Hence, I think that trust factor should be at the center of developing an
12:51
inclusive and equitable path forward, particularly in using AI for healthcare.
12:56
Jonah: Yeah, definitely. So when I was exploring the demonstration area, I caught up with a team who are
13:01
developing digital twins of hearts.
13:03
So you can have a digital copy of your own heart with
13:06
all the accurate medical data, and then they can run simulations of, say,
13:11
different drugs or stresses on your heart and see how it would respond.
13:15
That's pretty cool, isn't it? Smera: I think my own heart may be too broken for a digital twin.
13:20
Jonah: Ah, Smera! Smera: Anyway, this is phenomenal work.
13:23
It was really exciting to see them 3D printing these hearts right
13:27
in front of us on the expo floor.
13:30
But beyond just the excitement of seeing a 3D printed heart, the future
13:33
of this work could revolutionize health care for a very particular group.
13:38
Can you guess which one? Jonah: Is it a vulnerable group?
13:40
Smera: It is a vulnerable group. Do you want to get specific?
13:43
Jonah: Is it, um, children?
13:47
Smera: Children. So we can use digital twins to advance children's health care without
13:52
actually involving real human children.
13:55
Children have very different bodies that are constantly changing, and
13:58
we cannot merely copy-paste adult healthcare responses onto a child.
14:03
Moreover, I think the ethics of, you know, trialing and testing these
14:06
healthcare responses on a vulnerable group like children is at the
14:10
forefront of the concerns that are faced by regulators and governments.
14:14
Of course, at the heart of all of this is data and the Turing has been working
14:19
very closely with children to better understand how to safely use that data.
14:24
From the 'What can children teach us about AI?' session,
14:27
we grabbed Turing fellow Mhairi Aitken and Steph Wright from the Scottish AI Alliance
14:33
to talk about their project, which puts children's voices at center stage.
14:37
Steph: Well, in Scotland's AI strategy, we had a commitment to adopt, uh, the
14:42
UN's policy guidance on AI and children.
14:45
And we wanted to explore how we can engage with children to get their
14:50
input into our shared AI futures.
14:53
Uh, it just so happened at that time, Mhairi and her team at the Alan Turing
14:56
Institute were also interested in that. And, um, I thought, what better organizations to bring together
15:02
than the academic excellence of the ATI with the children's rights-based
15:08
approach of the Children's Parliament.
15:10
Mhairi: When we think about child centered approaches to AI, it's important
15:12
that we're not just thinking about safeguarding children from the risks.
15:15
Of course, that's one really important dimension of it, but often there can be
15:20
kind of overly paternalistic approaches, which are all about identifying from an
15:24
adult's perspective what the risks are and safeguarding or protecting children
15:27
from those risks or perceived risks of AI.
15:30
But if we don't actually speak to children and understand from children's
15:33
perspectives what their experiences are, what their interests are, what
15:36
their concerns are, we might miss some really important aspects of this.
15:40
Um, and it's also important that this isn't just about identifying risks
15:43
or safeguarding children from risks. It's also about finding ways that we can maximize the value and
15:48
maximize the benefits of technology and innovation for children.
15:51
When we speak to children about AI, the big themes that come out, the kind of
15:54
central areas that, that they really want to focus on discussing, um, uh, are quite consistently around themes
16:00
of fairness, um, and particularly how these technologies might work
16:04
differently for different children. Uh, and I think the, uh, the children that we've spoken to certainly seem
16:09
to really kind of intuitively, uh, gravitate towards the concept of fairness.
16:13
It wasn't something that we introduced. It wasn't something that we planned to have as a, as a central theme of the
16:17
engagement, but it was really what, what the children wanted to talk about.
16:20
Um, and they grasped very quickly that, that AI might have different outcomes
16:24
for different groups of children. Um, and that they were really wanting to understand more about how we could
16:29
develop these systems to make them fairer, to make sure that they, uh,
16:32
had, you know, equitable benefits for, for different groups of children.
16:35
That's part of the value of having children in these conversations,
16:38
because, well, for an adult, fairness is maybe a kind of a
16:41
concept, you know, an abstract thing. And something that we know is important, but adults often make
16:45
kind of, uh, justifications.
16:48
Oh, well, that might not be fair, but it's because of this, this, this, and this.
16:51
Whereas children will say, that's not fair. That's not okay.
16:53
You know, we need to do something about it. We need to make that fair. And actually that's sort of the value of bringing that children's
16:58
perspective into these discussions. Steph: Yeah, I love it when the plan comes together.
17:01
So obviously the collaboration kicked off. We're now approaching the end of phase two.
17:06
The first year was, um, uh, led by the Turing Institute, uh, to
17:11
explore children's rights and AI.
17:13
The second year, which we've just come to the end of, is about exploring how to dig
17:18
deeper into operationalizing some of the, um, findings in phase one of the
17:25
project, especially around, you know, safety, uh, AI in education, um,
17:32
and bias, which were all these concerns that children expressed, they were
17:35
particularly interested in exploring. So phase two was about, you know, partnering up with actual organizations
17:41
with actual projects or policies they were developing that the
17:45
children can, you know, meaningfully contribute to.
17:49
Jonah: That's really interesting. Maybe this will begin to reverse the trend of future generations having
17:53
to fix past generations' mistakes.
17:56
Talking of futures, I was at AIUK last year and heard a lot of
18:00
predictions for the year ahead. And a lot of them were about how generative AI like ChatGPT was going to
18:05
go stratospheric and how we will start to encounter more misuse of those tools.
18:13
This year I caught up with Michael Wooldridge, who chaired a session all about large language models, and asked for his thoughts. Mike: So I think there's two things I think are really interesting to
18:18
keep an eye on in terms of risks. So the first is about misinformation and disinformation, particularly in elections.
18:25
And as I'm talking now, within the next year, we're going to
18:27
have more than a billion people worldwide going into elections.
18:30
We've got elections in the UK, we've got elections in the US, elections in
18:34
India, the world's biggest democracies.
18:36
And the fear that was just beginning to be voiced a year ago was that
18:40
AI was going to be used to generate disinformation on an industrial scale.
18:45
Now the worrying thing is we are beginning to see the signs of that happening.
18:50
We're beginning to see fake news stories.
18:52
And actually, interestingly, we're beginning to see news stories where people
18:56
are claiming that it's AI generated, even though it's actually original, which
19:00
is not something that we anticipated.
19:03
So I'm keeping, I'm looking at that nervously. Um, the government's announced
19:07
initiatives to try to deal with that. Let's hope that they get it right.
19:10
And I think the Turing will be, will be front and center in those discussions
19:14
about, about how to get that right.
19:16
The other thing is the age old question of AI and employment.
19:21
And again, a year ago, we were looking and contemplating whether large
19:27
language models were going to lead to unemployment on a, on a large scale.
19:31
For example, we're just beginning to see the first signs in some sectors of the
19:36
impact of large language models. So we're beginning to get, for the first time, uh, some understanding
19:43
of how this technology is going to affect the workplace.
19:46
And I think this is going to be a crucial year.
19:49
At the end of this year, I think there's a real chance that we will have seen
19:54
some really significant signs of how AI in general, but large language models in
19:59
particular are affecting the workplace.
20:01
So I think that is something that we should really keep
20:03
an eye on over the next year. Jonah: Lily, are there any sessions that we haven't touched
20:09
on that you want to cover? I'm probably drawn to some of the, the big headlines, um, some of the more
20:15
accessible sessions, um, uh, but I'm very aware that there's loads of stuff that's
20:19
been talked about that will be interesting to lots of different audiences.
20:23
Lilian: Yeah, absolutely. So there were about 50 sessions, all in all, at AIUK.
20:27
So there are so many I wish I could talk about, and they really do range in topic.
20:32
We had sessions on AGI and LLMs.
20:34
There were sessions on productivity.
20:36
It might not seem sexy, but that's the stuff that's going to impact my life.
20:41
That's going to make it easier for me to fill out forms, for me to
20:45
interact with the state, that's going to make my life better as a citizen.
20:49
And I think that stuff's really important and really interesting.
20:52
It's really important to have these conversations that we might not be
20:55
having in other spaces, or we are having in siloed conversations where
20:59
just experts are talking to each other, but at AIUK, like I said, everyone's
21:04
there, policy makers, experts from across different fields, researchers,
21:08
students, professors, industry leaders.
21:11
So you're getting all sorts of people together to talk about things
21:14
that they might not have previously talked about or heard about.
21:16
Defense and security, for example. I think it's really important to platform those issues at AIUK and
21:22
have a discussion outside of the defense and security ecosystem.
21:26
Obviously, the people who are interested in it, they're welcome to come, they learn, and they can contribute to that conversation.
21:31
But people who might not be familiar with that field, it's great for them
21:35
to see that content as well, I think. Smera: No, I fully agree, Lily.
21:37
I think with the recent developments, we can see how AI and tech is being
21:42
positioned as a very important tool in a nation's defense arsenal.
21:47
Throughout history, we've seen how advances in tech have been driven
21:50
by investments in research and development by departments of defense.
21:54
For instance, you know, I remember in our episode on Chip Wars, we saw how defence
21:59
investment was critical to improving the microchip architecture and capacity.
22:05
Jonah: I do. Yes. That was a good one. Chip Wars.
22:07
You can check it out on Series One. So there was a session at AIUK called The Secret Session, which was a chat
22:13
between Tim Watson from the Alan Turing Institute and Stephen Mears,
22:16
who works for the Defence Science and Technology Laboratory, DSTL, the
22:20
research arm of the Ministry of Defence. We spoke to him about the impact AI is having on defense and security.
22:25
Steve: So I think, uh, defense and AI, obviously a topic that a lot of people
22:30
feel concerned about, but for me as a scientist working in the defense area,
22:36
I really see this as a transformational technology that can really support
22:41
our armed forces in the difficult role that we undertake, that they undertake.
22:46
So, um, everything from things like command and control,
22:50
where they have to make difficult decisions, how can we use AI to help get them the best
22:55
information, to help them make the best possible decision in the
22:59
different environments that they're in? Intelligence, surveillance and reconnaissance, how can we use AI to
23:05
help make sense of massive amounts of data and help them understand
23:09
what's going on around them? And then perhaps closer to home, how might we be able to use AI to counter
23:15
disinformation and misinformation and help ensure that people can really
23:21
understand the authenticity and provenance of the information
23:25
they're seeing on the internet? Jonah: Lily, thank you so much for joining us for this, uh,
23:32
Too Long Didn't Read special. Uh, and congratulations to you and all your colleagues.
23:37
And that also obviously includes us.
23:39
Congratulations to everybody involved in AIUK.
23:42
Lilian: Thank you so much, Jonah. It's been great to be here and I'm so glad we can rewatch it on YouTube as well.
23:47
Jonah: Yes, all the sessions, cut-downs, lots of exciting content will be on our
23:50
YouTube, but we'll probably talk about that when it actually appears there.
23:54
Speaker 9: Thanks, Smera. Nice to see you again after all this time.
23:57
And hopefully I'll see you in a month for a brand new season.
24:02
Jonah: Yep, and uh, thank you of course to Jesse, who's in the
24:05
background furiously scribbling away notes, that is, not just drawing a
24:09
picture, um, and you for listening.
24:12
See you soon for series two.
24:15
Toodaloo!