Misinformation in democracy, a robot who serves apples and generative porn

Released Friday, 31st May 2024

Episode Transcript

0:02

Jonah Maddox: Too Long, Didn't Read. Brought to you by the Alan Turing Institute, the National

0:06

Institute for Data Science and AI.

0:10

Welcome to Too Long, Didn't Read. AI news, developments and research delivered directly to your ear

0:16

holes from the experts and me.

0:19

I'm Jonah, a content producer here at the Turing, and

0:22

Smera Jayadeva: I'm Smera, a researcher in data justice and global ethical futures.

0:26

Jonah Maddox: Smera, in season one, you were the resident expert, brilliantly

0:30

answering questions on everything AI, from the ethics of AI labor to the history of the

0:34

chip wars, and even briefly stopping to advise Santa on a potential AI workflow.

0:40

But you have been given, I'm gonna say, a promotion.

0:44

You're my co-presenter now! Smera Jayadeva: Yeah, yes, that's right.

0:49

We have a slightly new format this season.

0:51

This time you and I will be discussing the AI news, but we'll also be seeking various

0:56

expert voices from a wide range of AI and data science disciplines, ultimately to

1:02

create an even more comprehensive podcast.

1:04

Jonah Maddox: Exciting stuff. On this episode of TLDR, we will be talking about misinformation, not a rising

1:11

star in an academic beauty pageant, but a very serious threat to democracy.

1:15

Smera Jayadeva: We'll also see if the age of dynamic, real time robotics is actually

1:20

upon us, and what it means for your job.

1:22

Jonah Maddox: And we look beyond the headlines when talking

1:24

about generative AI and sex.

1:30

That's funky music on the guitar. 2024 is the year of elections.

1:39

Over 80 countries and half the global population will be voting this year.

1:43

Since there's no collective noun for a group of elections, let's go with a group.

1:47

Electiontastic. Smera Jayadeva: Yeah, taking on elections is a huge endeavor.

1:53

Take India, it's seen as the world's largest democracy.

1:56

Elections began in late April and will go on till June.

2:00

And we're talking a massive population of nearly a billion people.

2:04

Jonah Maddox: Yeah, and as if sorting through the candidates campaign promises

2:07

wasn't tricky enough before, we now have good old AI to consider and how

2:12

it can help people spread falsehoods.

2:15

Smera Jayadeva: Yeah, and we discussed some of this in the first episode of our first season.

2:18

Remember Jonah, way back when? But essentially we face three strands of information manipulation.

2:25

First we have disinformation, which is the big bad boy, wherein

2:29

information is falsely construed with the intent of manipulating audiences.

2:35

Then we have misinformation, which is false information shared without the intention of actually causing harm.

2:40

Finally, we have the complexities of malinformation, where information

2:45

is exaggerated or conflated to obscure the truth or the narrative.

2:49

This is also where secret or classified information is often shared at a

2:54

strategic time just to influence voters.

2:56

Jonah Maddox: Check out series one, episode one for the fuller explanation

3:00

on this; we'll link it in the show notes. Smera Jayadeva: And there are also a few points to keep in mind when it comes

3:05

to the misuse of data and information.

3:08

For one, any group or individual manipulating information doesn't

3:12

necessarily have to follow a single route.

3:15

Even if they're intentionally planning on manipulating information, one can

3:19

begin by exaggerating historical events and follow it up with intentionally

3:24

false and misleading information only to galvanize voters towards their cause.

3:29

For instance, you know, I mentioned India early on and I was actually

3:32

in India during the elections and there's a good chance that by the

3:35

time this recording is out,

3:38

India is probably still going to be counting the votes.

3:41

But if one were to track the misinformation or disinformation

3:44

campaigns in the country, there's rarely a day without reports of false

3:49

information making rounds on social media platforms or communication channels,

3:54

be it Twitter or X or even WhatsApp.

3:57

In fact, the World Economic Forum said India has the highest risk of

4:01

mis- and disinformation in 2024.

4:05

Jonah Maddox: So, it's rife, and it's relevant, and we're going to deal with it.

4:08

It's probably time we should, uh, bring on our special guest to navigate this.

4:14

TLDR Expert Guest! This month, we are joined by an expert who has worked as an analyst

4:19

within the Defence and Security Research Group at RAND Europe.

4:22

She's led projects which assess the impact of emerging technologies

4:26

on the information environment and worked to identify the

4:28

implications of disinformation and conspiracy theories in Europe.

4:32

That sounds cool. Her research has informed strategy and policy at the UK Home Office, UK Ministry

4:37

of Defence, the European Commission and the United Nations Development Programme.

4:41

From the Turing Centre for Emerging Technology and Security, CETaS,

4:45

we are very happy to welcome research associate Megan Hughes.

4:48

Woo, sounded a bit like the beginning of Blind Date there, where

4:53

Graham kind of brings him on like, "from the Turing Centre for Emerging Technology".

4:58

Hello Megan, I'll let you speak now. Megan Hughes: Hi.

5:02

Thank you so much for having me. Looking forward to hopefully an interesting discussion.

5:06

Definitely. Jonah Maddox: So can you give us a very brief explanation of what CETaS

5:10

is and what a research associate does? Megan Hughes: Sure.

5:12

Yeah. So CETaS is the Centre for Emerging Technology and Security

5:16

at the Alan Turing Institute. And I'm a research associate within the team and we work on policy

5:22

research relating to emerging technology and national security.

5:26

So we look at kind of the implications of emerging technologies and we

5:30

try to advise actors like the government on what they should do in response.

5:34

Jonah Maddox: Okay. So, we've learned sort of quickly in the intro from Smera about

5:38

mis-, dis- and malinformation. But could you tell us a bit more about how it plays out

5:42

during election time? Megan Hughes: Sure. Yeah.

5:44

So I'll kick us off with a kind of traditional influence operation.

5:48

If we look back at the 2016 US presidential election,

5:50

we can see quite a typical example of a state-sponsored influence operation.

5:58

So this was when Russian actors looked to influence US

6:02

voters ahead of the elections. And they did a number of things.

6:06

So it wasn't just a kind of misinformation, disinformation campaign.

6:09

It was much broader than that. So we had things like hack-and-leak techniques,

6:13

where hackers got into the Clinton campaign emails and then shared

6:18

these emails over a period of a few weeks to kind of distract

6:21

from the main campaign messages.

6:24

But specific to misinformation, Russian actors created a network

6:28

of fake accounts, of bots. We're looking at about 50,000 of them, and they were all sharing

6:35

divisive content, fake news stories, reposting hashtags to make them

6:39

go viral, like Hillary for prison.

6:42

That was one of them. And they were also publishing political advertisements, criticizing Clinton.

6:47

So that's looking a few years ago, and that's looking at something that,

6:51

like I said, I'd kind of term that a traditional influence operation.

6:54

When we look at the past few years, and we look at AI examples from elections that have taken

7:00

place since the start of 2023, in research I've been, I've

7:04

been doing, I can talk you through three examples of AI misinformation.

7:09

So you've got AI generated voice clones.

7:12

I don't know if you've, if you saw coverage on the Biden Robocalls.

7:16

So this is where we had deepfake audio

7:19

clips of Joe Biden urging voters not to turn out and vote in

7:24

the New Hampshire primaries. We've also got an example, if we look to Poland in their recent election, the

7:31

opposition party actually published a deepfake audio clip of the prime minister

7:36

reading a set of real leaked clips.

7:38

So you can see how kind of generated voice content we're seeing come up.

7:42

General AI generated content as well.

7:45

Over in the US there are reports of whole news sites that have been generated by

7:49

AI sharing completely fake news stories.

7:52

So that's more text based content that's quite easily shareable.

7:55

And lastly, coming closer to home, looking at the London mayoral elections,

8:00

we saw AI powered bots, that were again, sort of similar to the tactics in the

8:05

Russian operation, circulating hashtags.

8:07

So the hashtag London voter fraud was circulated quite a

8:11

lot ahead of the elections. So those are some examples of techniques and tactics that

8:15

we've seen being employed. Right. And is there any evidence of them having the desired effect?

8:22

So that's really interesting. So when we look at misinformation generally.

8:28

So if we kind of take the AI out of the context, a lot of studies have shown that

8:33

only a small minority of people actually see the majority of misinformation.

8:39

So I think there was a study in 2016 on X, formerly Twitter, and it showed

8:44

that only 1 percent of X users actually were exposed to 80 percent of the

8:50

fake news content on the platform. And if you're exposed to misinformation, it doesn't necessarily

8:57

mean you'll be persuaded by it. So fake news, we know, is more likely to enhance existing views.

9:03

It's not as likely to radically change your behavior, so not as likely to

9:08

kind of influence voting intentions. And studies have quite consistently found that in relation to elections,

9:14

misinformation hasn't meaningfully affected the outcomes of elections.

9:18

And that's because there are loads of factors that contribute

9:21

to someone's voting choices. What we can look at is what's new with AI.

9:26

So looking forward to kind of upcoming elections.

9:29

Well, AI might make a difference in

9:32

the amount of disinformation and misinformation that might be disseminated.

9:37

So it might help people, help actors reach more people.

9:40

It also might help to personalize misinformation.

9:44

So this is called micro targeting and it's a concept where personalized campaigns are

9:48

aimed towards individuals or groups, and it has been shown to be quite effective.

9:54

I think something that's quite relevant is the platforms on which

9:58

people are finding their news.

10:01

So we know that young people between 16 and 24, the majority

10:04

of them find their news online. So I think it's 80 percent find their news online.

10:10

And most of that is through social media. Not to kind of, you know, scare anyone, because it's perfectly easy to

10:16

look at BBC News on social media.

10:18

It doesn't mean that people are just getting all of that news from fake sites.

10:22

But what's important is traditional social media sites use graph models where they

10:29

show you content based on the content that your network, your social network

10:34

is sharing and liking and engaging with.

10:36

When we look at TikTok, which is obviously going to be a big

10:39

player when it comes to sharing information before elections, TikTok

10:44

doesn't use that model so much. So TikTok actually shows you information that comes from

10:50

outside your social network. It actually uses algorithmic recommendations to bring in new content.

10:57

So if we look at kind of what's new with AI in terms of being able

11:00

to personalise disinformation or misinformation and, in

11:05

being able to reach more audiences, could we see more effective use of misinformation on platforms like TikTok?

11:12

Maybe, but that's not to kind of sow worries.
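To make the distinction concrete, here is a minimal illustrative sketch in Python of the two simplified feed models described above: a social-graph feed that only surfaces posts your own network has engaged with, and an interest-based feed that can recommend posts from outside your network. None of this is any platform's real ranking code; the data structures and scoring are invented purely for illustration.

```python
from collections import Counter

def graph_feed(user, follows, engagements, posts):
    """Social-graph model: rank posts by how many accounts the user follows engaged with them."""
    scores = Counter()
    for post_id in posts:
        scores[post_id] = sum(1 for friend in follows[user]
                              if post_id in engagements.get(friend, set()))
    return [p for p, s in scores.most_common() if s > 0]

def interest_feed(user, user_topics, post_topics, posts):
    """Recommendation model: rank all posts, including from strangers, by topic overlap with the user's interests."""
    return sorted(posts, key=lambda p: len(user_topics[user] & post_topics[p]), reverse=True)

# Toy data: the graph feed never surfaces post "p3" (nobody in the network engaged with it),
# while the interest feed can push it to the top purely on topic match.
follows = {"alice": {"bob", "carol"}}
engagements = {"bob": {"p1"}, "carol": {"p1", "p2"}}
user_topics = {"alice": {"elections", "music"}}
post_topics = {"p1": {"sport"}, "p2": {"music"}, "p3": {"elections"}}
posts = ["p1", "p2", "p3"]

print(graph_feed("alice", follows, engagements, posts))        # ['p1', 'p2']
print(interest_feed("alice", user_topics, post_topics, posts)) # ['p2', 'p3', 'p1']
```

The practical difference for misinformation is the one Megan points to: in the first model, what you see is bounded by what your network engages with, while in the second, a post on a topic the system thinks you care about can reach you with no social connection at all.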

11:16

Smera Jayadeva: So would you say TikTok's the answer to echo chambers then?

11:19

We're breaking, we're breaking what, what things were before.

11:22

Megan Hughes: I wouldn't recommend spending all your time looking

11:26

for information on TikTok. I think it, yeah, maybe, maybe it is the answer to echo chambers, but I know that groups

11:31

like, like Meta are exploring changing

11:34

how they're kind of using the graph models, the social graph models.

11:37

So who knows, but I think you're definitely right that echo chambers

11:41

exist on traditional social media sites.

11:43

And we know that people are likely to kind of share things that they agree

11:47

with, with people that agree with them. Right. Smera Jayadeva: So surely voters are used to being sold something when it comes

11:53

to electoral promises and manifestos.

11:56

That's all the basis of electoral campaigning.

11:59

But doesn't that mean we've always been vigilant towards such, you

12:03

know, trends in communication?

12:05

Megan Hughes: Sure. In the interest of talking about a really, you know, timely, hot topic, can

12:12

I suggest we go back to ancient Rome?

12:15

Yeah, you know, just down the road, yeah.

12:19

Just down the road, you know, finding the really relevant facts here.

12:23

But there is an anecdote. There's a point.

12:25

So, so if we go back to the Roman Republic.

12:28

It's facing civil war. Octavian, who is Caesar's adopted son, wants to really get the public on side

12:35

so he can win against Mark Antony, one of Caesar's most trusted advisors.

12:41

So what does he do? He spreads a bunch of rumors that Mark Antony is a drunk.

12:46

And because he's having an affair with Cleopatra, he doesn't have

12:50

any of the traditional Roman values that would make a good leader.

12:53

I hope you see the point now. I'm trying to point out that this is a very, very early

12:59

example of a misinformation, disinformation even, campaign.

13:03

So we can trace this back thousands of years.

13:05

Misinformation is definitely not a new problem.

13:09

It's something that, as you say, we've been dealing with for a while.

13:12

When we look at the impact of new technologies like AI,

13:16

there are some differences. So, you know, I mentioned being able to disseminate misinformation

13:21

more easily and to more, you know, more people, but there's also a

13:25

concept called the liar's dividend. I'm not sure if you've come across this.

13:30

No. So this concept was coined by a couple of US law professors and the concept

13:35

is that people can now claim that true information is false and you can avoid

13:42

accountability by relying on public scepticism and the belief that the

13:46

information environment is completely inundated with false information.

13:50

So that's something that, you know, we might expect to see. We've, we've seen an example of it actually in relation

13:57

to elections in Tamil Nadu in India. A clip came out of a minister accusing his

14:03

own party members of illegal financing, or fraud, basically, and he

14:09

came out and said, No, I dismiss that. That's not true.

14:11

I never said that. But a later analysis of the clip by technical experts found that it's

14:17

quite likely that the clip was authentic.

14:19

So that's one example we've seen.

14:22

We've not seen lots of examples of this, but it's definitely

14:25

something that, you know, there's potential there for it to happen.

14:29

Jonah Maddox: Yeah. So as that begins to happen, people's trust in truth will sort of disappear.

14:33

Bit by bit break down. It's funny, isn't it, that you think of kind of this as a sort of

14:38

highbrow topic, but it's basically just playground tactics, isn't it?

14:42

Completely. Smera Jayadeva: With all of this, how do we authenticate real information?

14:46

I know you said there are a couple of experts, but if there's so much of this

14:49

going around, are there any ways we can,

14:52

you know, try to ascertain the truth, at least for an audience

14:55

that might not have that much time. So is there maybe someone out there doing this work for them?

15:00

Megan Hughes: I think there are a few things. So there's things that platforms can do and there's things that we can do.

15:05

And I think the first piece of advice I'd give is to maintain

15:09

a healthy level of skepticism.

15:12

It's important not to believe all the hype and not to worry too much, because just as you mentioned

15:18

Jonah, if we get kind of really confused about the state of the information

15:22

environment and we think, you know, the waters are completely muddy, we

15:25

can't find true information anywhere. That's not going to help anyone.

15:28

And it creates a kind of sense of, of public anxiety that

15:31

might actually undermine things like real election results.

15:35

In terms of kind of practical things that, that people can do and platforms can do

15:40

as well, we've seen that, uh, pre-bunking is a method that can be quite effective.

15:46

So this is a prevention rather than the cure method where you anticipate

15:50

the use of disinformation and you warn people about it before it spreads and

15:55

you provide factual information on a topic so people are kind of aware that

16:00

disinformation might be coming their way. Jonah Maddox: I read about pre-bunking, that's not a word I'd encountered before.

16:04

Megan Hughes: Yeah, and it's actually been effective. They've, they've done some early studies on climate disinformation.

16:10

Um, and I think that platforms like Meta have actually started using

16:15

pre-bunking techniques online.

16:17

So it's, it's proven kind of effective and platforms are deploying techniques.

16:21

Jonah Maddox: Looking at the stat you gave us about how few people are

16:24

actually exposed to misinformation means that the majority of information

16:28

we're getting is information, and we should be told, yeah, you can

16:31

believe a lot of what you're getting. Is that happening?

16:34

Megan Hughes: I think you're completely right. And it's really important that, you know, we do need to be encouraging

16:40

trust in the information environment.

16:42

So I think when you log on to Facebook, I think in the campaign period if you

16:46

share a post that's to do with a political party, for example, I, if I remember

16:51

rightly, a little comment comes up saying, you know, have you, have you checked this

16:54

source or have you checked the content? And I think that's a great example of something that could be done to just kind

17:00

of make people pause and think, Oh, okay.

17:03

Smera Jayadeva: So on the different methods that we're probably, we can

17:06

use either as an individual or that platforms are taking on, I've also

17:09

heard about the, the Coalition for Content Provenance and Authenticity.

17:14

So essentially content watermarking, is that going to have any real impact?

17:18

What can we see in the future when it comes to C2PA?

17:21

Megan Hughes: I think it's a great question. I think C2PA is

17:25

a step in the right direction.

17:28

So it's a group of organizations that have come together and they have committed

17:32

to developing technical specifications to be able to trace the origin of media.

17:38

And there's lots and lots of ongoing research on watermarking, but there

17:43

are a lot of problems with it. So there's the adoption problem.

17:47

So if one platform adopts a form of watermarking and they're putting

17:52

notices out saying, you know, Oh, this content is AI generated, there might

17:57

be an assumption by users that any content that then isn't watermarked

18:02

is legit. And that might not be strictly true.

18:05

So there's, there's, there's a, there's an adoption problem there.

18:08

And even if watermarking kind of becomes very good, I think we can assume that

18:14

sufficiently capable and sufficiently motivated actors, they'll get around it.

18:19

So it's a step in the right direction, but it won't be a kind of

18:23

solution that solves all of our problems.
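For readers who want a feel for what a provenance scheme does, here is a heavily simplified sketch in Python of the general idea: a publisher attaches a signed manifest to a piece of media, and a verifier later checks that the media still matches it. This is not the C2PA specification or any real library's API; the manifest format and the shared signing key are invented purely for illustration (real schemes use public-key certificates and embed the manifest in the file itself).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-signing-key"  # hypothetical key, stands in for a real certificate chain

def attach_manifest(media_bytes: bytes, source: str) -> dict:
    """Publisher side: record the source and sign a digest of the media."""
    manifest = {"source": source, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Verifier side: reject media that was altered or whose manifest was forged."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["sha256"] == hashlib.sha256(media_bytes).hexdigest())

image = b"...original pixels..."
manifest = attach_manifest(image, source="example-news-org")
print(verify_manifest(image, manifest))            # True: untouched media with a valid manifest
print(verify_manifest(image + b"edit", manifest))  # False: the media no longer matches the manifest
```

The sketch also illustrates the adoption problem Megan raises: verification only tells you something about media that carries a manifest, so content with no manifest at all is simply unknown, not necessarily trustworthy.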

18:26

So Megan, could you tell us about this CETaS report?

18:29

Sure. Yeah. So, so this has been a great project to work on and it's, it's ongoing.

18:34

So we've got a publication coming out soon.

18:37

That's, that's a briefing paper and then a longer form report due out later

18:41

this year, and what we've been looking at is the impact of AI-enabled threats

18:47

to the security of elections.

18:50

And we've been looking at examples of AI misuse from 2023 to date, and the kind

18:57

of takeaway that I'd like listeners to, to think of is that examples are quite

19:02

scarce and where they do exist they're really hyped up by mainstream media.

19:07

The, the risk isn't really in AI use during elections.

19:11

You know, there's, there's a small risk, but the major risk is the heightening

19:16

of public anxiety and the undermining of the general information environment.

19:21

And what we don't want is for people to lose trust in genuine,

19:25

authentic sources and information.

19:27

So that's the kind of key top line I'd want people to take away from our report.

19:31

Jonah Maddox: Yeah. Yeah. That's a really good point. Let's make sure that we're not contributing to the hype about

19:36

misinformation with this podcast. So I suppose that that kind of leads us to any final thoughts from you.

19:44

A concluding statement, if you will. Megan Hughes: Sure.

19:47

I think the, the key message is, you know, misinformation has been

19:51

around for thousands of years. AI is relatively new to us all, but it is just a tool, so people will use it

19:58

for good and for bad, but please don't worry that it's going to hugely impact

20:04

all of the upcoming elections in this very important year for, for democracy.

20:09

There's a lot of hype, but we're yet to see any real evidence that AI has

20:13

actually impacted any election results.

20:15

So just think critically, check your sources.

20:18

Think about the content of news and that's it.

20:21

Smera Jayadeva: All right. So just before we leave, there's one final question.

20:26

Hypothetically, in a world where you are facing off in prime

20:30

ministerial elections in the UK, Megan, should you be elected,

20:35

what legislation would you spearhead?

20:38

Megan Hughes: Oh, That's a really good question.

20:41

I have to really think carefully because there's a lot of public

20:45

accountability with a public podcast.

20:49

I think that the Online Safety Act has made some good steps, but I think I

20:56

would like to see stronger legislation surrounding pornographic deepfakes,

21:01

because, you know, we've spoken about AI in the context of election security,

21:06

but 95 percent of online deepfakes are pornographic material, often of women.

21:13

So, you know, that's a huge problem. I think it got discussed a lot

21:17

with what happened to Taylor Swift, but the kind of conversation spiked

21:21

and then has dropped down a bit. So I think that that's a really important topic that we need to, we need to have

21:26

really strict laws in place to deal with.

21:29

Smera Jayadeva: I mean, that's a great point. I'd, I'd vote for you just on that.

21:34

Jonah Maddox: There's my campaign. We'll be coming to a bit of the deepfake stuff later in the episode.

21:38

Thank you very much, Megan. You've been a wonderful guest.

21:41

We'll let you get back to saving the world.

21:44

Megan Hughes: Thank you very much for having me. This has been lots of fun.

21:51

Smera Jayadeva: Okay, Jonah. So for our second story, I really wanted to talk about robotics.

21:55

Robotics. So I saw a really interesting video the other day from Figure AI about

22:00

their new robot named Figure 01, and OpenAI software has been integral

22:05

to the development of this robot. And the reason why

22:08

I think I was so surprised by it is because of the way in which the

22:13

robot responds to some of the tasks that the person's asking them to do,

22:17

not only in terms of its movement, but also the way the robot spoke.

22:21

I think that was the first time I actually confronted the fact that, you know, this

22:25

isn't something that's, you know, a few decades away, but this is something that

22:29

we're actively working on right now. Jonah Maddox: Yes.

22:31

It's a pretty amazing video. We will link it of course, in the, in the show notes.

22:36

For our listeners that haven't seen it, the launch video for Figure

22:39

01 has someone asking this shiny chrome robot for some food.

22:44

It gives him an apple. And then proceeds to clean up a mess while explaining why it chose the apple because

22:49

it was the only edible thing on the table. I know the task of giving someone an apple doesn't sound hugely impressive,

22:55

but you do need to watch it to see how different it is, at least from how I thought

23:00

humanoid robots were progressing.

23:03

It's mad. Smera Jayadeva: Yeah. And you know, this, this startup Figure AI is backed by some

23:07

of the biggest names in tech. Jeff Bezos, Microsoft; a lot of companies have invested, I think, over

23:12

a billion into the development of this technology and what's key to this new

23:17

shift that's happening is that OpenAI's recent generative AI software has

23:21

been a key part of the entire puzzle.

23:24

It's making the robot more dynamic and it's making that text, that

23:28

natural language speech, a lot more impressive for the general audience.

23:32

And I think it, it really shows how quickly tech has been evolving.

23:36

I mean, if we see, you know, the industrial revolution times of the

23:39

early 1800s, early and mid 1800s to, you know, the quick jumps, the

23:45

rapid jumps that we saw from the year 2000 to now where, you know, we had

23:49

some basic computing and now we have really, really, really smart phones.

23:54

And I just wonder, if we're seeing this right now, what we can expect

23:58

in like the next two or three years? Jonah Maddox: Yeah.

24:01

So are we going to see a massive increase in robots around us now?

24:04

Is, are we prepped for this? Smera Jayadeva: As I said before, you know, generative AI has been

24:08

instrumental to giving that boost to the robotics industry to make it more

24:12

dynamic and respond in real time.

24:15

But if you watch the product videos, it's far from our imagined idea of a

24:20

perfectly mobile, you know, a robot that's able to respond that quickly.

24:25

If you see some of these videos, especially of the ones

24:27

that look like little dogs. Yeah, it's a bit creepy to say the least. But that's just talking about, you

24:33

know, more performance related aspects. I think there's also the general challenges of generative AI, some

24:39

of which we've already covered. Jonah Maddox: Yes, the impact on vulnerable communities.

24:44

Or is it the biases? Or the safety concerns?

24:46

Or the explainability? Smera Jayadeva: Or all?

24:49

Yes. It's pretty much all of that.

24:52

I mean, this isn't to say there aren't great uses for robotics though.

24:56

We can use them to navigate difficult terrains.

24:59

For instance, NASA is working on a robot to navigate celestial bodies.

25:03

So you don't need to put a human at risk on, on the moon instead, a robot

25:08

may be able to walk around and, you know, pick up some space material

25:12

to bring back for research purposes. Yep.

25:15

But it is a giant leap for machines. Jokes aside, there are studies showing that

25:22

there is success with AI and robotics in healthcare for mobility access and so on.

25:29

Interestingly, we can also integrate them into the larger internet of

25:33

things network infrastructure. And this might bring us one step closer to what we envision as smart

25:39

homes and smart cities where all our devices are interconnected.

25:44

And they're perpetually consuming our data about our every

25:47

movement, our every decision. You know where I'm going with this.

25:51

Jonah Maddox: You say it like it's a bad thing, but I feel like I'm still so naive to how

25:56

this data collection really impacts me.

25:59

It's too easy to accept the T&Cs we're bombarded with.

26:03

So what can we expect in the next few months? Smera Jayadeva: So for the next few months, for manufacturers, and this

26:08

ranges from Amazon to Boston Dynamics to Hyundai to Nvidia to Tesla, you

26:13

know, everyone's getting in on it. It's a rather even playing field as of now.

26:17

Jonah Maddox: So if we're imagining a sort of Jetsons-esque future, then presumably

26:21

the production costs need to come down.

26:23

Smera Jayadeva: Well, if we continue on an unregulated path where robots

26:27

are affordable, it would actually come at the simple cost of your

26:31

data, your agency, or even your job.

26:34

Jonah Maddox: Who needs them? Do you think, do you think it is that dire?

26:39

Smera Jayadeva: It is interesting, especially from a market analysis point of view.

26:43

And, you know, if you take the language of these websites, these

26:47

robotics websites, it might lead you to believe we need these machines

26:50

to fill up these jobs and so forth.

26:52

And that we, in fact, are the lazier humans, but that's

26:55

me reading between the lines. But fundamentally, many of

26:59

the repetitive manufacturing jobs which robots could replace are not only very,

27:04

very low paying, but incredibly taxing.

27:07

So if one wanted to upskill and move out of, say, working in a warehouse where they

27:12

have rather repetitive tasks, they might not have the time because they're stuck

27:17

in endless shifts just to make ends meet.

27:19

Thus creating the working poor. Jonah Maddox: Side note.

27:22

Right, or side thought: if you were to lose work like production lines, you could lose

27:27

the creativity that's born from it. Here's an interesting nugget:

27:31

Berry Gordy, who founded Motown, yeah, was inspired by the production line.

27:35

He worked on it, building cars in Detroit, and he thought you could do the same

27:39

with a musician, like bring them in, send them up the production

27:42

line and come out with a hit.

27:44

He even had a quality control system like the car factory did where they would

27:48

make sure each song was like the best it could be before it left the hit factory.

27:52

Even re-recording them with different singers and things like that.

27:55

So yeah, remove all repetitive jobs and we might not get another Motown.

28:00

Smera Jayadeva: Oh, wow. But I mean, are you saying we should continue keeping workers in very

28:05

repetitive factory jobs, Jonah? In case we get another Motown?

28:08

Jonah Maddox: Easy for me to say.

28:11

Yeah. Although I must say I used to be a very unskilled builder's, builder's laborer.

28:17

And that is easily the time that I've been most prolific in making music

28:22

and art and feeling really creative, not quite to Motown standard, but,

28:26

Smera Jayadeva: In all seriousness, there needs to be a lot more analysis

28:29

and review of what's going to happen to the state of our markets and,

28:33

you know, what economic models will look like with greater automation.

28:37

You know, we have a lot of fundamental assumptions about labor costs, about

28:41

knowledge, about information and so forth, but it really needs to, you

28:45

know, get a proper deep dive as we see greater and greater automation.

28:54

Jonah Maddox: Clickbait. I know what you are up to with your tantalizingly open-ended question and

29:00

air of seductive mystery. I thought I was kind of impervious to it until this

29:04

month when I found myself paragraphs deep into an article titled "OpenAI is exploring

29:10

how to responsibly generate AI porn".

29:13

Smera Jayadeva: Let me guess, they're not actually exploring

29:16

how to generate porn at all?

29:19

Jonah Maddox: Basically, you're right. Yes. So what happened was this month OpenAI released draft guidelines for

29:24

how the tech inside ChatGPT should behave.

29:26

And with regards to not safe for work content, it says,

29:30

Basically, we don't do that. However, the article that I read, which was in Wired and also the Guardian, focuses

29:36

on this note lifted from the document, and I quote, we're exploring whether we

29:41

can responsibly provide the ability to generate NSFW content in age appropriate

29:46

contexts through the API and chat GPT.

29:49

We look forward to better understanding user and societal expectations

29:52

of model behavior in this area. Smera Jayadeva: So you

29:55

can kind of see where the article got excited.

29:57

Jonah Maddox: Yes, I can see where they got it from. But, but they were also told by an OpenAI spokesperson that we

30:03

do not have any intention for our models to generate AI porn.

30:06

So this segment is kind of at risk of becoming clickbaity itself.

30:10

Clickbait of a clickbait. Smera Jayadeva: But it does raise some important questions, I think,

30:15

about the future of generative AI and where we need to be more careful.

30:19

The platforms want users to have maximum control, but also don't

30:24

want them to be able to violate laws or other people's rights.

30:27

I think we touched upon it in our series where we looked at deepfakes

30:31

being used for generative porn, and since then there have been the very,

30:35

very public questions about it, the deepfakes of Taylor Swift.

30:37

Jonah Maddox: Yes. Yeah. As Megan touched on earlier in the episode.

30:40

And we'll obviously link the episode where Smera and Jesse talk about that from

30:45

last series as well in the show notes. So a month or so ago, the UK government created a new offense that

30:51

makes it illegal to make sexually explicit deepfakes of over-18s without consent, and OpenAI are

30:58

very clear that they do not want to enable users to create deepfakes, but

31:01

it is happening on some platforms.

31:04

I read an unpleasant article about the rapid rise in the number of schools

31:07

reporting children using AI to create indecent images of other children

31:11

in their school, which is very sad.

31:13

Smera Jayadeva: I know. But while we're talking about

31:15

something within schools, in April we saw the first of what will

31:20

hopefully be a larger crackdown on sex offenders using AI.

31:23

A 48-year-old man from the UK was prosecuted and banned from using

31:29

AI tools after creating more than a thousand indecent images of

31:33

children. Jonah Maddox: Yeah. So we, we need better tech, better regs and a better education

31:38

towards sex and respect in general.

31:40

Aside from the illegal and abusive uses of AI when we're talking about sex,

31:45

I can't see a future where some form of pornography isn't created by AI.

31:50

I imagine it's often the fringe communities that the tech isn't

31:52

specifically made for who improvise to make what they want and end up discovering

31:57

some new use case that no one thought of.

31:59

Surely it's going to play a part somewhere in the future of AI.

32:03

Smera Jayadeva: I would actually be more worried about AI driven porn.

32:06

There is no transparency on the data used to train some of the generative AI models.

32:10

And we also have the problem of poor explainability.

32:13

If we can even say there's any form of explainability.

32:16

In this case, there may be a chance that someone's photographic data

32:21

may have been used to train a model, and maybe somewhere down the

32:24

line there's some gen AI porn which looks oddly familiar to you.

32:28

And I personally do not want to wake up to a future 20 years down the

32:31

line where a photo I uploaded on Facebook, completely non-harmful, ends

32:36

up being part of a training data set that has very unwelcome uses.

32:41

Jonah Maddox: Yeah. And I wonder if there's something in the idea that if AI companies do explore

32:46

the more questionable avenues, the resulting new architecture developed

32:49

could enable people with ulterior motives to jailbreak the system and use it

32:53

for their own even more dubious means. Smera Jayadeva: Oh yeah, definitely.

32:56

I mean, better tech doesn't mean we eradicate crime as much as

33:00

criminal justice AI systems might make you want to believe.

33:04

The more interconnected our networks, I think there are more risks of cyber

33:08

operations, be it data theft, data leaks, or even model replication, where

33:12

they can reproduce some of these models and the outcomes, at the risk of

33:17

the person whose data is being used. Jonah Maddox: Yeah.

33:20

Okay, let's wrap it up there. I suppose, just to bring it full circle, back to

33:25

clickbait and having learned from Megan about being aware of what we read and

33:28

where we get our information, I suppose the message here is to be vigilant.

33:32

Although this clickbaity headline led us down a valid rabbit hole, sometimes you

33:35

could find yourself in a more spurious place. Ew. Think before you click.

33:41

Smera Jayadeva: At least we're on the right track when it comes to the law. It's good to see that, you know, there are active steps being taken to

33:47

make sure that people are protected and that there are court rulings now

33:51

that can be upheld in future cases.

33:53

Hopefully it's not the case, but you know, knowing how the world

33:56

tends to use tech, it wouldn't be surprising if we hear more about this.

34:00

as this technology improves. Jonah Maddox: Yeah, we'll keep you posted.

34:07

Well, that's about it for this month. But before we go, Smera, I want to continue a tradition from the last series,

34:13

and that is our positive news segment.

34:15

So what made you feel optimistic about AI this month?

34:18

Smera Jayadeva: There's been a lot happening, but there's one story I want to focus on.

34:21

It's this big breakthrough with DeepMind's AlphaFold 3, essentially.

34:26

Jonah Maddox: I've heard of it. Smera Jayadeva: So the big breakthrough is that this AI system can now map

34:30

out protein structures quicker than ever, to help find cures for diseases.

34:34

So essentially improve drug discovery. Would you like to know exactly how that works?

34:39

Because I spent some time going into the physics and the biology behind it.

34:42

Jonah Maddox: I absolutely would, because I did read the sort of the

34:45

headline of this story and thought that sounds positive, but then I

34:49

read the rest and understood nothing. So I would love some help there, please.

34:52

Smera Jayadeva: Okay, keep in mind I'm not a doctor by any means.

34:55

If I was, my parents would be so proud of me, but okay.

34:58

Basically, proteins are the workhorses of the cell.

35:00

They're important for everything, and each protein is made up of

35:03

complex amino acid sequences.

35:06

The issue is that these sequences and how they make up the protein

35:10

are governed by these very complex physical and chemical interactions,

35:14

which has meant that humans trying to map it out have taken a lot of time.

35:18

Apparently, it's like a 50-year grand challenge for medicine and biology.

35:22

But now there's a computer that can do it for us.

35:24

And if it means it can map out proteins, it's the future of drug discovery.

35:29

Why, you ask, is it the future of drug discovery?

35:31

It's because drug molecules bind to specific sites on proteins.

35:35

So, if we know where those sites are on a protein to bind the drug molecule to, then

35:39

we find a way to make that drug effective.

35:42

Jonah Maddox: Very nice. Shout out AlphaFold 3. Shout out AlphaFold 3.

35:45

You like it. So, that's it for this month.

35:50

Thank you very much again to Megan Hughes.

35:52

Our excellent guest, thank you to Jesse behind the scenes, thank you to Smera.

35:58

I should also just mention that Smera, this week I watched you

36:01

perform at the Pint of Science event in London, where you were

36:07

performing your imagined future. You came from Mars from the year 2060 or something?

36:12

Smera Jayadeva: Yeah, 2064. Yeah, I came down from Mars.

36:14

It was a very hectic moment of traveling for me.

36:17

I don't usually come back down to terrestrial earth but I luckily got

36:20

the funds from a specific sponsor.

36:23

Jonah Maddox: It was Lidl, right? Yeah, it was really good.

36:25

And yeah, for those interested in that, our YouTube will have the

36:29

Pint of Science in the future. That's Smera.

36:31

Well done. Smera Jayadeva: Thank you for everyone who listened this far and we can't wait

36:35

to see you next month with a new set of stories that we will cover in detail.

36:40

Jonah Maddox: Bye.
