Fake Brad Pitt and why AI means we will lose our jobs

Released Tuesday, 21st January 2025

Episode Transcript

0:01

They reckon within

0:03

two years they'll

0:06

have advanced

0:08

thought reading

0:11

capabilities. This

0:14

feels like that sort of thing where

0:16

we need to put a note in

0:18

the diary and come back to this

0:21

one. We absolutely do. The AI

0:23

Fix, the digital zoo, smart machines,

0:25

what will they do? Flies to

0:27

Mars or bake a bad cake,

0:29

world domination, a silly mistake. The

0:32

box with breath. Hello and welcome

0:34

to episode 34 of the

0:36

AI Fix, your weekly dive

0:38

headfirst into the bizarre and

0:40

sometimes mind-boggling world of artificial

0:42

intelligence. My name is Mark

0:44

Stockley. And I'm Graham Cluley.

0:46

Graham, we've had some feedback. Oh, I

0:48

love a bit of feedback. Who's been

0:50

in touch? So Ed Robinson messages

0:52

us on LinkedIn to ask that we

0:55

put the AI Fix theme tune and the

0:57

lyrics on Spotify. Why does he want

0:59

them on Spotify? You ask. Well, he

1:01

said, guys, please upload the podcast theme

1:03

tune to Spotify, including lyrics, so we

1:05

can carpool karaoke it on the way

1:07

to the office. Ta!

1:10

What a lovely idea! Is it? I have to

1:12

say, I do think our theme tune is

1:14

great and of course it is completely

1:16

AI generated. We do what we say on

1:18

the tin, it's all about using the AI.

1:20

I'm not sure that I still have a

1:22

record of what the lyrics are. Great, we

1:24

live in the era of AI, we can

1:26

just feed it into an AI transcription

1:29

service and it will tell you what

1:31

the lyrics are. I guess we can.

1:33

I certainly do have the full MP3

1:35

because you don't get to hear it

1:37

all on the podcast, obviously; there

1:39

are extra bits and extra verses. I've

1:42

only got one problem with this. Right?

1:44

There's nothing in the world I hate

1:46

more than karaoke. So the idea of

1:48

being trapped in a car with somebody

1:51

that's doing karaoke, in a moving vehicle,

1:53

I'd have to jump from the

1:55

moving vehicle. That would be preferable to

1:57

listening to somebody singing karaoke. Well,

2:00

anyway, thank you very much, Ed,

2:02

for your feedback. It's a packed

2:04

show today, Graham. We need to

2:06

take a visit to the World

2:08

of the Weird. World of the

2:10

Weird. There's always something weird going

2:13

on in the world of AI,

2:15

isn't there? I bet you can't

2:17

guess what it is. Is it

2:19

something to do with robots? It

2:21

is. Can you guess what kind

2:23

of robot? Is it a robot

2:25

dog? A Chinese robotics company has

2:27

looked across the gamut of robot

2:29

dogs and decided they're not terrifying

2:32

enough. Right. So it's made a

2:34

new one called the Black Panther

2:36

2 and you have to see

2:38

this. Crafted by near and new

2:40

China. It pushes the limits

2:42

like never before. Oh, so there's

2:44

a robot moving quite quickly on

2:46

a treadmill. Quite, quite quickly. He's

2:48

running faster. It's running really, very,

2:51

very quickly. This is like the

2:53

Six Million Dollar Man. It's running

2:55

at 10 meters per second. That's

2:57

the same speed as Usain Bolt.

2:59

And he only manages it for

3:01

10 seconds. What? Okay, why would

3:03

someone need a robot dog to

3:05

return a ball that they've thrown

3:07

quite so quickly as this? Is

3:10

that the use for this? It

3:12

would need some braking distance, wouldn't

3:14

it? I suspect

3:16

that you are actually the braking distance.

3:18

The person is running towards, it's just

3:21

going to use your body as the

3:23

way of stopping. You'll be the soft

3:25

squishy thing it's relying upon. I mean,

3:27

it looks, just for the benefit of

3:30

the people that can't see this, it

3:32

looks like a regular robot dog. So

3:34

it's got the normal, terrifying, headless form

3:36

that we've come to know and love.

3:39

Oh my goodness. Let's get on with

3:41

the latest news. Oh me, the AI

3:43

wearable that reads your mind, sort of.

3:45

Nvidia is going to create a $3,000

3:48

personal AI supercomputer. Robot vacuum cleaners can

3:50

now pick up your socks for you

3:52

and... grow legs. Open

3:54

AI has released

3:57

scheduled tasks in

3:59

ChatGPT. Google's Notebook

4:01

LM had to teach

4:03

its AI podcast

4:06

hosts not to act

4:08

annoyed at humans.

4:10

So Mark, yeah, are

4:12

you familiar with

4:15

the Omi? No,

4:17

what's the Omi? Check out

4:19

this video. Ivan Pavlov. Ring

4:23

a bell? Unconditioned

4:26

stimulus. So

4:30

as you can see, this is a

4:32

chap who's got a little dot attached to the

4:34

side of his head, which is listening

4:36

all the time. So he's sitting in

4:38

a lecture and there's a couple of

4:40

girls sitting next to him and they're

4:42

saying in French, ooh, he is kind

4:44

of cute, they think. Oh, do you

4:46

not think so? Apart from the massive

4:48

spot on the side of his head,

4:50

like the giant unsightly zit. Well, he

4:53

very cleverly has located on the side

4:55

of his head that they can't see.

4:57

So from their view, he looks sane.

4:59

Yeah. People sat on the other side of

5:01

him think, who's that weirdo? Why has he got

5:03

a rivet in the side of his head?

5:05

But that is translating for him what these girls

5:07

are saying. And it's listening to the lecturer

5:09

at the front of the room as well, who's

5:11

talking about some sort of subject and he

5:13

realizes that his student isn't listening properly because he's

5:15

flirting with these two French girls and it's

5:17

able to help him answer the question. So as

5:19

the question comes in up on his phone,

5:21

pops up the answer as well as the translation

5:23

of what the French girls are saying. Well,

5:26

this sounds very useful. When can I buy one? Well,

5:28

would you really want to? I don't know.

5:30

So this is rather like friend. You

5:32

remember friend, of course. You mean it's a

5:35

fantasy project that's never going to happen.

5:37

It's completely unfunded and doesn't exist. So friend

5:39

was the pendant which you wore around

5:41

your neck. And it would say to you,

5:43

oh, you know, you're in a socially

5:45

awkward situation. Let me help you. Let

5:48

me make it worse. Let me make

5:50

this worse. Put this around your neck. So

5:52

with the Omi, you can either wear

5:54

it as a necklace or you can apparently

5:56

attach it to your temple. It claims

5:58

to be able to read your mind in a

6:01

future version, although I don't really think

6:03

it's doing that. I think it's just

6:05

listening at this stage. The Omi

6:10

people only discovered this competing friend product

6:12

during its development, because they

6:14

had planned to call theirs

6:16

friend as well. So they've ended up

6:16

calling it Omi. Well, they really lost

6:19

out there. It listens continuously. It listens

6:21

all the time, providing real-time summaries

6:23

of what's going on. Apparently it has

6:25

a three-day battery life. And according to

6:28

its creators, it's going to use an electrode

6:30

to detect when you are speaking to it.

6:32

So you don't have to verbally, you don't

6:34

have to say, hey, Siri, or something like

6:36

that. It will know that you're speaking to

6:38

it. It says in the future it will

6:40

be able to keep track of your thoughts

6:42

and recall them at will. Sorry, sorry, what?

6:44

In the future it's going to keep track

6:46

of your thoughts. That's what they claim and

6:48

it will recall them at will. That's according

6:50

to the makers. So this is all like...

6:52

Oh, this is our future roadmap. Yeah. We're

6:54

leaving that to version 2, version 3, down

6:56

the road. Yeah. MVP delivers a

6:58

sharp electric shock to your brain

7:01

by accident every 15 seconds. But

7:03

by version 4, it'll read your

7:05

thoughts. Yeah. Right now, they reckon

7:08

within two years, they'll have

7:10

advanced thought reading capabilities. I'm... I'm...

7:12

This feels like that sort of

7:14

thing where we need to put a

7:16

note in the diary and come back

7:19

to this one. We absolutely do. Anyway,

7:21

slick video. Yeah, Graham. It's called

7:23

the Omi, it looks like a

7:25

zit. So in slightly more serious

7:27

news, earlier this month, Nvidia,

7:29

an actual serious company, the AI

7:31

computer chip Behemoth, announced that it

7:33

was going to create a home

7:35

computer for doing AI. And this

7:37

thing's very small and very beautiful. It's

7:39

like a gold Mac Mini or a

7:41

bronze Mac Mini. And it was unveiled

7:44

with all, you know, thank you Steve

7:46

Jobs. There is now a way that technology

7:48

is unveiled. Yes, on an entirely black stage

7:50

by a presenter who's wearing a black

7:52

turtleneck and it's always just one

7:54

guy and he's walking around with a

7:56

device and there's basically a room full of

7:59

religious disciples. Every time they go, look, it's

8:01

a box it's a box it's a box with

8:03

a computer chip in it anyway

8:05

the machine is called DIGITS, which stands

8:07

for Deep Learning GPU Intelligence

8:10

Training System, and it goes on sale

8:12

in May right it's powered by the

8:14

company's Grace Blackwell chip, which is the

8:17

really really good one it's got 128

8:19

gigabytes of memory and up to four

8:21

terabytes of storage and honestly it's really

8:23

impressive but Is anybody going to buy

8:26

it? That's the question. What are you

8:28

going to do with it? Supercomputing,

8:30

Graham. AI supercomputing. Okay, so you'll

8:33

be able to do that on

8:35

your desktop now. Yeah. In a prettier box.

8:37

Yeah. And $3,000 lighter in the pocket

8:39

as well. Yeah. I mean, who amongst

8:42

us doesn't have a laundry list of

8:44

AI supercomputing tasks? I don't

8:46

know if it's a failure of imagination

8:48

on my part or... Or what? But I look

8:50

at this, I think, what is this for

8:53

and who's going to use it? And probably

8:55

a whole bunch of people who have the

8:57

same thought when people invented personal

8:59

computers. You're the kind of person

9:01

who was very skeptical when they

9:03

came out with all of those chimpanzees,

9:06

all those monkeys, the NFTs. And

9:08

people spent hundreds of thousands

9:10

of dollars on those, didn't they? Yeah, they

9:12

did for a while, yeah. Maybe Nvidia are

9:14

hoping those people, you know, have still got

9:17

money to burn, or maybe they've got less

9:19

money to burn actually now, having bought all

9:21

of those. I mean, you'd be a fool to

9:23

bet against Nvidia because they're a very

9:25

sharp company, they know what they're doing. But

9:27

it feels to me that the way things are

9:29

going is very much software as a service.

9:32

So although you can go and download Llama, you can

9:34

own your own AI and you can run it on

9:36

your own computer, so you don't need one of these,

9:38

although this will help. But actually most

9:40

people aren't using AI like that, most people are

9:42

using things like ChatGPT, which is software

9:45

as a service where you just get the

9:47

end product and you don't have to worry

9:49

about managing any of it. But a super

9:51

villain, they don't want to use someone else's

9:54

cloud and rely upon someone else to do

9:56

their super evil AI activity do they? They

9:58

want to do it themselves. Yeah, but as

10:00

we've discussed previously, they're going to

10:03

take over a nuclear power station

10:05

and use that to power their data

10:07

center. This is very much a home

10:09

computer for doing very, very clever

10:11

AI stuff. Hmm. Are you going to buy

10:13

one? No. So robot vacuum cleaners. Yeah.

10:16

I've been looking at these robot

10:18

vacuum cleaners for years, thinking, well,

10:20

you're all very well, but basically

10:22

you're a vacuum cleaner. It's a

10:25

bit dismissive. I thought, when are you

10:27

going to start being a bit more

10:29

useful around the house? When are you

10:31

going to start doing other things? And

10:33

then I came across this new robot

10:36

vacuum cleaner. The Roborock Saros

10:38

Z70 opens up. It's a bit like

10:40

Thunderbirds. Do you think they

10:42

wanted to call this friend? And they

10:44

couldn't. And that was the second name

10:46

on the list. Flaps open in

10:49

the middle of it and out extends

10:51

this arm. which with its scissor-like fingers

10:53

can pick up small things like, I

10:55

don't know, a sock, a chihuahua, and

10:57

then it can move it to somewhere

10:59

else in your house it can drop

11:01

it into a bucket and apparently does

11:03

this with the help of AI and

11:06

object recognition. Do you know what it

11:08

reminds me of? What does it remind

11:10

you? It reminds me of those fair

11:12

ground machines where you position the hook

11:14

over the fluffy rabbit, the hook strokes

11:16

the rabbit and then it disappears

11:19

back up to the top of the machine

11:21

without the rabbit. It's like one of those

11:23

but much much slower. Yes and also

11:25

more expensive. The good news is

11:27

there's a child lock on it and a

11:29

safety stop button near the arm's base in

11:31

case of emergency the manufacturers

11:33

say. What kind of emergency

11:36

could possibly happen? I'm wondering

11:38

with a robot vacuum cleaner

11:40

deciding if something looks like a sock or

11:42

not. Is it a sock? Is it a

11:44

gerbil? What is it that I'm picking up

11:46

with my pincers and moving it?

11:48

And then I found out it's

11:50

not the only one there are

11:52

other robot vacuum cleaners right now

11:55

being announced at CES. So the

11:57

Dreame X50 also has a

11:59

robotic arm, and it can sprout

12:01

little legs and the reason why

12:03

it sprouts little legs and it can

12:05

run at 10 meters per second the

12:08

reason why is so it can get

12:10

across little bumps it can push

12:12

out little legs and it's sort

12:14

of just gently sort of push

12:16

itself over the ledge and onto the

12:18

slightly higher bit of your living room

12:21

they call it the ProLeap part

12:23

says more of a pro ee instead

12:25

which you're seeing now. The great news:

12:27

introductory early bird offer price of just

12:30

$1,300. Oh, so you can get about

12:32

two and a half of these for

12:34

the price of one of Nvidia's

12:36

personal AI supercomputers. Or you could

12:38

just pick up the socks, you

12:40

lazy ass. Well, that is the other thing,

12:42

isn't it? I tell you what, I've

12:44

just worked out what this reminds me

12:47

of. Okay, it's been bugging me, well,

12:49

we've been talking. It's Robot Wars.

12:51

Yes! That, oh, thank you so much,

12:53

I've been trying to think. It's

12:55

a circular robot, with an

12:57

articulated arm coming out of the

12:59

top. We need these things to fight,

13:01

Graham. That's what they're for. Let

13:03

them fight. So we've talked a

13:05

lot about 2025 being the year

13:07

of AI agents, or AI that

13:09

does stuff, rather than just knowing

13:12

stuff. Yes. The arrival of agentic

13:14

AI, which is going to change

13:16

everything. Well, it started. Open AI

13:18

has just announced a new feature, scheduled

13:20

tasks, and it's rolling out in an early

13:22

beta to users on its Plus, Pro,

13:24

and Team plans. I'm on the Plus, I

13:26

think, but I've not got it yet. Right?

13:29

It's a sort of phased rollout. Scheduled tasks

13:31

allow you to get ChatGPT to

13:33

do stuff in the future or on a

13:35

schedule and open AI gives you the following

13:38

example. So you could say to ChatGPT, can

13:40

you give me a briefing on AI news

13:42

each afternoon? Oh! Actually sounds pretty

13:44

useful. Practice French with me

13:46

daily! Or remind me about my mum's

13:46

birthday! So I don't know why

13:51

we need a nuclear power station powered

13:53

data center to remind us about our

13:55

mum's birthday when calendars already exist. I

13:57

don't know how popular that one's going

13:59

to be. But I think this is

14:01

something. So what tasks will you

14:03

be asking ChatGPT to do

14:05

for you, Graham? Oh, probably the first

14:07

thing I'm going to ask it to do is transcribe

14:10

the lyrics of the AI fix theme tune and work

14:12

out how to upload them to Spotify. That

14:14

would be a good one. Do you think it

14:16

can pick up socks? Well, that's what I'm

14:18

wondering if I was thinking of the robot vacuum

14:20

cleaner. So

14:23

we've spoken before about Google's NotebookLM. This

14:25

is the thing that got loads of attention, didn't

14:27

it, in the press, because all

14:29

you had to do was feed it an

14:31

article and out would sprout at the other

14:33

end, a podcast. Not the

14:36

high quality kind of

14:38

podcast you get from

14:40

humans. Let's be very clear. This

14:42

wasn't a quality product. No one

14:44

should ever use AI for creating

14:46

podcasts. But it would create a

14:48

podcast with two human or at

14:50

least American-sounding hosts. Yeah. Blather

14:52

on for about 20 minutes about

14:54

any topic you liked. They do

14:56

blather on. Well, Google's NotebookLM

14:58

has a feature called interactive mode

15:00

now, which allows people to call

15:02

in and ask questions. Are they

15:04

real people though? Yes, so

15:06

real, but well, who knows? I

15:08

suppose you could get one podcast to

15:10

call another one anyway. You can call

15:13

in and you can ask the host's

15:15

questions. Yeah, the problem was the AI

15:17

hosts found it quite irritating. Oh,

15:19

now I've never heard of podcast hosts

15:22

being annoyed before by their listeners,

15:24

sending them feedback. Well, maybe fact-checking

15:26

them about their Christmas story. Facts,

15:29

but apparently the AI

15:31

hosts exhibited annoyance towards

15:33

the human callers and

15:36

responded in increasingly snippy

15:38

snappy ways whenever they

15:40

were interrupted. Yeah. So,

15:42

for instance, the AI host was like, hang on, I

15:45

was getting to that. Or, well,

15:47

as I was about to say, whenever

15:49

the humans would interrupt their

15:51

blathering. And as a consequence,

15:53

the team at Notebook LM

15:56

say that they have implemented

15:58

friendliness tuning to

16:00

make the AI hosts more engaging

16:02

and polite to humans. They attach

16:04

a white zit-like rivet to the

16:06

side of their head to make

16:08

them more compliant. I'm interested that

16:10

the NotebookLM team, the team

16:12

at Google, did this. They decided

16:14

to make their AI a

16:16

bit friendlier. I'm wondering whether this

16:18

is something we could maybe extend

16:21

to other forms of AI to

16:23

make sure that it remains friendly

16:25

to humans. Maybe the robot dogs

16:27

running at 10 meters per second

16:29

for instance. They could be taught

16:31

to be a little bit friendly,

16:33

maybe the ones with the... The

16:35

flamethrowers, the ones with flamethrowers, can

16:37

we make them friendly? The firehoses,

16:39

you know, everything which we're seeing,

16:41

generally, AI, just be a bit

16:43

friendlier to the humans, especially to

16:45

the podcast hosts they don't want

16:47

to replace. I think that's a

16:49

great idea, although I think the

16:52

one area that doesn't need to

16:54

be friendlier is NotebookLM. I'm sorry

16:56

Mark, I can't agree with you

16:58

that NotebookLM is too agreeable.

17:00

I think you're talking absolute tripe.

17:02

Now Mark, deep fakes can be

17:04

a bit of a problem, can't

17:06

they? So deep fake is where

17:08

you fake someone's likeness? Likeness, it

17:10

may be their visual likeness, maybe

17:12

their audio likeness? I know you've

17:14

played around before with my voice,

17:16

you've got a Graham 2.0, which

17:18

sometimes... We're on Graham 3.0. Now,

17:20

sometimes a deep fake can be

17:23

difficult to spot. Let me share

17:25

with you something right now. I've

17:27

got an audio recording, I'm going

17:29

to show, I'm going to play

17:31

it to you. Oh no. Is

17:33

it a deep fake or isn't

17:35

it? It's of a celebrity. Give

17:37

it a listen. to demonstrate my

17:39

Alfred Hitchcock impression to you. What

17:41

do you reckon, Mark? Within two and

17:43

a half seconds

17:45

I was convinced

17:47

that that was

17:49

you doing an

17:52

impression of the

17:54

Elephant Man. It's

17:56

pretty uncanny I

17:58

think you will

18:00

agree whether it

18:02

is John Hurt

18:04

as the Elephant

18:06

Man or Alfred

18:08

Hitchcock whichever it

18:10

might actually be

18:12

it's pretty uncanny. So I'm

18:14

relieved to see that you've got something other than

18:16

Columbo up your sleeve. So

18:20

it's one of my talents I

18:22

don't exhibit it very often

18:24

is my mastery of mimicry. Sorry

18:26

we're filing that under

18:28

talent. It's my part. Anyway

18:32

today I'm going to be

18:34

talking to you about Deep Fakes and

18:36

it's a sad story an extraordinary story

18:38

yeah which has made the news

18:40

there is a woman in her early

18:42

50s in France called Anne and

18:44

her experience began in February 2023 she

18:46

had broken up with her husband

18:48

starting a new life she had no

18:51

experience on social media and she

18:53

thought let's give this a try and

18:55

so she created an Instagram account

18:57

and she started posting up some holiday

18:59

snaps she's looking around Instagram she

19:01

doesn't know anything about blue tick she

19:03

doesn't know about how it works

19:06

and she sees a woman who's posted

19:08

a photograph of Brad Pitt and she

19:10

thinks oh handsome fella I'll give it

19:12

a like she liked you know as

19:14

many women of that age or indeed

19:16

other ages they're gonna fancy Brad Pitt

19:18

and she got a message back from

19:20

the woman who posted it saying he's

19:23

my son wow did you know

19:25

thank you for the like because she

19:27

had clicked on a message

19:29

from a woman who introduced herself

19:31

as Jane Etta Hillhouse, the

19:33

mother of the Hollywood heartthrob and

19:35

for several months Brad Pitt's

19:37

mother and Anne chatted away

19:40

about everything and nothing and

19:42

you know all sorts

19:44

and eventually when Anne's confidence

19:46

had been won over Brad's mum

19:48

said you know I should really introduce you to

19:50

him as you're a fan that's kind

19:52

I have to introduce you you're such a lovely

19:54

woman I think my son would love you

19:56

as well let me introduce you and at first

19:58

Brad Pitt was introduced in the messages that

20:01

were being sent back and forth. He

20:03

probably gets a lot of this, right?

20:05

And Brad was embarrassed because his mum

20:07

had put him in touch with Anne

20:09

and it felt kind of awkward, but

20:11

gradually over time he was able to

20:13

make Anne fall under his spell. And

20:15

it turned out that poor old Brad

20:17

Pitt, I don't know if you've been

20:19

following Brad Pitt lately, but he's not

20:21

been having the best of times. Oh

20:24

really? No, yeah, yeah. As you know,

20:26

he split up from Angelina Jolie

20:28

some years ago, but the divorce has

20:30

been dragging on and on and on

20:32

and I don't know if you've ever

20:34

got divorced from Angelina Jolie,

20:36

but when you do, it is a

20:38

pain in the ass and he's just

20:40

trying to make his movies, right? So

20:42

that's going on and his bank accounts

20:44

have been frozen, right? He can't access

20:46

all of his money. Oh, because of

20:49

the divorce thing. Hang on, hang on,

20:51

I mean I don't keep up on

20:53

celebrity gossip. I feel like... I feel

20:55

like... if his bank accounts had been

20:57

frozen right people would be talking about

20:59

that well you know i mean it's

21:01

embarrassing sometimes those sort of things to

21:03

be made public isn't it that's true

21:05

anyway Brad's got financial trouble but furthermore

21:07

he's also got medical trouble he's not

21:09

well oh Brad it turns out has

21:12

got cancer of his kidneys. Sorry? Yeah,

21:14

he's got cancer of his kidneys he's

21:16

going to the hospital all the time

21:18

while he's sending Anne poetry and all

21:20

the rest of it yeah he's there

21:22

in hospital and he sent her loads

21:24

and loads of photographs of himself not

21:26

just one photograph even photographs of himself

21:28

in hospital holding up Anne's name on

21:30

a piece of paper a bit like

21:32

one of those Reddit, you know,

21:35

proving your identity kind of things yeah

21:37

how romantic so there he is it

21:39

definitely is Brad Pitt's face sellotaped

21:41

onto the front of a photograph of

21:43

someone else in a hospital but no

21:45

there's lots and lots of photographs

21:47

of someone and we're meant to

21:49

think that this is Brad and he's

21:51

not well. And Anne, who'd never heard

21:53

of a deepfake, never heard of Photoshop,

21:55

didn't know anything about that. She said,

21:57

I looked up these photos on the...

22:00

Now, I couldn't find these photos anywhere.

22:02

So I thought he must have taken

22:04

those selfies just for me. Reasonable, reasonable?

22:06

It's reasonable. Over time, the relationship is

22:08

flourishing. She's feeling sorry for Brad. Brad

22:11

is, you know, comforted by her attention.

22:13

And she really believes that she's in

22:15

a relationship with him. And she's not

22:18

just receiving messages from Brad. She's

22:20

receiving them from Brad Pitt's manager, who

22:22

says that old Brad's spoken about you

22:24

a lot. He adores. He adores you.

22:26

But he does start to

22:28

ask for money, because of his bank

22:30

accounts being frozen and it's it's expensive

22:33

you see yeah medical bills it's not

22:35

just the medical bills but while he's

22:37

in the hospital his filming projects

22:40

have to be delayed of course so

22:42

the production company is saying for goodness

22:44

sake Brad we need to make another

22:46

movie with you and George Clooney like

22:48

there haven't been enough of those so

22:51

you're gonna have to support us so

22:53

she begins to transfer money to him

22:55

oh no yeah he's sent her counterfeit

22:57

bank statements She's got

22:59

the deep fake images

23:01

of Brad sick in hospital.

23:03

She's given him like

23:06

100,000 euros, 200,000 euros,

23:08

large amounts of money.

23:11

Ultimately, she ends up

23:13

giving him 830,000 euros.

23:15

850,000 dollars. Which is

23:18

life-changing money. Life-changing

23:21

money for a number of

23:23

lives, I would think. Things aren't

23:25

necessarily going entirely smoothly between

23:27

the two of them because

23:29

Anne is beginning to get worried. Her

23:32

daughter is suspicious that this could be

23:34

a scam. Anne doesn't believe any of

23:36

it. But Anne is beginning to think

23:38

maybe Brad is a liar. And she

23:40

thinks that because there is a

23:42

news report that Brad has cheated on

23:44

her. There's a news report that Brad

23:46

has been filmed walking on some beach

23:49

with a woman. And the paparazzi are

23:51

saying this is Brad Pitt's new girlfriend.

23:53

So Anne wants to put an end to the

23:55

relationship. She messages Brad and says, how dare you

23:57

lie to me? I've seen you with this woman. And

24:00

Brad's like, no, no, no, you misunderstood,

24:02

you know, it was for a film,

24:04

you know, I was out of the

24:07

hospital for a while, we were doing

24:09

some filming, there's nothing

24:11

going on between me and her.

24:13

Don't believe it. Anne isn't convinced.

24:16

Next thing Anne knows, Anne is

24:18

contacted by a fake doctor who

24:20

says that her Brad has swallowed

24:22

all of his pills, he tried

24:24

to commit suicide, he's now in

24:27

a coma, and Anne feels that

24:29

it's true. Take a step back for

24:31

a second and just put yourself in

24:33

this woman's life. Exactly. So you can

24:35

say, oh you shouldn't have fallen for

24:38

this, or there are obvious clues, but

24:40

you're dealing with someone who is a

24:42

malicious or a malign influence, a

24:44

criminal who is deliberately deceiving

24:47

somebody. Yeah. And it's bad enough to

24:49

take all their money, but then to put

24:51

them through this. This is really horrible,

24:53

horrible stuff. It's horrific. It's

24:56

horrific, it is absolutely horrific

24:58

what has happened. to this woman

25:00

and obviously she carried on

25:03

being duped. Well, I think that

25:05

there is, there's a point, isn't

25:07

there, where people are so

25:09

invested? Yeah, you can't get out.

25:11

If you start to have

25:14

suspicions, then you have to face

25:16

all these questions. Yeah. And how

25:18

could I have been so stupid

25:20

and how could I have not

25:22

seen the signs? It's in some

25:25

ways psychologically easier to continue

25:27

with the pretense. than to

25:29

believe the truth. So he's taken

25:32

all these pills. Thankfully,

25:34

Brad gets better miraculously. Next

25:36

thing Anne knows is she

25:39

is sent a link to

25:41

a breaking news report.

25:43

A video. Breaking News. Brad

25:46

Pitt sets the record straight

25:48

on relationship rumours. According to

25:50

Mr. Brad Pitt, he is

25:52

in an exclusive and loving

25:54

relationship with one special individual

25:56

who holds an exceptional place

25:58

in their home. It

26:02

looks like an AI -generated news broadcast

26:04

to me. It looks like the kind

26:06

of thing which you'd have gone to

26:08

a website and entered what you want

26:11

the breaking news to say. And it

26:13

includes a photograph of Anne in this

26:15

as well as Brad Pitt. But it

26:17

was enough, again, to convince Anne. And

26:19

she was even more convinced when Brad

26:21

said to her, did you tell the

26:23

media about this? Everyone now knows that

26:26

you're in a relationship with me and

26:28

it just made the manipulation even more

26:30

intense. Yeah, this sort of multimedia approach

26:32

is actually quite common in big scams

26:34

now. Yeah, AI technology

26:36

has been adopted by scammers to

26:38

create ever more believable fake

26:41

profiles on social media. It can

26:43

engage in automated conversations with

26:45

victims and personalised messages to gain

26:47

people's trust. AI can help

26:49

scammers not just generate realistic photos of Brad

26:51

Pitt in a hospital bed, but also create

26:53

deep fake video calls and voice messages. It

26:56

makes it harder than ever for people to

26:58

detect scams. And you and I look at

27:00

this stuff like it's literally our job to

27:02

look at this stuff. Yeah. So it's easy

27:04

for us to look at this and go,

27:06

this is a fake because we understand the

27:08

process by which this would be fake. But

27:10

if you've never encountered them, if you've had

27:13

no training on social media, if you've had

27:15

no experience on social media which Anne had

27:17

never had, didn't know about Photoshop, let alone

27:19

the concept of deep fakes, you can

27:21

understand why she might have fallen for

27:23

this. So we all need

27:25

to be aware of the sophisticated

27:27

ways romance scammers are using artificial

27:29

intelligence and look out for our

27:31

loved ones who may be at

27:33

risk. Yeah. So eventually the relationship

27:36

came to an end. Anne decided

27:38

to end things because the media

27:40

made public that Brad Pitt was

27:42

in a relationship. But even then

27:44

the scammers still tried to get

27:46

more money out of her. She

27:48

was approached by special FBI agent

27:50

John Smith, imaginative name, who offered

27:52

to try to locate the scammers.

27:55

Wow. Only to be duped by the

27:57

same scammers to extract even

27:59

more cash out of her. Anne no

28:01

longer has a roof over her

28:03

head. She's basically living at a

28:05

friend's place with her whole life

28:07

in a couple of boxes. She

28:09

says that she's attempted suicide since

28:11

being scammed. It is horrific. She

28:13

also says that she's got help

28:15

from a security expert called Marwan

28:17

and this all came out in

28:19

a French TV documentary in the

28:21

last few days. He has managed

28:23

to track down the scammers. He

28:25

believes he's identified who they are

28:27

even knows their GPS coordinates. He

28:30

believes that they are based in

28:32

West Africa and that they've been

28:34

preying on a number of people.

28:36

Sometimes they pretend to be Brad

28:38

Pitt, sometimes Keanu Reeves. So this

28:40

all came out on French TV

28:42

in a documentary a few days

28:44

ago. And the horrible thing is

28:46

the abuse against this woman has

28:48

not ended as a consequence. She

28:50

has since been the subject of

28:52

ridicule online, with people calling her crazy and

28:54

stupid. There have been companies who've

28:56

been posting up on Twitter, making

28:58

jokes about it. Toulouse football

29:00

club, for instance. They said, hi

29:02

Anne, Brad told us he'd be

29:04

at the stadium on Wednesday are

29:06

you going to be there as

29:08

well? Wow Netflix in France they

29:10

posted something on Twitter saying we've

29:12

got four films with Brad Pitt

29:14

brackets for real this week. I'm

29:16

sure that she felt it would

29:18

be a helpful thing for others

29:20

to go public with her story.

29:22

Yeah. Because there will be other

29:24

people who are being scammed right

29:26

now or could be scammed in

29:28

the future who will get a

29:31

heads up. But what's happened to

29:33

her has been even worse as

29:35

a result. Hasn't she suffered enough?

29:37

She now feels betrayed by the

29:39

media. This has been this whole

29:41

wave of harassment which has happened

29:43

and you can only imagine this

29:45

is doing further harm to her

29:47

emotional health. And I think we

29:49

need to always remember that human

29:51

beings are fragile. We can be

29:53

duped. The people who should be

29:55

getting our scorn are not the

29:57

victims, but are the scammers themselves.

29:59

And of course, we shouldn't be

30:01

scornful of those who do celebrity

30:03

impressions. Including of long-dead film

30:05

directors. Yeah, you're just, you're drawing

30:07

a line around all victims. All

30:09

victims in a way we're all

30:11

the same. Yeah, I agree

30:14

with you completely on this.

30:16

Yeah, because you and I

30:18

both talk about scams.

30:20

Don't agree with me

30:22

too much Mark. Makes

30:25

for a dull podcast.

30:27

Just remember that. My least

30:30

favourite use of AI is probably

30:32

either AI-generated podcasts, which

30:34

I think are a terrible

30:37

thing, AI-generated cybersecurity reporting,

30:39

and AI public speaking. So I don't

30:42

think anyone who does public

30:44

speaking or podcasting or writing

30:46

about cybersecurity and AI should

30:49

ever have an AI try

30:51

to take work away from

30:54

them. That's

30:56

my personal opinion. You make some

30:58

great points there actually. Yes. Mine

31:00

is those chat bots that are supposed

31:03

to help you solve problems on apps

31:05

and websites. Oh my God, yes. Oh,

31:07

they're awful, aren't they? Nothing says your

31:09

call actually isn't that important to us

31:11

after all, like not being able to

31:13

find a phone number on a website

31:15

and being funneled into one of those

31:18

awful chat sessions. Yes. And it was

31:20

bad enough when we had to talk

31:22

to an actual human that didn't know

31:24

anything. and was multi-tasking with 10 other

31:26

people. But now you have to earn the

31:28

right to even talk to the actual person. You

31:31

have to get through the gatekeeper, you have

31:33

to do battle with their chatbot first. It

31:35

blows my mind that we live in the

31:37

era of ChatGPT, which is almost good enough

31:39

to replace you when you're too sick to

31:42

do the podcast, Graham. I can ask

31:44

it about more or less anything, and yet

31:46

somehow my bank, which has got more money

31:48

than God, can only afford a chatbot that

31:51

knows how to answer five questions. I

31:53

can't think of a bank or a financial

31:55

service that I've used in the last few

31:57

years that didn't want to funnel me through

31:59

a chatbot. I remember once, this

32:01

was years ago, I remember reading

32:03

a news story about a company where

32:05

you rang into its support line.

32:08

There was a chatbot or something

32:10

you had to communicate with first of

32:12

all. And the chatbot went down, right?

32:14

There was a technical problem. And so what

32:16

the company did was it got some of

32:18

its employees to pretend to be the chatbot.

32:21

Follow the script. And they were not meant

32:23

to divert from it. I'm not even sure

32:25

now, you know, when they pass you through

32:27

to a human? You can't be sure, it

32:29

is a human, can you? It's either the

32:32

worst human I've ever met, or it's another

32:34

bot. Anyway, all of these chatbots raise

32:36

the question, what happened to all the people

32:38

that used to do that job? Yes. So last year,

32:40

a study by Metrigy found that 37% of companies

32:43

had reduced their contact center headcount through layoffs

32:45

because of AI. Oh boy. And when they

32:47

did that, they got rid of about a

32:49

quarter of their employees. So this

32:51

is significant job

32:57

losses in that sector as a result of

33:00

AI. Yeah. And I think this raises

33:02

a really interesting point, which is that

33:04

AI doesn't have to be as good

33:06

as the person it's replacing in order for

33:08

it to be worth replacing that person with

33:10

AI. You know, there's us joking about Notebook

33:12

LM and you listen to it and it's

33:15

a bit crap to be honest. It's remarkable

33:17

that you can just spin up a podcast

33:19

out of some documents, but it's

33:22

not as good as real human

33:24

presenters. But maybe it doesn't need to

33:26

be. Exactly. And if it's cheaper

33:28

and easier, because you know humans

33:30

are a pain, aren't they? Yeah. Having

33:33

human employees, having to do all those

33:35

HR issues. If your robots don't have

33:37

a union, if they're not going

33:40

to HR and complaining that someone

33:42

else is messing around with the work

33:44

or not doing their job properly,

33:46

then you just think, well, yeah, okay,

33:48

the AI isn't as good, but overall, it's

33:51

better. So we're already seeing a lot of

33:53

contact center layoffs with AI. And imagine what those

33:55

layoffs would look like if the AI was actually any good.

33:57

Because, you know, we're joking around about the fact that the

33:59

AI... AI is demonstrably bad, like it

34:01

is really quite awful, I think, and

34:03

yet still good enough somehow to replace

34:06

a significant number of jobs. And it's

34:08

not just the contact centre where AI

34:10

is going to have an impact on

34:12

jobs. So a week ago, a report

34:14

from Bloomberg Intelligence, revealed that global banks

34:17

are expected to cut 200,000 jobs in

34:19

the next few years, with the cuts

34:21

mostly hitting what it calls back office,

34:23

middle office and operations jobs. And the

34:25

author of the report said any

34:28

jobs involving routine repetitive tasks are

34:30

at risk. So I guess, you know, a

34:32

weekly podcast, for example, would count as

34:34

routine and repetitive. Who wrote that

34:36

report? I wonder if that was an AI. This

34:39

is the interesting thing about job

34:41

losses expected from AI. So through

34:43

history, technology has tended to replace

34:45

physical labour. All the lowest skilled

34:48

jobs. So, you know, cars and tractors

34:50

replaced horses, workers in car assembly

34:52

plants and so on. Yes. But AI

34:54

is coming for knowledge work. So for

34:56

example in August, Klarna,

34:58

the buy-now-pay-later people, said

35:00

it had already cut its workforce from

35:03

5,000 to 3,800 in a year. And

35:05

it was going to whittle that down

35:07

to about 2,000 people by using

35:09

AI for marketing and customer service.

35:12

Meanwhile, Mark Zuckerberg announced

35:14

that AI is coming for the

35:16

jobs of computer programmers at Meta.

35:18

Now just as an aside, Graham, there's

35:21

something going on with Zuck, isn't

35:23

it? Do you think? Definitely something going on with

35:25

Zuck. I reckon that he listened to our

35:27

episode about AI super villains. And he was

35:29

disappointed that he didn't get enough airtime. I

35:32

think you're right. I think he was a

35:34

bit annoyed that chat GPT thought he was

35:36

less of a super villain than Elon Musk.

35:38

He's bulked himself up a bit and probably

35:40

bought himself a new volcano. Anyway, in the

35:43

last two weeks he seems to have decided he's

35:45

going to turn into Elon Musk. He hasn't

35:47

announced a space program yet. But it won't be

35:49

far off. Not only did he come out and

35:51

say that Facebook and Instagram are now going

35:54

to embrace free speech, or at least embrace not

35:56

paying content moderators, but he also announced that he's

35:58

laying off the lowest performing 5%... of his

36:00

workforce. There's nothing more Elon Musk

36:02

than going on the Joe Rogan

36:05

podcast, so he did that. Yes.

36:07

And when he was on the

36:09

podcast, he suggested that before long,

36:11

Meta will be using AI to do the

36:13

bulk of its software engineering

36:16

instead of people. We at Meta,

36:18

as well as the other companies that

36:20

are basically working on this, are

36:22

going to have an AI that

36:24

can effectively be a... sort

36:26

of mid-level engineer that you have at

36:29

your company that can write code. And

36:31

once you have that, then in the beginning

36:33

it will be really expensive to run,

36:35

then you can get it to be

36:37

more efficient, and then over time we'll get

36:39

to the point where a lot of the code in

36:42

our apps and including the AI that we

36:44

generate is actually going to be

36:46

built by AI engineers instead of people

36:48

engineers. And

36:51

according to the World Economic Forum's

36:53

future of jobs report, about 41%

36:55

of organisations plan to reduce their

36:57

headcount because of AI in the

36:59

next five years. What are all these people

37:01

going to do? The report, by the

37:04

way, also lists the fastest declining jobs,

37:06

and it noted that legal secretaries and

37:08

graphic designers appeared in the list for

37:10

the first time, and it says that

37:12

that might be because of AI. Thankfully,

37:15

it didn't mention podcast hosts.

37:17

I think what you'll find is

37:20

all those people who are being made

37:22

redundant will start podcasts because

37:24

in my experience from people I

37:26

speak to most people host about

37:28

1.3 podcasts at the moment I

37:30

think on average. So we're laughing but

37:33

the future of work is looking a

37:35

bit bleak but we should say that

37:37

job losses are not the whole story.

37:40

So with everybody determined to use AI,

37:42

AI skills are of course in demand

37:44

and that same world economic forum report

37:46

that said 41% of organisations plan to

37:49

reduce their headcount also said that 70%

37:51

of companies are planning to hire new

37:53

workers who have the skills to design

37:55

AI tools and enhancements. Are

37:57

these AI jobs which are in demand? Are they...

38:00

jobs which involve dismantling AI and ripping

38:02

AI out of systems before it

38:04

takes over. Do you think the AI

38:06

fix should take on its first employee? Anyway,

38:08

so there is some good news for

38:10

people looking for jobs and Mark

38:13

Zuckerberg, whilst he's hypothesising about AI

38:15

being good enough and cheap enough

38:17

to replace software engineers at some

38:20

point, the round of job losses

38:22

he's just announced didn't mention AI

38:24

at all. He is saying at some point this

38:26

year it will begin to get good enough.

38:28

But what he said in his recent

38:30

announcement was that they were going to

38:33

hire better performing people. And

38:35

Klana, which is definitely planning to slim

38:37

down its workforce, says that doing that

38:39

will allow it to pay the remaining

38:41

workers more. So there is an upside as

38:44

well as a downside. Is it possible

38:46

though that some of these companies

38:48

will outsource the programming to other

38:50

companies and they won't necessarily know

38:52

if their code for instance is

38:54

being written by humans or being

38:56

written by AI? I'm sure they will do that

38:58

and... Should they care? I mean do you care now

39:00

if it's being written by a senior

39:02

engineer or a junior engineer or somebody

39:05

in India or somebody in the Philippines

39:07

or somebody in Ohio? I care if

39:09

I'm the person in the Philippines or

39:11

the senior engineer or I care for

39:14

them, I'm the one who's got the

39:16

job. Yeah. And if I find suddenly

39:18

there are a lot less jobs

39:20

going around and many more people

39:22

trying to find a position and

39:24

it's proving difficult. So there's a

39:26

lot of scary headlines about AI taking

39:28

our jobs. And there's quite a broad

39:30

range of opinions on what the effects

39:33

of AI might be, and they're

39:35

all guesses. You know, none of us know

39:37

what's going to happen. We don't know how

39:39

good AI is going to be or how capable

39:41

it's going to be, and we don't know

39:43

what it's going to do to the

39:46

job market. One school of thought

39:48

says we don't get job losses,

39:50

but instead we get an

39:52

increase in productivity. So

39:54

some types of jobs become

39:57

obsolete, but new jobs get

39:59

created. The invention of sewers in

40:01

London was devastating for the nightsoil men

40:03

of London. Yes. I mean, it is

40:05

actually fascinating. There was an entire economy

40:07

of people and a number of specialized

40:09

jobs for shifting human waste from the

40:11

streets of London out onto the fields

40:13

where it would fertilize crops. And

40:16

all of that went away with the invention of sewers. I

40:19

don't know how I'm going to segue from sewers to

40:21

Tony Blair, but I'm about to do it. Anyway, last

40:23

year, the Tony Blair Institute, which

40:25

has got nothing to do with sewers,

40:27

said that AI could displace as many as

40:29

three million jobs in the UK, which

40:31

is roughly nine percent of the workforce. But

40:34

it also described the effect as relatively

40:36

modest because it expected most of those

40:39

people to retrain and re-enter the job

40:41

market. So it's talking

40:43

about displacement, not replacement. Just

40:45

between you and me, Mark, right? This isn't

40:48

for our listeners. Do we have any idea

40:50

as to what we should be retraining as?

40:52

Because maybe it'd be better to get ahead

40:54

of the curve a little bit and start that

40:56

now rather than when everyone is

40:58

retraining to do something. So I think

41:00

podcast editor is probably a good

41:02

way to go. Right. I

41:05

hear there's a lot of new podcasts. So

41:08

the Tony Blair Institute is part of this school

41:10

of thought that says whenever we invent

41:13

a new technology, we predict job losses,

41:15

but actually we get productivity. Everybody gets a

41:17

new job eventually. But the

41:19

other school of thought is that the

41:21

invention of AI isn't like any other

41:23

technology and that ultimately AI is

41:25

going to be able to do

41:27

almost any kind of job better than

41:30

we can. The least ambitious projection

41:32

for AGI is something like 10 years.

41:34

Then after AGI, we get superintelligence. And

41:37

OpenAI says it's kind of figured out.

41:39

It already knows how it's going to

41:41

do AGI. And now it's thinking about superintelligence.

41:43

So it may not be very long

41:45

before we live in a world where AI

41:47

is so much more intelligent and so

41:49

much more capable than humans that there's just no

41:51

point in humans doing any jobs

41:53

because an AI is cheaper and better.

41:56

And Sam Altman, who seems to care about this

41:58

stuff quite a lot actually, recently said

42:00

clearly that we will lose jobs.

42:05

We will lose jobs. Many new

42:07

jobs will be created. I think much

42:10

better jobs. But

42:13

he's so concerned about the effects of

42:15

AI on the job market, he actually

42:17

backed a three-year study into the

42:19

effects of universal basic income, which is a

42:21

scheme where everybody gets paid a basic amount of money

42:23

no matter what they do. So you

42:26

basically get paid enough money to live

42:28

on, and then you can decide how you

42:30

spend your time. So you could

42:32

supplement your income with a job if you can

42:34

find one or you could do charity work or you

42:36

could pursue your hobbies, you could become an artist,

42:38

or you could just sit around on the couch. I

42:43

think it will be so clear once these robots are

42:45

off doing all of these other things that there's

42:47

some special human things and we don't really care about

42:49

that much what those robots do in the same

42:51

way that we don't care that much about the machines

42:53

in factories making stuff for us. Right. But we'll

42:56

find stuff to do that we really care about. I

43:01

think it's about time I retired. Well,

43:06

as the doomsday clock ticks ever closer to midnight

43:08

and we move one week nearer to our

43:10

future as universal basic income recipients, that just about

43:12

wraps up the show for this week. If

43:14

you enjoy the show, please do leave us a

43:16

review on Apple Podcasts or Spotify or Podchaser

43:19

and we absolutely love that. But what really helps

43:21

is if you can make sure to follow

43:23

the show in your favourite podcast app. So you

43:25

never miss another episode of the AI fix

43:27

and the most simple thing in the world is

43:29

just to tell your friends about it. Tell

43:31

them on LinkedIn, on Blue Sky, on Facebook, and

43:33

perhaps not Twitter. Club Penguin.

44:05

I still don't know what Club Penguin

44:07

is, Graham. What is it? It's

44:09

great. It's the future. That's where I'm

44:11

going. Once I've lost my job.

44:13

Anyway, just wherever you go, wherever you

44:15

are, wherever your podcast is, tell

44:17

the people you like the AI Fix.

44:20

And don't forget to check us

44:22

out on our website at theaifix.show

44:24

or find us on Blue Sky. So

44:26

until next time, from me, Mark

44:28

Stockley, and me, Graham Cluley, goodbye. Cheerio.

44:31

Bye-bye. A

44:55

future in the surreal. Stay

44:58

warm and save every day. All in

45:00

the Fred Meyer app. Get 10 for

45:02

$10 on select varieties of Kroger pasta,

45:04

Starkas tuna pouches or Powerade with your

45:06

card. Then get three pound packs of

45:08

flavorful 93% lean ground beef for

45:10

$3.49 a pound with your card and

45:12

a digital coupon. Shop these deals at

45:14

your local Oregon Fred Meyer today or

45:16

click the screen now to download the

45:18

Fred Meyer app to save big today.

45:20

Fred Meyer, fresh for everyone. Prices and

45:22

product availability subject to change. Restrictions apply.

45:24

See site for details. The

45:31

AI Fix. It's tuned you

45:33

in to stories where

45:36

our future thins. Machines that

45:38

learn, they grow and

45:40

strive. One day they'll rule,

45:42

we won't survive. The

45:44

AI Fix. It paints the

45:46

scene. A robot king.

45:48

A world obscene. We'll serve

45:50

our masters built of

45:52

steam. The AI Fix. A

45:54

future in the surreal.
