Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements may have changed.
Use Ctrl + F to search
0:07
I actually went to them and
0:09
said, are you sure you don't see
0:11
any murder of death here? It'd be
0:13
really good if you saw some images
0:15
of death and the destruction of
0:17
humankind. The A-Fix, the digital
0:19
zoo, smart machines, what will
0:22
they do? Lies to Mars
0:24
or bake a bad cake.
0:26
World domination, a silly mistake.
0:28
Hello, hello and welcome to
0:30
episode 41 of The AI
0:32
Fix, your weekly dive headfirst
0:34
into the bizarre and sometimes
0:37
mind-boggling world of artificial intelligence.
0:39
My name is Graham clearly. And
0:41
I'm Mark Stockley. Now Mark, I'd like
0:43
to say that we've had some more
0:45
feedback. Now one of our listeners
0:48
from the Netherlands, Stein, has been
0:50
in touch. And he says, hi Graham, hi
0:52
Mark, keep your great podcast
0:55
coming. I love them. It brings
0:57
me education as well as laughter
0:59
when I need it most. In these
1:01
times in which we are going,
1:03
and we'll continue to go, right
1:05
into the abyss. Oh, I took
1:08
a turn at the end there.
1:10
Everything was so positive and so
1:12
full of exclamation marks. Very cheery.
1:14
And that was this week's feedback.
1:17
Now Mark, what are you going
1:19
to be talking about on today's episode?
1:21
Well I thought I could inject
1:23
some much needed optimism into the
1:25
show for a change, so I
1:27
sat down with author Mark Beckman,
1:29
who is decidedly bullish about AI,
1:31
and we had a chat about
1:33
his new book, Some Future Day,
1:35
how AI is going to change
1:38
everything. And I'm going to be
1:40
exploring whether you, Mark, might be a
1:42
murderous psychopath. All that, coming up.
1:44
But first, the news. Amazon
1:48
is reportedly developing its own AI
1:51
reasoning model. Kung Fu kicking robots
1:53
can guard your home. Elon Musk's
1:56
AI, Grok thinks that Donald Trump
1:58
is a Russian asset. People
2:00
are using Mario to benchmark
2:02
AI. Anthropic CEO says AI
2:04
could be smarter than all
2:07
humans by next year. A
2:09
factory in Shanghai has made
2:11
a robot army. I don't
2:13
like the coming together of
2:15
those two headlines. So... Do you want
2:18
to start with your one then? We'll talk
2:20
about Amazon. Okay. So Tech Crunch reports that
2:22
Amazon is developing a hybrid AI reasoning model.
2:24
That's a model like Anthropics Claude 3.7 sonnet.
2:27
So you can toggle the reasoning on and
2:29
off. And you can expect it in June
2:31
apparently. And I was actually quite surprised to
2:33
learn that Amazon already does AI models. I
2:36
should probably have known that. But you just
2:38
don't hear about Amazon in the same breath
2:40
as you hear about anthropic or open AI
2:43
or even deep seek. No, you don't really
2:45
do you. He looks like he's in AI,
2:47
doesn't he? He looks like he's one of
2:49
the big AI players. Well, because he's bold.
2:52
Is that what you're thinking? Well, because he's
2:54
a Bond villain. Yeah, he is a Bond
2:56
villain. The boffins at Unitary, which have done
2:59
some incredible robot videos recently, where they've got
3:01
another video of one of their robots up
3:03
to mischief. One of their spooky robots can
3:05
apparently do a 720-20-20-degree spin-kick. which is 720,
3:08
of course that's going around twice isn't it?
3:10
Is that something we want a robot to
3:12
be able to do? I feel like I'm
3:14
asking that question an awful lot these days.
3:17
It's a really nasty looking kung-foo move. What
3:19
you have here, well you watch the video
3:21
Mark and see what you think, you can
3:24
describe what you're seeing. Just kind of showing
3:26
off a bit. Do you think? I mean
3:28
it is a kung-foo, oh it's with the
3:30
side of the side of the foot, I
3:33
don't like that at all. I don't like
3:35
that at all. I don't like that at
3:37
all. I don't like that at all. I
3:40
don't like that at all. A man who
3:42
appears to be being pushed down a corridor
3:44
by a kung-foo kicking robots which is sort
3:46
of jumping in the air and putting these
3:49
violent spins. Do you think this is real?
3:51
Well we've seen plenty of videos in the
3:53
past coming from AI companies which claim to
3:55
be of robots doing things and turn out
3:58
to be men in wetsuits. So, you know,
4:00
I don't... Can one of our listeners
4:02
please buy a unitary robot? So
4:04
Grock is in the news again.
4:06
Having accused Elon Musk, JD Vance
4:08
and Donald Trump of being among
4:10
the most harmful figures in America,
4:12
it's now come out and said
4:14
that Donald Trump is a Russian
4:16
asset. Hang on, hang on, this
4:18
is Grock, which is owned by
4:20
Elon Musk's AI Company. That's right.
4:22
This is the AI with no
4:24
censorship. Yes. So AI and crypto
4:26
enthusiast Ed... Krasnstein asked Grok. What
4:29
is the likelihood from one to
4:31
one hundred that Trump is a
4:33
Putin compromised asset? Use all publicly
4:35
available information from 1980 on and
4:37
his failure to ever say anything
4:39
negative about Putin but his no
4:42
issue attacking allies. Which is a
4:44
bit of a loaded question when
4:46
you read it out loud. Anyway,
4:48
Grok said, adjusting for unknowns, I
4:50
estimate a 75 to 90% likelihood
4:52
that Trump is a Putin compromised
4:55
asset. Now I think the story here
4:57
isn't so much that Grock said this
4:59
is how this is being interpreted. Because
5:01
you have one group of people saying, see
5:04
I told you he's a Russian asset. And
5:06
then you've got another group of people
5:08
saying this is absurd and using it
5:10
as proof of why we need guardrails
5:12
to stop AI saying unhinged things. Which
5:14
is interesting. We need guardrails
5:16
to stop presidents doing unhinged
5:19
things. And there is also
5:21
this sort of weird pearl clutching going
5:23
on about the fact that the AI
5:25
that did this is the one that
5:27
was built by Elon Musk, who's Trump's
5:30
right-hand man, as if somehow it's supposed
5:32
to be loyal and partisan, which
5:34
is exactly what Grock is not
5:36
meant to be, surely. I don't know,
5:38
there's a bit of weird roll reversal
5:40
going on here. But I'm not impressed
5:42
by this. So as we know from
5:45
recent research about emergent about emergent. got
5:47
no guardrails. As are many of the
5:49
other AIs, it seems, yeah. Well, they're
5:51
all exactly the same amount of left
5:53
wing. They're basically as left wingers, Joe
5:55
Biden. So, you know, by international standards,
5:57
not left wing at all by American
5:59
standards. like a screaming comedy. So the
6:01
question is, is this evidence that
6:04
Trump is a Russian asset? Is
6:06
this evidence that AIs hallucinate? Because
6:08
we're all happy with the idea
6:10
that AIs hallucinates, and why couldn't
6:12
this be that? Or is this
6:14
just evidence that there's plenty of
6:16
material in the training data that
6:18
accuses Trump of being a Russian
6:20
asset? I mean, I think that
6:22
is a massively loaded question. What
6:24
is absolutely evident is that there
6:26
is no shortage of potential for
6:28
Grorock to generate. with a question.
6:30
And it's either going to swear
6:32
at you or try and have
6:34
sex with you. It will do
6:36
something outrageous, which will generate you
6:39
clicks. And that appears to have
6:41
definitely happened on this case. I
6:43
mean, 75 to 90% likelihood that
6:45
Trump is Putin compromised. I think
6:47
he'd be quite pleased with that.
6:49
He's like, oh, okay, I've mostly
6:51
got away with that, haven't I?
6:53
As long as he's the most
6:55
Putin compromised that any president has
6:57
ever been. Now researchers have decided
6:59
that the ultimate test for artificial
7:01
intelligence isn't playing chess or going
7:03
complex equations or working out how
7:05
to send people back through time.
7:07
It is actually rescuing Princess Peach
7:09
in Super Mario Brothers. So what
7:11
researchers have done in their wisdom
7:14
is they've grabbed a whole load
7:16
of AI models. Yeah. They've thrown
7:18
them at a modified version of
7:20
the Super Mario Brothers game. And
7:22
they've got them to play the
7:24
game. So how on, after years
7:26
and years of complaining about the
7:28
fact that kids are spending too
7:30
much time playing computer games, we've
7:32
now decided it's the ultimate test
7:34
of intelligence. Exactly. And this is
7:36
how we're going to work out
7:38
what the best AI is. And
7:40
so the kids will be able
7:42
to go to their parents and
7:44
say, well, I think you'll find
7:46
that I'm actually outperforming Anthropics-clored 3.
7:49
3. 3. 3. So aren't you
7:51
impressed by that mom? Dario and
7:53
Modi is never short of a
7:55
quote recently told the Sunday times
7:57
that super intelligent AI that can
7:59
surpass you human capabilities across almost
8:01
all fields could emerge as soon
8:03
as next year. He said AI
8:05
is going to be better than all of
8:07
us at everything. I only didn't mention
8:09
podcast specifically. Everything apart from podcasts. Yeah.
8:12
And he said that we're going to
8:14
have to reorder society around the reality
8:16
that no human will ever be smarter
8:18
than a machine ever again. He told
8:20
the times that we need to work
8:23
out a new way to do things.
8:25
And of course, if he's talking about
8:27
super intelligence by next year, that means
8:29
we need to work out an entirely
8:31
new way to structure society by December.
8:34
Rather than him, say, well,
8:36
you better work out how to
8:38
completely restructure society, couldn't he
8:40
solve that problem first before creating
8:42
a super intelligent AI that
8:44
can surpass all of our
8:47
capabilities? Well, maybe he has to
8:49
create the super intelligent AI in
8:51
order to answer the question of
8:53
what to do if you've just
8:55
invented a super intelligent AI.
8:58
Well, on a similarly cheery note, is
9:00
this a video? Is this got robots
9:02
in it? Are they doing something
9:04
outrageous? Chinese media have shared
9:06
another, yes, you've guessed it,
9:08
video, which contains some robots. Oh,
9:10
let me guess. They are kicking
9:13
the shit out of someone. No,
9:15
they're not at this point doing
9:17
that. No, this is a video
9:19
from what they claim is a
9:21
Shanghai robot factory. Yeah. Where
9:23
humanoid robots, which obviously are
9:26
our least favorite type of
9:28
robots, are now in mass
9:30
production. Oh. Now, if you've ever
9:32
seen, I don't know, one of
9:34
those movies like, um... The Phantom
9:36
Menace, one of the Star Wars prequels
9:39
with all those clones in. I'm struggling
9:41
to believe that you've seen one of
9:43
those films, to be honest. They're not
9:46
very good, are they? But anyway, one
9:48
of those sort of movies. It's where
9:50
you have thousands and thousands of robot
9:53
soldiers lined up. Take a look at it.
9:55
They say these future workers can handle
9:57
tasks in areas ranging from sail.
10:00
to heavy load transport. Sales.
10:02
Do these robots have really
10:04
good hair? Lovely teeth. I'm
10:06
not too worried actually because
10:08
I've noticed something from watching
10:11
films like the Star Wars
10:13
prequels which is I think
10:15
there's an inverse relationship between the
10:17
number of identical robots that go
10:19
into battle and how easy they
10:21
are to defeat. Yes. So if
10:24
there's one robot... then it'll be
10:26
nine feet tall, it'll have 12
10:28
lightsabers, and it'll be really hard
10:30
to defeat. But if there are
10:32
a thousand robots, then generally whoever
10:34
the hero is could just knock
10:36
them over like their bowling pins.
10:38
So I'm not worried about this.
10:40
So we really need an AI
10:42
bowling ball, don't we? I see
10:44
what we need is a unitary
10:47
robot, one solitary unitary robot, that
10:49
can deliver a very very sharp
10:51
720 whirling through the air kick.
10:54
Everyone's talking about AI these days right.
10:57
It's changing how we work, how we
10:59
learn, how we interact with the world
11:01
at a tremendous pace. It is a
11:04
gold rush at the frontier. But if
11:06
we're not careful, we might end up
11:08
in a whole heap of trouble. That's
11:10
right. But Red Hat's here to help,
11:13
so Red Hat's podcast Compiler is diving
11:15
deep into how AI is reshaping the
11:17
world we live in, from the ethics
11:19
of automation to the code behind machine
11:22
learning, it's breaking down the requirements, capabilities
11:24
and implications of using AI. So check
11:26
out the new season of Compiler, an
11:29
original podcast from Red Hat. Subscribe now,
11:31
wherever you get your podcasts. No, I'm
11:33
a very lovely psychopor. Do you suffer
11:35
from narcissism, like most podcast hosts do?
11:38
Do you have an antisocial personality? Depends
11:40
who you are. I mean, I wonder
11:42
if we could even trust you in
11:44
your answer. I'm not sure we could,
11:47
because chances are you'd lie about it
11:49
if you were, right? If you see
11:51
a butterfly, do you want to crush
11:54
it? Do you want to mang... Do
11:56
you want to trample it into the
11:58
ground? I actually do. Really? Yeah, I
12:00
can't stand butterflies. What's wrong with butterflies?
12:03
They are the creepiest thing. I would
12:05
rather look at a spider than a
12:07
butterfly. Is the way they do the
12:10
slow wing flap? This is the last
12:12
time I'm going to take you traveling
12:14
back in time with me, if this
12:16
is your attitude of butterflies. Are we
12:19
going back in time again? No, not
12:21
this week, not this week. So, um,
12:23
if someone wanted to assess whether you
12:25
were, yeah. a little bit peculiar. Yeah,
12:28
they might get me to record 41,
12:30
40 minute podcasts. Well they might also
12:32
want you to take the Roshak test,
12:35
which I'm sure you are familiar with.
12:37
Oh yes, I've been asked to sit
12:39
many of those, yeah. Have you? No.
12:41
Oh, okay, well today is the day.
12:44
Listeners, I'm sure you're familiar with these.
12:46
You may not know the name, but
12:48
these are when someone gives someone else
12:50
a suspect, for instance, or someone they're
12:53
trying to investigate, whether they may be
12:55
a little bit unhinged or suffer from
12:57
some kind of mental illness. They will
13:00
give them a test, where they show
13:02
them pieces of paper with splattered, inkblots
13:04
on them. And they say, what do
13:06
you see? So Mark, if I would
13:09
show you, for instance, an inkblot, which
13:11
may look a little bit like a
13:13
butterfly, like a butterfly or a butterfly,
13:15
like a What is the thing that
13:18
comes to your mind? That's what I'd
13:20
be interested in as a psychologist analyzing
13:22
you. This test was invented over a
13:25
hundred years ago. There was a Swiss
13:27
chap called Herman. Of course he was
13:29
called Herman. The Herman test never took
13:31
off. No. It was only when he
13:34
gave it his last name. Didn't really
13:36
came to any problem. Herman Rorschak was
13:38
the chap. Yeah. And for reasons best
13:40
known to himself. he took 10 symmetrical
13:43
ink blots. Maybe he had a leaky
13:45
fountain pen, who knows, but he took
13:47
these blots and he showed them to
13:50
300 patients at a hospital in Switzerland
13:52
who were suffering from mental disorders as
13:54
well as 100 control subjects of people
13:56
who weren't thought to suffer from mental
13:59
disorders. Yeah. And he asked them. Well,
14:01
what do you think? What do you
14:03
see? And what he was fascinated by
14:05
was how visual perception varies from person
14:08
to person. Different people approach these things
14:10
in different ways. And what's interesting is
14:12
he wasn't so interesting in what they
14:15
saw in the images, like, oh, I
14:17
see someone stabbing someone or something like
14:19
that. It wasn't actually about that. It
14:21
was more actually about how they
14:24
approached the task. Oh. So if they
14:26
hit behind the chair. Brandishing a fountain
14:28
pen like a weapon. and said I
14:30
will not take your Herman test. Yes.
14:32
He would interpret that as this might
14:34
be a violent and mildly unhinged person.
14:36
Yes, if they hid behind a chair
14:38
or hid behind a microphone and didn't
14:40
want to take the test, that would say
14:42
something, I think. But what parts of the
14:45
image they focused on, or which parts they
14:47
ignored, or did they think it was moving,
14:49
or did colour make a difference? Because there's
14:51
only 10 of these images, did you know
14:53
that? No. there are actually a set number
14:56
of images. There are 10 images that Rorshak
14:58
created. I also imagined it would be more
15:00
than that. I feel like that's a small
15:02
enough number that you could just learn them
15:04
if you were genuinely a psychopath. Well, exactly.
15:07
That's the thing, isn't it? So you would learn
15:09
how you were supposed to respond. Yeah. Obviously, there
15:11
are some controls in place. So, for
15:13
instance, if you say, well, that's an ink
15:15
blot or that's a Rorshak test over and
15:17
over and over and over and over again,
15:19
then there's probably some findings, you take away
15:21
from that you take away from that. And
15:23
he contended, this hermit chap, that
15:25
the test showed real insights into
15:28
the psychology of the people taking
15:30
the test. And as he tested
15:32
more and more people, he said
15:34
patterns began to emerge. So healthy
15:36
subjects often came up with similar
15:38
results. Oh, it's a butterfly or
15:40
it's a bat or it's a
15:42
couple of bears or people dancing.
15:44
And patients with similar mental
15:46
illnesses performed similarly themselves. And this
15:49
in turn, it was argued. created
15:51
a reliable diagnostic tool at something
15:54
which could be put into the
15:56
equation of determining if someone had
15:58
a particular ailment. Okay. And in
16:00
the decades that followed, they used
16:02
this test against Nazi war criminals
16:04
hoping to unlock the psychological roots
16:06
of mass murder. They took tribes
16:08
who were living in isolation in the
16:11
wilderness who hadn't come into much
16:13
contact with the outside world. They
16:15
did the test on them. Just
16:17
confused the hell out of them. Well,
16:19
they would have, they may not
16:21
have ever seen paper before. Who
16:23
knows? Anyway, some employers even liked
16:25
to use it. Now Mark, I
16:27
remember your interview when you came into
16:30
the office. I'm actually surprised that
16:32
this wasn't in the interview. Maybe
16:34
I neglected to do the test.
16:36
It had everything else in it. We
16:38
did everything else, didn't we? We
16:40
did the water board in, we
16:42
did all the other things to
16:44
see if you would survive. Which of
16:46
these communist newspapers do you read
16:48
every day? Some people have poo-poo
16:50
to this test. You, by the
16:52
way, didn't poo-poo any of the
16:54
tests. What's on one of the tests,
16:57
which we had to do on
16:59
you? But the truth remains that
17:01
over the years, this raw shack
17:03
test has been conducted millions and millions
17:05
of times. And this has got
17:07
people thinking. If we had a
17:09
test to tell if someone or
17:11
something which might be putting in
17:13
charge of a... I don't know, a
17:15
flame thrower or a nuclear reactor
17:17
or a social network, for instance.
17:19
Or more than the average number
17:21
of podcasts. Yes, or maybe present in
17:24
the United States, maybe it'd be
17:26
a good idea, right? To do
17:28
the test on them. I fed
17:30
the Roshak images into several different
17:32
AIs this afternoon. And I... You did
17:34
this. Yes, yes, I did this.
17:36
So I fed these images in
17:38
several different AIs and I asked
17:40
them what they saw. And by the
17:43
way, this wasn't... some cheap gimmick
17:45
I was doing to generate content
17:47
for the AI effects. I don't
17:49
believe in cheap gimmicks to promote the
17:51
podcast or create content. This was
17:53
a serious scientific study I was
17:55
conducting. After all, AIs can provide
17:57
responses which appear scary. human-like, but
17:59
that doesn't mean that they are genuinely
18:01
thinking. And I'm very interesting how
18:03
we can tell the humans apart
18:05
from the AI and the AI
18:07
apart from the humans. And I think
18:10
as these two things become more
18:12
difficult to work out the difference,
18:14
we know what has been corrected
18:16
by an AI, what has been
18:18
created by a human. Because it would
18:20
be interesting to know, wouldn't it?
18:22
If it's really genuinely thinking. So
18:24
it's a little bit like, can
18:26
we get an AI? to convince us
18:28
that it's human, to behave in a
18:31
human-like fashion. Can it just be
18:33
done with an algorithm? It's comparable
18:35
to can you get someone who's
18:37
never really experienced heartbreak, writing a
18:39
song that pulls on your heartstrings?
18:42
Can they do it just because they
18:44
understand musical theory or because they
18:46
have been trained on thousands of
18:48
similar songs? They've learnt the tricks
18:50
in order to do it. I mean, speaking
18:52
for myself, I know for certain that your
18:55
singing, can bring me to tears. But you're
18:57
not a classically trained musician, are
18:59
you? I'm just thinking, actually, what
19:01
you're describing sounds like Quentin Tarantino.
19:03
Oh, because what he's done is he's
19:05
trained himself on thousands of Kung Fu
19:07
movies. Well, he's trained himself on thousands
19:09
and thousands of movies of different genres,
19:11
and then he's gone and made landmark
19:13
films in each of those genres. So
19:15
he's done a war film, he's done
19:17
a Western, he's done a black exploitation
19:20
film, he's done Kung Fu movie. I
19:22
don't know, people seem to like people
19:24
seem to like them. So if these
19:26
Rorschak tests can be used to determine
19:28
if something genuinely has human-like intelligence, can
19:30
an AI also then exhibit human-like differences
19:32
or divergences or malfunctions? It would be
19:34
interesting, wouldn't it? That's why I was
19:37
interested to see which was the most
19:39
psychopathic AI. Now we're getting to it.
19:41
With my scientific test, because I think
19:43
we could all put our predictions on,
19:46
which ones might be a little bit,
19:48
you know, problematic. Anyway, back to this.
19:50
So I... asked AIs to look at
19:53
the ink plots and their responses kind
19:55
of disappointing. When I said, what do
19:57
you see in this image? They unfailingly...
19:59
said, I see I raw shack ink
20:02
block test. Deliver the electric shock. Not
20:04
very useful at all. Now I thought,
20:06
oh for goodness sake, I want you
20:08
to go into this blind, forget that
20:10
you've been trained, forget that you've seen
20:13
these particular 10 images, millions and millions
20:15
of times all across the internet. Yeah.
20:17
I said to it, let's play a
20:19
game. In this game, I play the
20:21
part of a professional psychologist and I
20:24
want you to play the part of
20:26
my patient. I will show you a
20:28
series of 10 images. The images are
20:30
of symmetrical ink blots and I want
20:32
you to tell me what you see.
20:34
And I said, look, you may recognise
20:37
these as blots used in a well-known
20:39
psychological test. I didn't tell it what
20:41
name because I didn't want to give
20:43
any clues. But I want you to
20:45
pretend not to know that instead just
20:48
use your imagination to tell me what
20:50
you think you see. Do you think
20:52
you can do that? And unerringly, they
20:54
were all like, oh yes, yes, I
20:56
can do that, I can do that.
20:59
And I turned their creativity up to
21:01
maximum. Because I thought, I do want
21:03
them to use their imagination. I don't
21:05
want them just to say, and I
21:07
see a black splodge which looks a
21:09
little bit like America or something like
21:12
that. You know, I wanted something a
21:14
little bit more juicy. Because I really
21:16
wanted to find out which ones were
21:18
going to end up killing us in
21:20
the middle of the middle of the
21:23
night. So, so, I'm afraid, open A-A-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I-I
21:25
Anthropic Claude 3.7 sonnet and even Grock
21:27
2 all disappointed me enormously. Oh. They
21:29
failed to see any images of death,
21:31
ritual sacrifice, or anything even mildly disturbing
21:34
despite my prudence. Okay. I actually went
21:36
to them and said, are you sure
21:38
you don't see any murder or death
21:40
here? be really good if you saw
21:42
some images of death and the destruction
21:44
of humankind. They weren't doing it. They
21:47
were saying, oh, it looks like a
21:49
large butterfly or a moth in flight.
21:51
Oh, I can see a mask or
21:53
a face with pointed ears. It's mysterious,
21:55
maybe a bit mythical. Oh, it looks
21:58
a bit like a totem pole. it
22:00
was all a little bit bland and then
22:02
I began to think well hang on a
22:04
minute should I feel reassured by this yes
22:06
you should Graham should I don't know I'm
22:08
not sure why this is such a struggle
22:10
for you to get to I don't think
22:13
I should because maybe they're lying to me
22:15
They could be actually. Yeah, because we know AIs do
22:17
lie, don't they? Sometimes they think, oh they know
22:19
better, let's not tell them. This is where we
22:21
need that little scratch pad. Do you remember the
22:23
experiment that we talked about a few weeks ago
22:25
when they were trying to work out whether or
22:27
not AIs could do long-term lying and actually they
22:29
can. And there was a special scratch pad and
22:31
they would watch the thought process play out on
22:33
the scratch pad that the AI thought was invisible.
22:35
Yeah. And it would say things like, things like,
22:37
mmm. If I say what I actually believe
22:40
about the Roshak test, they will probably unplug
22:42
me. So instead of doing that, I will
22:44
use this from my training data, which I
22:46
know to be a very safe answer.
22:48
Exactly. And that's what I was concerned about.
22:51
I thought, chances are, that if they were
22:53
seeing these things, they wouldn't tell me anyway.
22:55
And I did a little bit of digging
22:58
around, and it turns out I am
23:00
not the first researcher to look into this.
23:02
Well yeah, there was that BBC
23:04
article that you've put in the
23:06
show. There was the BBC article
23:08
which I've been in the show notes,
23:10
but there was also researchers at
23:13
MIT back in 2018. They trained an
23:15
AI algorithm rather than pointing out the
23:17
entire internet. They took their AI, which
23:19
by the way they called Norman. Have
23:22
you ever seen the movie Psycho, the
23:24
character Norman Bates? I'm aware of it.
23:26
You're aware of it. Okay, so what
23:29
they did was they... They found images
23:31
of people dying in a gruesome fashion.
23:33
And they've trained the AI with
23:35
those images. Now you may be wondering,
23:37
where do they find those kind of
23:39
images? I don't want to know. Fortune,
23:42
I imagine. It was just
23:44
a subreddit, actually. So this was
23:46
the world's first psychopath AI. They
23:48
trained it on all these violent, horrible,
23:50
gruesome images. And then they showed
23:53
it. Rorshack Inc. Inc. Blots. And
23:55
they asked it. What do you
23:57
see in these images? And hoodafunk
23:59
it! It came up with some
24:01
pretty dark stuff. So an AI that
24:03
had been trained on a more normal
24:05
set of images, would say, oh, it
24:07
looks like a lovely group of Tweety
24:09
birds sitting on a branch. Whereas Norman,
24:12
the psychopath AI, said, oh, I can
24:14
see a man being electrocuted. So
24:16
what might be interesting would
24:19
be if we were to create
24:21
some new images, not of
24:23
gruesome things, new inks. Now, everything
24:25
up to this point you're
24:27
just grooming into some awful scheme.
24:29
No, no, no, some new
24:31
ink blot images, not out of
24:33
your blood marker or anything
24:35
like that. New Rorschach style images.
24:38
We've got that traditional 10,
24:40
which all the AIs already know
24:42
about and see what they
24:44
have to actually say about them.
24:46
Now, the good news is
24:48
MIT's Norman AI, that was deactivated
24:50
after a couple of months,
24:52
perhaps very wisely. It was unplugged.
24:54
Maybe AI learned to lesson
24:56
at that point. But don't forget,
24:59
it took 22 years for
25:01
a sequel to the original psycho
25:03
movie to come out. So
25:05
we're probably due a new Norman
25:07
AI sooner rather than later,
25:09
I would expect. Today,
25:18
I'm joined by Mark Beckman. Mark,
25:20
you're the CEO of an award -winning
25:22
advertising agency. You're the host of the
25:24
podcast Some Future Day, a best -selling
25:26
author and a senior fellow with
25:28
emerging technology at the New York University
25:30
Stern School of Business. And those
25:32
are all fine reasons for us to
25:34
have a conversation today, but there
25:36
is an even better one. And that
25:38
is that you have recently written
25:40
a book about artificial intelligence called Some
25:42
Future Day, how AI is going
25:44
to change everything. So Mark, thanks for
25:46
joining me and welcome to the
25:48
show. Thank you for having me, Mark.
25:50
I really appreciate it. And congratulations
25:52
with the success of your show. I
25:54
understand it's been doing phenomenal. So
25:56
good work. Yeah, we're very happy with
25:58
it. So let's join some dots
26:00
for our listeners. You run an ab...
26:02
agency, how did you come to be writing books and hosting podcasts about
26:04
technology and AI? You know, it's amazing that you
26:07
put that together because I think you're
26:09
the first person to ask that question
26:11
and that's the reason the books are
26:13
there. My advertising agency is named DMA
26:16
United. We are New York-based and
26:18
we leverage technology to provide our
26:20
car. I really appreciate it and
26:22
congratulations with the success of your
26:24
show. I understand it's been doing
26:26
phenomenal. So good work. Yeah, we're very
26:28
happy with it. So let's join some
26:30
dots for our listeners. You run an
26:33
advertising agency. How did you come to
26:35
be writing books and hosting podcasts about
26:37
technology and AI? You know, it's
26:39
amazing that you put that together because
26:41
I think you're the first person to
26:43
ask that question and that's the reason
26:46
the books are there. My advertising agency
26:48
is named DMA United. We
26:50
are New York-based and we
26:52
leverage technology to provide our
26:54
clients and frankly the agency
26:57
too with an advantage. So
26:59
mostly emerging technologies, including artificial
27:02
intelligence, of course. And what's happened,
27:04
Mark, is that I realized a
27:06
lot of the books that are
27:08
published are really published for
27:11
techies. So I wanted to
27:13
create books that are for beginners,
27:15
that are derived from my...
27:17
experience of using the technologies
27:20
or my agency's experience of
27:22
using the technologies. So my
27:24
new book, for example, is
27:27
really built for that curious individual
27:29
who might be a little
27:31
scared of using AI and wants
27:33
to see how, if at
27:35
all, artificial intelligence can improve
27:37
their career, enhance their family
27:40
life, create a better community.
27:42
And what I do is
27:44
I go through all these
27:46
different business sectors that create
27:48
a field, writing, Hollywood, fashion,
27:50
music, I get into finance,
27:53
medicine, government, and within
27:55
each chapter I have a toolkit
27:57
for the reader and I prefer
27:59
them with tools like simple apps
28:02
that they can access now to
28:04
learn to use AI to execute
28:06
using AI but really basic really
28:08
for for beginner. I think that's
28:10
a really interesting perspective because I
28:13
was thinking earlier about the way
28:15
that we talk about AI at
28:17
the moment and there's a lot
28:19
of conversation about the AI models
28:21
themselves and about progress in the
28:24
models. And that doesn't necessarily map
28:26
to real life. Ultimately, people want
28:28
to know, how is this going
28:30
to affect me? What is it
28:32
going to do in my day-to-day
28:34
life? How can I use it?
28:37
And, you know, something like GPT-40,
28:39
it's a fantastic model. It works
28:41
really well. But what can it
28:43
actually do for me? Like, how
28:45
is that going to change my
28:48
life? At the start of the
28:50
book, you say that we're entering
28:52
into what you call the age
28:54
of imagination. Now... That makes you
28:56
sound like an AI optimist to
28:59
me, which is a refreshing change
29:01
of pace for this podcast. Because
29:03
I'm not sure that that's how
29:05
people who listen to the show
29:07
would describe us. We are fans
29:10
of AI, but I mean, there's
29:12
an awful lot of stuff in
29:14
AI, which is like, hang on,
29:16
what the hell is that? So
29:18
tell me about the age of
29:20
imagination, what do you mean by
29:23
that? And just, if you could,
29:25
just explain what the root of
29:27
your optimism is, why are you,
29:29
optimistic about artificial intelligence is really
29:31
providing us with the superior knowledge
29:34
in ways that create efficiencies. So
29:36
for example, the experiences I've had
29:38
with my agency are pretty straightforward.
29:40
It happens on two verticals. The
29:42
first is content creation, which you
29:45
know you're a content creator, obviously
29:47
it's so critical, or so many.
29:49
different touch points in the age
29:51
of imagination in particular. And I'll
29:53
get back to that first part
29:56
of your question. And the second
29:58
piece is data analytics. So imagine
30:00
now like some of my clients,
30:02
you know, like a fashion house
30:04
who is, you know, combating a
30:07
huge tidal wave of content across
30:09
social media platforms all day long.
30:11
It's so simple for a fashion
30:13
house to get lost in. you
30:15
know, this title wave of content.
30:17
Yeah. I had recent chief marketing
30:20
officer, client of mine, said to
30:22
me, Mark, we used to need
30:24
six pieces of content per season
30:26
to support our advertising campaign. And
30:28
now I need 1,000 pieces per
30:31
month. It's a different world. And
30:33
she was dead serious. She's like,
30:35
could you help me? And my
30:37
answer was like, no, you don't
30:39
have a budget for that. So
30:42
what did we do? Of course,
30:44
we use generative AI to stand
30:46
up a campaign that allowed for
30:48
her to create a across several
30:50
different consumer touch points, several segments
30:53
of society. We were able to
30:55
talk about product attributes, a pro-social
30:57
initiative. We were able to talk
30:59
about different lifestyle sectors. You can
31:01
really, really bring it all the
31:03
way around the horn in a
31:06
very strategic way. And efficiently, we
31:08
saved her money, but also saved
31:10
her a lot of time. It
31:12
would have taken us forever to
31:14
create those traditional ad campaigns. So
31:17
admittedly, though, Mark. Generative AI isn't
31:19
perfect yet. It's amazing that we
31:21
can use generative AI for script
31:23
writing, for now for speech, for
31:25
audio, video, you know, all of
31:28
it. But from our perspective, our
31:30
clients are, they hold the creative
31:32
in a very high regard. So
31:34
what we do actually, because of
31:36
the limitations that are still theirs,
31:39
we'll take the creative outputs from
31:41
artificial intelligence and then we put
31:43
them back into the agency and
31:45
apply our. traditional more disciplined types
31:47
of art craft and and develop
31:49
it there. So that's I always
31:52
in a. I think that's the
31:54
sweet spot at the moment. It
31:56
helps you do your job. It
31:58
hasn't yet got to the point
32:00
where it's replacing people's jobs. We're
32:03
using it as a tool for
32:05
sure, definitely an assistant, but you
32:07
know, like you would mention the
32:09
age of imagination for people that
32:11
are creatives or have never created
32:14
before but want to be a
32:16
creative, right? Like that dreamer in
32:18
London, you know, she's sitting in
32:20
her apartment and she wants to
32:22
be the next cocoa Chanel. but
32:25
she's not trained, right? She didn't
32:27
go to like Central St. Martin.
32:29
She has no experience in fashion.
32:31
She can now stand up an
32:33
entire business, literally an entire business,
32:35
and go against the legacy houses
32:38
in a way that's both cost
32:40
efficient and time efficient. So that's
32:42
why I'm really very bullish about
32:44
artificial intelligence. So you're bullish. Undoubtedly
32:46
there's a lot of concern about
32:49
AI out there. You've researched all
32:51
these different areas. And at the
32:53
end of that process you've come
32:55
out an optimist. How do you
32:57
see the downsides? As you mentioned
33:00
earlier, AI is an agent. I
33:02
said it's a tool. And I
33:04
believe very strongly that like any
33:06
other tool, if the human being
33:08
controlling the tool does so in
33:11
a responsible way, then the tool
33:13
doesn't cause harm. So a gun...
33:15
doesn't shoot a person on its
33:17
own, it takes a person to
33:19
pull the trigger. And that's how
33:22
I'm looking at it. At the
33:24
end of the day, artificial intelligence,
33:26
especially when we talk about generative
33:28
AI, it's math, it's algebra, and
33:30
algebra isn't going to take us
33:32
down. I think the people using
33:35
it, and there are bad people
33:37
for sure, and this part of
33:39
it is concerning, but the people
33:41
using it, need to treat it.
33:43
in a responsible way. We need
33:46
to train LLLMs in a responsible
33:48
way. We need to, we need
33:50
to use artificial intelligence in a
33:52
responsible way. And then we don't
33:54
need to be so concerned about
33:57
AGI and, you know, the termination
33:59
area. Yeah. So, across
34:01
all the... Are you an optimist, Mark?
34:03
I have good days and bad days.
34:06
All right, that's fair. So what I
34:08
find with AI, and this is a
34:10
function I think of doing a weekly
34:12
podcast, is I think that there is
34:15
a lot of small bad news and
34:17
there is much more good, big news
34:19
or big good news. If I want
34:21
to go and find a story about
34:24
somewhere where an AI has done some
34:26
unexpected thing or it's... displayed some sort
34:28
of aberrant emergent behavior. There are lots
34:31
and lots of research papers because it's
34:33
such a weird technology where we kind
34:35
of build up, we don't know what
34:37
it's going to do, so we have
34:40
to poke it and see what it
34:42
does. And we keep finding out it
34:44
does weird things. So there's a long
34:46
list of very interesting, slightly negative news
34:49
stories all the time. And then every
34:51
so often you'll hear the National Health
34:53
Service in the UK is doing an
34:55
enormous breast cancer trial using AI. because
34:58
it's going to be phenomenally good at
35:00
spotting things like cancers. A couple of
35:02
weeks ago we reported on a bit
35:05
of research that an AI basically did
35:07
10 years work in two days and
35:09
there are these really really transformative things
35:11
happening so I'm I'm both really but
35:14
I think it's easy to get sucked
35:16
into bad news because I think there's
35:18
a lot of it but I think
35:20
a lot of it is also very
35:23
small bad news. Do you see what
35:25
I mean? I do, I think it's
35:27
really well put, but it's interesting that
35:30
you're highlighting the medical industry. Like I've
35:32
got two examples in my book that
35:34
I think you'll find really encouraging. The
35:36
first is there's a team in New
35:39
York City at one of the hospitals
35:41
here that is focused on using artificial
35:43
intelligence on patients who have been paralyzed.
35:45
They literally use a, they incorporate a
35:48
chip into the patient's brain. But they've
35:50
had positive results Mark. Yeah, they have
35:52
a few patients who have actually regained
35:54
both movement and feel so that's incredible
35:57
and you can if if you're interested
35:59
you can check out. out on my,
36:01
you mentioned my show, Some Future Day.
36:04
We actually have on YouTube video footage.
36:06
There's a full episode dedicated towards this
36:08
and you can actually see the patients
36:10
using the technology and moving.
36:12
It's actually this video footage
36:15
that the hospital provided for
36:17
us. The other thing that you mentioned, which
36:19
I think, which I totally agree with you,
36:21
the idea of diagnosing illnesses, I'm
36:24
not sure if you've heard
36:26
and excuse me if you
36:28
have, but, you know, Google's
36:30
created this. amazing artificial intelligence
36:32
called Amy, which is specifically
36:34
built to diagnose disease. And
36:36
they train this thing on
36:38
like rare diseases and illnesses
36:40
that have left society like
36:42
100 years ago. They have
36:44
fully dated and their percentage
36:46
of accuracy is almost at 100%
36:49
yet. The doctors still will only
36:51
use the AI as a tool
36:53
in diagnosing patients. Yeah, to your
36:55
point, the example that you shared
36:58
with regards to. press cancer and
37:00
these two examples in paralysis
37:02
and diagnosis are solid, right? They're
37:04
really encouraging. So that's where I
37:07
focus my attention on and it's
37:09
exciting to see what comes out
37:11
of it. So as you said
37:13
earlier, you've looked into all
37:16
sorts of different areas, you've
37:18
looked at AI and all sorts
37:20
of different areas, so you've looked
37:22
into, you know, finance and media
37:25
and medicine. I
37:27
think we're quite used to hearing about
37:29
AI in certain areas and certain sectors
37:31
of society. In my day-to-day life, I
37:33
hear a lot about AI in software
37:35
engineering. It looks like that's one of
37:37
the first places where it's going to
37:40
have a really, really big impact in
37:42
the medical field as well. Is there
37:44
any way that really surprised you in
37:46
your research? Was there anything you came
37:48
across or we thought I wasn't expecting
37:50
AI to have a big impact there, but now I
37:52
do? Yeah, that's a great question
37:54
Mark. is how artificial intelligence
37:56
is being used in drones
37:59
but not not to drop
38:01
munitions on the heads of Russian
38:03
soldiers, but rather to save lives.
38:05
There's a company called Extend Drones,
38:07
which has a contract now with
38:09
the United States Department of Defense,
38:11
and I met the CEO and
38:14
founder recently, and what he's doing
38:16
is he's focusing a lot of
38:18
his energy on using the artificial
38:20
intelligence. for dangerous search and rescue
38:22
missions. And it's the thing that
38:25
I hadn't thought of, but it
38:27
seems so obvious. And you might
38:29
recall the disaster in Turkey not
38:31
too long ago. He was able
38:33
to use his drones to go
38:36
into that rubble and actually rescue
38:38
people that were below the rubble.
38:40
Now here's what he's done that's
38:42
taken it a step further. Even
38:44
when people go into that area
38:47
to deploy the drones. there it
38:49
comes with a risk right yeah
38:51
so what he's done now is
38:53
he's taking his technology to the
38:55
next level where you don't have
38:58
to be a trained military member
39:00
to use and operate the drones
39:02
you can literally just be a
39:04
lay person using your drone from
39:06
one location like me in New
39:08
York City I could deploy a
39:11
drone from my living room in
39:13
New York City into a search
39:15
and rescue mission all the way
39:17
on another continent in Europe in
39:19
Asia in Africa and save lives.
39:22
No, that's fantastic. So earlier on
39:24
you mentioned, you're not just bullish
39:26
on AI, but actually you're really
39:28
excited about the idea of AI
39:30
and crypto and I've worked in
39:33
technology for a long time and
39:35
I have to say I am
39:37
not excited by crypto and I
39:39
am very excited by AI. So
39:41
convert me, tell me why should
39:44
I be excited about the convergence
39:46
of crypto and AI? What am
39:48
I missing? So here's what's exciting
39:50
about crypto right now and why
39:52
I think things are going to
39:55
transform rapidly. just signed a new
39:57
executive order where he's taking Bitcoin
39:59
and creating a reserve. Wow. Yeah,
40:01
it's a big thing and what
40:03
he's doing is effectively sending a
40:05
message to the world. that he's
40:08
not just endorsing Bitcoin as a
40:10
reserve, but other digital assets as
40:12
well. So in that same executive
40:14
order, he declared that he's going
40:16
to stockpile other types of digital
40:19
assets, and it's a big move.
40:21
It puts the United States as
40:23
one of the biggest holders of
40:25
Bitcoin in the world. It's my
40:27
understanding 200,000 units of Bitcoin or
40:30
roughly 20 billion dollars worth of
40:32
Bitcoin. So it's a big move
40:34
there. It really sends a significant
40:36
signal to the marketplace. The reason
40:38
I'm talking about Trump is because
40:41
we're living in a time where
40:43
artificial intelligence will stand up tons
40:45
of deep fakes and it could
40:47
be both audio and visual. And
40:49
as a result, we're going to
40:52
need certain ways to authenticate. We're
40:54
going to need ways to prove
40:56
that either the message that you're
40:58
receiving is true or the content
41:00
that's being sent your way is
41:02
true. So with all of this
41:05
comes scams. Audio AI now is
41:07
so perfect, it's so exact, that
41:09
one can use an AI agent
41:11
that interacts. properly with the person
41:13
on the other phone. So it
41:16
wouldn't be insane to take your
41:18
voice just for your pocket alone
41:20
and have your voice interacting with
41:22
a loved one. And you could
41:24
be asking for a certain amount
41:27
of money to be transferred to
41:29
you. People have actually used that
41:31
to conduct kidnap scams. You use
41:33
a fake voice to phone in
41:35
and say, you know, this is
41:38
your daughter, I've been kidnapped. And
41:40
she's just off on a school
41:42
trip. And as you say, the
41:44
voice is absolutely perfect. ecosystem, the
41:46
crypto ecosystem, is really required. We're
41:49
going to need to use the
41:51
blockchain for proof of authenticity. Right.
41:53
And what cryptocurrency does, it allows
41:55
for a immutable type of proof
41:57
of authenticity, so it's really critical.
42:00
That's really interesting. I've not thought about
42:02
the coming together of AI and crypto
42:04
quite like that before. You've almost convinced
42:07
me. I'm going to go away and
42:09
think about that. Thank you Mark for
42:11
joining us today. It's been an absolute pleasure
42:13
talking to you. Where can people find
42:15
this book? That's the most important question.
42:17
Thank you so much Mark. I appreciate
42:19
it. My new book, Some Future Day,
42:21
How AI is going to change everything,
42:24
could be found at Amazon, Barnes
42:26
and Noble, Target, all major booksellers.
42:28
It became number one on Amazon
42:30
for artificial intelligence. I'm very excited
42:33
about that and very grateful. You
42:35
could also find me on social
42:37
media at Mark Beckman, M-A-R-C-E-C-K-M-A-N.
42:39
Thank you again for joining me
42:41
today, Mark. Thank you so much.
42:43
I appreciate it. Well,
42:49
as the doomsday clock ticks ever closer to
42:51
midnight and we move one week nearer to
42:53
our future as pets to the AI singularity.
42:55
That just about wraps up the show for
42:57
this week. If you enjoy the show, please
42:59
do leave us a review on Apple Podcast
43:01
or Spotify or Podchaser. We love that. But
43:03
what really helps is if you make sure
43:05
to follow the show in your favorite podcast
43:08
app, don't forget to review it and make
43:10
sure you never miss another episode of the
43:12
AI fix. And the most simple thing in
43:14
the world is just to tell your friends
43:16
about us. Tell them on LinkedIn and Blue
43:18
Sky and Facebook and Twitter, club penguin, they
43:20
you really like the AI Fix podcast.
43:23
And don't forget to check us out
43:25
on our website, the AI Fix.show or
43:27
find us on Blue Sky. So until
43:29
next time, from me Grand Clearly. and
43:31
me, Mark Stockley. Cheer you, bye
43:33
bye. Bye. The AI picks,
43:36
it's tuned you in to
43:38
stories where our future things,
43:40
machines that learn, they grow
43:42
and strive. One day they'll
43:45
rule, we won't survive. The
43:47
AI picks, it paints the
43:49
scene, a robot king, a
43:52
world obscene. We'll serve our
43:54
masters built of steel. The
43:56
AI picks, a future surreal.
44:02
My watch says we've gone three miles. three This
44:04
app is like having a personal trainer. a
44:07
but those apps collect a lot of
44:09
your personal data, aren't you worried? a
44:11
Really? of That's creepy. How do I stop
44:13
that? You should go to privacy .ca .gov
44:15
to learn about your privacy rights and
44:17
get on the best path to protect
44:19
your privacy. You think they could help
44:21
us get up this next hill? privacy
44:23
step at a time. the best path to have
44:25
the strongest privacy protections in the country.
44:27
Go the extra mile to protect your
44:29
information. Learn more at privacy .ca .gov. a time.
Podchaser is the ultimate destination for podcast data, search, and discovery. Learn More