SPOS #972 – Tom Chatfield On How Tech Has Made Us What We Are

Released Sunday, 23rd February 2025

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

it's Mitch with just a little update on

0:02

my new venture, Thinkers One. As

0:04

a reminder, we've built a way for

0:06

organizations of all shapes and sizes

0:08

to buy bite-sized and personalized thought

0:10

leadership video experiences from the best thinkers

0:13

in the world. So what's changed?

0:15

Well, we've realized that we need to

0:17

make these thought leaders much more

0:19

accessible. So when you head over to thinkersone.com,

0:21

you will see that the pricing

0:23

is very reasonable. Go and check it

0:25

out. And all of these experiences

0:27

are at least 15 minutes of pure,

0:29

brainy goodness. Now you

0:31

can bring world-class authors, speakers, academics,

0:33

captains of industry and more into your

0:36

everyday meetings or gatherings. It's

0:38

really simple to use and it's

0:40

such a unique way for business leaders

0:42

like you to buy completely personalized

0:44

video content. You can book these

0:46

thought leaders to pop in live at your

0:48

next meeting or have a thinker record

0:50

a presentation just for your team or even

0:52

as a great gift for clients. I'm

0:54

really proud of the lineup of thinkers that

0:56

we have. We've got great minds like

0:58

Tom Peters, Rita McGrath,

1:00

Michael Bungay Stanier, Whitney Johnson,

1:03

Mark Bowden, Dorie Clark, Jay Baer, and

1:05

so many more. So if you're looking

1:07

to add insights, energy, and big smarts

1:09

to your next meeting, corporate event, an

1:11

offsite, a lunch and learn, even as

1:14

a gift for that client, I know

1:16

that you're going to love this. I invite

1:18

you to head over to thinkersone.com and check

1:20

it out. Now, back to the show. Well,

1:22

hey there and welcome to

1:24

episode number 972 of Six

1:26

Pixels of Separation, the ThinkersOne

1:28

podcast. My name

1:31

is Mitch Joel. It's Sunday, February

1:33

the 23rd, 2025. Let's

1:35

get on with the show. So

1:49

who are you and what do

1:51

you do? Hello, I'm Dr. Tom Chatfield

1:53

and I'm a philosopher of technology. I

1:55

write books designed on my courses and

1:57

think about what it means to use

1:59

technology well. That's great. You got

2:02

a great new book out called Wise Animals that

2:04

I totally loved. So thank you for writing that

2:06

one. Thank you. My great pleasure. Now,

2:08

I would love to be a philosopher in tech. Tell

2:10

me what I got to do. How does this work?

2:13

Well, it is, of course, a bit

2:15

of hand -waving. But for me, my background

2:17

is literature and philosophy, but

2:19

being a geek. So I did a doctorate

2:21

at Oxford in literature and philosophy, but

2:23

I was obsessed with the fact that all

2:26

of the stuff I was writing and

2:28

thinking about, what it means to live well,

2:30

to relate to others, to think well,

2:32

and so on, was bound up

2:34

with technology. And I felt there

2:36

was a real gap here and a

2:38

real need, a need to try and be

2:40

more thoughtful about the texture of the

2:42

technologized world. And so I started working

2:44

and writing books around things like video

2:46

games, around online politics, around kind of

2:48

studying a digital age. And I guess

2:50

I followed it from there. So I

2:52

call myself a philosopher because people don't

2:54

quite know what it is and then

2:56

I get to explain it. But what

2:58

I love to do is try and

3:01

get people to think twice about the

3:03

technologies they use, how they work with

3:05

and through technology in businesses, organisations. It's

3:08

helping people be more thoughtful, as individuals

3:10

and institutions, and kind of make the

3:12

most of the human in a digital

3:14

age. Is it more about how

3:16

you editorialize on what you're seeing

3:18

in terms of research? Is that

3:20

what it would be? For

3:23

me, it's that. I mean, I'm

3:25

trying to come up with actionable

3:27

insights, really. So I chair several

3:29

organizations. I work on advisory boards.

3:31

I have my own company. I

3:33

try and take research

3:36

around cognition, cognitive bias,

3:38

behavior, ethics. And

3:40

I try and help people understand how it can

3:42

be applied. So in some ways, although I love

3:44

to write, and I've written a

3:47

dozen books now, I think

3:49

it's my kind of core activity,

3:51

I'm really preoccupied with making

3:53

it actionable and powerful. And

3:55

in particular in boardrooms, for example, what

3:57

I often find myself doing is trying

3:59

to come up with, if you like, kind of thinking tools

4:01

and strategies in a common language. So with

4:03

AI, you often have organizations not

4:05

knowing how to talk and think about

4:07

what it is, not having the right

4:09

ways of talking about what these powerful

4:11

technologies are, how they can use their

4:13

own skills, where their skills gaps are.

4:15

And you end up with people saying,

4:18

well, you know, it's up to the

4:20

tech people, it's up to the CTO,

4:22

it's up to this product. They're not

4:24

making confident, informed, values-led decisions. And

4:26

so this hopefully is where I come in.

4:28

So I'm a little bit obsessed with

4:30

not just sort of talking the talk, but

4:33

trying to make that talk

4:35

into kind of techniques and

4:37

practical interventions and ways of

4:39

doing as well as thinking.

4:42

Are those surprising conversations, meaning

4:44

are these executives hearing what

4:46

you're saying and then thinking

4:48

and pausing and reflecting on this idea

4:50

that that's true? Why do I allow

4:52

technology to sit within the IT department

4:54

or marketing? Or do you

4:56

find them still in the

4:58

mindset of it's a tool? So

5:01

there's often a self -selection thing here because, of course,

5:03

if someone is going to pay me to come in

5:05

and work with them, They're probably interested in the

5:07

kind of things I have to do. But

5:09

you certainly can see that there are sometimes

5:11

people within an organization who want to be

5:13

more thoughtful and make a change, and people

5:15

who don't. And the question is what it

5:17

means to get everybody on the same page

5:19

or as far as possible. And

5:21

I think one of the things you

5:23

can do here is really focus on

5:25

people's day-to-day experiences. So

5:27

one of the things I always talk about in a

5:30

very simple way is, well, technology

5:32

is not just a tool, in

5:34

terms of being a neutral thing that you

5:36

just use or don't use. Technology

5:38

has certain biases, certain

5:40

assumptions baked into

5:42

it. Take a technology as simple

5:44

as email. And then

5:46

ask yourself a deceptively silly question.

5:49

What does email want me to do?

5:51

What behaviors is email pushing me

5:53

towards? And you see when

5:55

you talk to people that they notice that

5:57

with email, what email wants you to

5:59

do is send more

6:01

email. It's like a to-do

6:04

list written for you by other

6:06

people because it's cost-free and time-

6:08

free to send email messages and in

6:10

most modern workplaces you send email

6:12

to prove that you're working, that you're

6:14

attending, that you're checking in, you

6:16

copy people in, you send attachments, you

6:18

send updates when you're away from

6:20

your desk, you have it

6:22

automatically email people to tell

6:25

them that you'll come back and send

6:27

them more email. From a certain perspective, 50%

6:29

of what many highly

6:31

accomplished people do is effectively

6:33

work as email inbox emptiers. But

6:36

of course, by emptying their inboxes,

6:38

they fill up the inboxes of everybody

6:41

else in their organization and beyond. And

6:43

once you surface this, you can say, well,

6:45

actually, how is the tool

6:48

working for you? What would

6:50

an informed negotiation between your needs

6:52

and priorities and, if you

6:54

like, the affordances, the kind of

6:56

structural predilections or biases of the tool, look

6:58

like? A lot of organizations, of course,

7:00

have done this thinking and are saying,

7:02

well, we're going to have communication protocols.

7:05

We're going to insist that people use

7:07

Slack or shared documents rather than

7:09

email for certain things. We're going to

7:11

insist that famously at an organization

7:13

like Amazon, if we're sitting down

7:15

to a meeting face to

7:17

face, we are preparing and discussing

7:20

a one-pager or a

7:22

two-pager, a reasoned argument or

7:24

debate. We're accessing data separately.

7:26

We're not sitting through an endless

7:28

PowerPoint. So once you get down

7:30

to these discussions, you can then do

7:32

the really interesting thing, which is say, well, let's

7:35

first of all just talk about

7:37

what you're doing, about what the

7:39

technologies in your working life do, and

7:41

then about what behaviors they push

7:43

you towards, what assumptions or

7:45

limitations they may encode, and

7:48

what it means to push back against these assumptions.

7:50

So I love the practical psychological stuff,

7:52

and I won't bore you yet with

7:54

examples, but there's a lot more where

7:57

that came from. Do you feel that

7:59

part of this is a cycle of

8:01

evolution? And what I mean by

8:03

that, Tom, is as you were

8:05

talking like this, I was reflecting on

8:07

the fact that I wonder if

8:09

we push this 20 years into the

8:11

future, that if you had simply

8:13

digital natives who were engaged in this,

8:15

that this wouldn't even be the

8:17

thought. It would almost be like a fish

8:19

talking about water. Do we

8:21

feel like these are problems because we

8:23

have this weird cycle where it's

8:25

mostly Gen X or, isn't it, hell,

8:28

who are in these C-suites

8:30

versus if somebody was a digital native, it

8:32

would just float through and they wouldn't be

8:34

thinking about life before type of thing, which

8:36

gives us the thought of perhaps we're not

8:39

doing it right. So I would flip

8:41

this around and say one of the

8:43

great assets of a kind of a

8:45

workplace with people from different generations and

8:47

also a world in which you've

8:49

got someone like me who can remember

8:51

a time before smartphones and ubiquitous

8:54

internet. We can then

8:56

try and collectively capture the best

8:58

of the old and the best of the

9:00

new. I spend a lot of time in

9:02

universities and schools and colleges talking to 15,

9:04

16, 17, 18, 19 year

9:06

olds, so-called digital natives. I

9:08

think the word is a bit of

9:10

a myth. In that again and again I

9:13

find that people are very, very good

9:15

at certain things, very fluent, but also they

9:17

don't know what they don't know. They

9:19

often may not realize at a young age,

9:21

for example, that resources like Wikipedia and

9:23

so on are user-generated. You look at

9:25

the edit histories, find out where they're

9:27

from. People certainly don't realize with

9:30

AI how a large language

9:32

model works, that often its

9:34

outputs are probabilistic rather than

9:36

factual. And again, people don't

9:38

realise simpler things like how they're

9:40

being tracked, like how they might be

9:42

being surveilled, or indeed how to

9:44

make their computers safe, or the benefits

9:46

of being offline, the benefits

9:49

of certain emotional, empathetic soft

9:51

skills. I spend a lot

9:53

of time working with 17,

9:55

18, 19 year olds about

9:57

pausing, reflecting, thinking

9:59

twice. So for

10:01

me, the prize is that

10:03

exactly. Like

10:06

you said, you don't want to have a

10:08

world of fish who don't realize they're

10:10

in water, because of course the difference

10:12

between us and fish is

10:14

that we can reflect upon our environment

10:16

and the environment through which we

10:18

swim, within which we live, is human-

10:20

made and human-maintained. You've

10:22

heard I'm sure of the idea

10:24

of dark patterns, of setups in

10:26

websites and interfaces which are

10:29

covertly manipulative, where you're automatically opted

10:31

in to something where you

10:33

can't cancel your subscription, where your

10:35

data is harvested, where the

10:37

real options are hidden from you.

10:40

The problem is that there's not just one kind

10:42

of water out there. There's lots of different

10:44

kind of lakes and oceans that are being run

10:46

by companies for certain reasons. So

10:49

for me, I love going

10:51

to people and doing two

10:53

things really, saying, first of all,

10:55

let's celebrate your knowledge. And

10:57

let's celebrate your knowledge by getting you to share

11:00

with each other your best tricks and tips.

11:02

If you're 20, what would you tell your 16

11:04

year old self? If you're 16, what would

11:06

you tell your 12 year old self in terms

11:08

of communications protocols in terms of safety? And

11:10

a lot of people say, okay, here's the simplest

11:12

thing I do. I'd say, wait a minute

11:14

before you send that email. My

11:16

favorite bit of, as it were,

11:19

tech hackery I have is very

11:21

simply a 30-second delay on

11:23

my email, where in the 30 seconds after I've

11:25

sent any message I can read it and take

11:27

it back. And my goodness, I

11:30

take back a lot of those messages because I realized

11:32

that I'm passive aggressive, or that I haven't sent the

11:34

attachment, or that I'm boring, or I didn't need to

11:36

be sent. And then the second thing

11:38

is to try and, by comparing people's

11:40

knowledge and experience, find

11:43

out what their unknown unknowns are.

11:45

Someone is 40, someone is 50,

11:47

someone is 30, they're going to

11:49

have different gaps in their knowledge.

11:51

And the big thing with modern

11:53

organizations, with complex systems, is covering

11:55

the gaps, is pooling your

11:57

knowledge constructively and then

11:59

is coming up with the best possible strategy. And

12:02

that really does mean, I think, not

12:04

taking things for granted. Last simple

12:06

example, for most people, again,

12:08

a very powerful intervention in terms of

12:11

people's relationship with technology can be an

12:13

audit of apps on their phone. It

12:15

can be simply getting people to pause. And

12:17

because these things have built up, there's a

12:19

lot of momentum. It's like a QWERTY keyboard.

12:21

It's not a QWERTY keyboard because it's the

12:23

best design in the world. It's because it's

12:26

always been like that because of the peculiarities

12:28

of the mechanics of typewriters in the nineteenth century.

12:31

So do what you can, audit your

12:33

apps, look at your day, look

12:35

where your time's going, reflect, improve, iterate,

12:37

do this together. That for

12:39

me is the recipe. Yeah, the evolution

12:42

that I was talking about, my experience today

12:44

was a first time, and I thought it

12:46

was very fortuitous that I was sitting down to

12:48

speak with you about this, which was new iOS

12:50

comes out, I updated, it

12:52

includes a version of ChatGPT,

12:54

I include it. And

12:56

suddenly I found this feature where I go

12:58

to hit reply to an innocuous email,

13:01

is this time okay for this meeting? And

13:03

instead of doing that reply of thank you

13:05

so much for reaching out and I look

13:07

forward to the meeting, there was the little

13:09

AI button and I press that. And I

13:11

noticed that it formulated the response for me.

13:15

And I reviewed it and thought, 80%

13:17

right. Thanks so much for reaching out. Then there's

13:19

some information on it that I was, you know,

13:21

don't need that. And much like you,

13:23

I will take that buffer time before

13:25

I send the response. And I realized that

13:27

I don't need that buffer time anymore

13:30

because just by seeing its response and thinking

13:32

about it and thinking what would be the correct

13:34

way to correct it, it actually gave

13:36

me more insights. And

13:38

more thought about what I should

13:40

say to avoid being passive-aggressive or

13:42

typically I would be sarcastic. So

13:44

that's a great example of, I

13:46

guess, what Ethan Mollick and others

13:48

has pointed out as the result

13:50

of a kind of a new

13:52

suffusion of intelligence. And

13:54

in terms of the human skills, I

13:56

think you're absolutely right that suddenly the skill

13:58

you're using is the

14:00

skill of reflection and editing,

14:03

and you're not being your own typist.

14:05

And that can be a very good thing.

14:07

And previously, of course, we had to type

14:09

all our messages. It just goes without saying. Now

14:12

we don't. So the question becomes, which

14:15

messages should we type and why?

14:17

And which messages should we just

14:19

review? Which should we outsource and

14:21

delegate? I'd like to be more pretentious

14:23

with my thought and think. Please. That

14:26

seems to me almost, let's call it

14:28

level one of how we react to it.

14:30

My level two or level three thinking

14:32

on this is Just hold

14:34

on a second. Did I

14:36

just enter a new phase

14:38

where my AI is talking

14:40

to your AI essentially and

14:42

then, more philosophically, what am I?

14:45

Yeah. And of course, absolutely. And I

14:47

think you are an intensified version

14:49

of what you've always been because to

14:51

be human is to be a technologized

14:53

animal. I am already a different

14:55

person with my phone than without it.

14:57

I'm different to my ancient ancestors because I'm

15:00

literate, because I read, I

15:02

write, I use screens, I communicate

15:04

with you right now at the speed

15:06

of light across the world from a little

15:08

office in my garden. So this

15:10

is an intensification of something that has

15:12

always been true that as human

15:14

beings we have this astonishing ability to

15:16

change ourselves through

15:18

technology. Our minds are

15:21

literally extended through technologies

15:23

and we acquire this

15:25

after birth due to

15:27

this extraordinary neuroplasticity. And

15:29

the point I'd make is that this

15:31

both is and isn't new. So this

15:33

is profoundly new but for example

15:35

the analogy of digital photography is interesting. It

15:37

takes photography from being something where film

15:39

is costly and expensive and you take

15:41

a few photos and it's for a

15:43

smaller number of people, and it gradually becomes a

15:45

mass art form. And now digital

15:47

photography. Your camera is taking

15:50

beautiful sharp photos of anything you want

15:52

kind of infinitely and suddenly as you say

15:54

you have this at your fingertips it's

15:56

almost like a part of your hand and

15:58

then you're kind of the conductor of

16:00

the orchestra, you're the operator of the

16:02

machine, you're the manager of this

16:04

kind of little cottage industry you're managing

16:06

a whole bunch of bits of

16:08

software and hardware that are kind of

16:10

running the identity that is you. You're

16:13

in a kind of director mode and

16:15

you're making decisions like okay well

16:17

which photo shall I show, who shall

16:19

I send them to, how shall

16:21

I market them and talk

16:23

about them, can I turn them into data, can

16:25

they be useful data that I'll feed back

16:28

into the machine. I write with AI a lot,

16:30

not my books, but I write

16:32

using AI as a kind of sounding board.

16:35

I think of it as almost the

16:37

infinite library. I can shout into it

16:40

and I'll get these echoes. I can say, help me with

16:42

these thoughts. What have I missed? What's going on? Have

16:44

I said it right? The

16:46

more formulaic, the easier it

16:48

is to outsource more of this task. It's

16:50

like having, as many people have said, infinite

16:53

interns at your disposal,

16:55

tireless junior co-workers. But

16:57

crucially, you still

16:59

have to be in the driving seat when it comes

17:01

to the quality control, when it comes to the purpose,

17:03

when it comes to the intentions. And

17:06

potentially, it's quite dangerous to outsource

17:08

some of these decisions because you risk

17:10

not only kind of de-skilling yourself

17:12

and if you like diluting the

17:14

brand that is you, but you also

17:16

risk not developing the very important kind

17:18

of human knowledge about what is

17:20

a better or worse email in

17:22

the first place. What is a

17:25

message that will differentiate you from

17:27

other people? What do you really

17:29

think and mean? And

17:31

so the challenge for me, which

17:33

I relish, is how organizationally, individually,

17:35

we can double down on these

17:37

things where it's essential we stay

17:39

in the loop while doing exactly

17:41

what you're saying and asking this level two,

17:43

this level three question, which loops do

17:45

I want to be in where and how?

17:48

And what should I gratefully outsource

17:50

in order to better leverage

17:52

my own capabilities? I saw

17:54

this amazing piece of data that came out

17:56

last week. I think it was related to Canada,

17:58

but it may not have been, that essentially 100%

18:00

of professionals are looking

18:02

to switch jobs this coming

18:05

year, which is an astounding

18:07

number. And I'm throwing this

18:09

at you because I'm thinking about what

18:11

we just talked about. I'm thinking

18:13

about the fact that I've had very

18:15

recent conversations with people who have

18:17

very confidentially said to me something akin

18:20

to, do I even work

18:22

for a living? And what

18:24

they meant was that their days are

18:26

filled, their calendars are filled with meetings

18:28

and email and responses, but essentially the

18:30

self -reflection has led them to a

18:32

place of thinking that, I don't even

18:34

know if this is a job where

18:36

all I'm doing is almost meeting in

18:39

the room, whether it's Zoom or physically, but

18:41

I'm just in meetings all day, we're sending

18:44

emails back and forth. I don't actually even know

18:46

what I accomplished in the past couple of

18:48

years, not in a way of which they're feeling

18:50

that they haven't done any work, they're doing

18:52

their job and performing exceptionally well. But

18:54

the work almost falls into what they would

18:56

call those bullshit jobs or fake jobs or

18:58

like, what do we need this for? The

19:00

other example that I constantly think about is

19:03

Elon Musk walks into Twitter at the time

19:05

and gets rid of all these employees and

19:07

everyone goes, how can you do that? And

19:09

sure, there have been problems. I think a

19:11

lot of them are more political and content

19:13

based rather than infrastructure based, but I mean,

19:15

it doesn't seem to go down. It doesn't

19:17

seem like they haven't added features. In fact,

19:20

it seems like they've added more features since

19:22

him than they could in a quarter or

19:24

two before. And all of

19:26

this just gets me thinking of how

19:28

we think about work. And as we

19:30

see this proliferation of AI, I think

19:32

you and I sit in the somewhat

19:34

privileged place where we're almost paid to

19:36

think about it and ask, but the

19:38

vast majority of people are running

19:40

their days and thinking, well, if my

19:42

meetings are just meetings and now those

19:44

meetings are being taken over by generative

19:46

AI, they're creating the transcripts, they're creating

19:48

the follow -ups, what are we actually doing?

19:51

Absolutely right. And the great anthropologist David

19:53

Graeber coined that phrase, bullshit jobs, in

19:55

his book and essay of that name. And

19:58

he was making a broader point

20:00

about the kind of information society that

20:02

it generates various forms of busy

20:04

work, but also that it generates a

20:06

kind of shadow world where on

20:08

the one hand your job is to

20:10

increase shareholder value, is to build

20:13

relationships. But on the other hand, in

20:15

a way you do it through

20:17

the other kind of performative data moving

20:19

or shifting data around. It's

20:21

very interesting, of course, to look

20:23

at the jobs that will not

20:25

be replaced by AI anytime soon,

20:27

the ones that during the pandemic

20:29

kept us all alive, people delivering

20:31

food or growing food or cleaning

20:33

the streets or working in hospitals. Although

20:36

computers are making jobs

20:38

in those areas perhaps

20:41

more efficient or more challenging or

20:43

better or worse for that

20:45

matter. We are not seeing any

20:47

less of a need for nurses, for

20:49

physicians, for care workers, for

20:52

teachers, for hairdressers, for

20:54

plumbers, for electricians and so on. Of

20:56

course, what these jobs have is an intimate

20:58

relationship with either the physical world or the

21:00

people in the physical world or both. And

21:03

so we do see in the kind

21:05

of knowledge sector, one of the strangest things

21:07

I feel about AI is that because

21:09

of the very nature of generative AI, it's

21:12

extraordinarily good at

21:14

doing creative tasks,

21:17

at creating poetry, at

21:19

creating videos, at creating images, and

21:21

it cannot yet write a great play or

21:23

write a really great book, but it can

21:25

write a pretty darn good article, it can

21:27

write a passable poem, it can create a

21:29

photographic image with something that looks as good

21:31

as the work of a world-class photographer. But

21:34

yet, these are jobs, these

21:37

are areas in which people do not

21:39

want AI to be doing it. They don't

21:41

want to buy the product of an

21:44

AI when it's not commodified. Of course, they

21:46

do if it's background music, but

21:48

people want to have a relationship with other

21:50

humans. And this then is a second key for

21:52

me. On the one hand,

21:54

there's still the jobs that people have

21:56

to do because lifting someone out

21:58

of a hospital bed or seeing to

22:00

their wounds is still a human

22:02

job, even if an AI is doing

22:04

the blood test. But secondly, there's

22:06

the jobs that people want to be

22:08

done by humans. They want their

22:10

children to be looked after by a human,

22:12

but they want their novel to have been written

22:14

by a human. They want their movie to have

22:16

human actors in it, for it to be an act

22:18

of connection and cultural self -reflection. But if you're

22:20

not in one of those two categories, if

22:23

people don't care if an AI

22:25

is doing your job as long as

22:27

it's done well and they don't

22:29

intrinsically have any kind of physical or

22:31

bodily aspect to it, then yes,

22:33

that aspect of the job looks

22:35

like something where neither of these

22:37

two reasons pertains. It's neither essential that

22:39

a human does it nor highly,

22:41

if you like, kind of ethically desirable

22:43

that a human does it. So

22:45

I do think we're going to see

22:47

and in some ways, maybe there

22:49

is something healthy about the organizational self

22:51

-examination that will have to take place.

22:54

When people and organizations are

22:56

saying, okay, where do we

22:58

need people in terms of

23:00

the morality, the ethics, the

23:02

relationships, the worth, the skills

23:04

that people have that machines

23:06

don't, where do we truly

23:08

need people? And we have people

23:10

themselves saying, well, look, if I'm doing

23:12

stuff that an AI can do

23:14

just as well, if I'm using it

23:16

to kind of cheat on my

23:18

job or show up half the time,

23:20

that's sending me a very strong

23:22

signal that my work is not perhaps

23:24

either intrinsically or extrinsically worthwhile. I

23:27

would just draw in the idea

23:29

of students because I deal a lot

23:31

with universities and there's obviously a

23:33

huge crisis around the fact that AIs,

23:35

that large language models, can generate

23:38

very, very good responses to many exam

23:40

questions. But of course,

23:42

I don't think this has people saying,

23:44

therefore, there's no point someone studying

23:46

for a degree. There's no point someone

23:48

learning. They're saying that in terms

23:50

of assessment and how we think about

23:53

learning, we have to really shift

23:55

our ideas. We have to find ways

23:57

to be testing people as thinkers, as

24:00

critical and creative thinkers, as prompters of

24:02

AI, as people who work with and through

24:04

these systems, as collaborators and so on. And

24:06

this kind of prefigures a

24:08

lot of what is going to happen,

24:11

I think, in workplaces where to be

24:13

optimistic for a second, a

24:15

lot of people will be

24:17

challenged to really think hard about

24:19

where, how, and why people

24:21

contain various forms of tacit knowledge, where

24:24

people perform various tasks

24:26

that AIs can't. And

24:28

of course, AI is highly commodified, right? So

24:31

ultimately, if

24:33

you're just using AI to

24:35

run your company, you're probably

24:37

going to be eaten up and

24:39

spat out by someone else somewhere

24:41

else who's doing it cheaper and

24:44

faster at scale. But there is

24:46

some conversation and rhetoric around that

24:48

type of, I think it's almost

24:50

the binary way of stating it.

24:52

What I would mean is, if

24:54

you think about, let's say, AI

24:56

chatbots for nursing. So

24:58

suddenly, if you have privilege, you'll have

25:00

a nurse that can take care of you

25:02

or caregiver, and they would be using

25:04

AI or generative AI as some type of

25:06

supplement to make results better, more effective.

25:08

What have you? But there's also a

25:10

larger swath of people who are thinking about

25:12

this and very worried that suddenly what we'll say

25:14

as well, you don't have healthcare insurance or

25:16

you can't afford it or you're not important. So

25:18

you just get the chat bot. You get

25:20

the chat bot and say to it, my blood

25:22

pressure is high. I didn't take my medication.

25:24

I don't feel well. It interacts with you and

25:26

then it will level it up in terms

25:29

of needs. So we look at that and go,

25:31

well, isn't that phenomenal? I

25:33

think some of the thinkers are saying,

25:35

no, that's actually gonna create a greater

25:37

divide. and that the challenge isn't

25:39

that you have it or you don't.

25:41

It's that we're using it to replace or

25:43

hold others back in a world where

25:45

those who can afford more can have more

25:47

access. Yes, and we are

25:49

seeing a version of this already. I

25:51

deal a lot with kids. And

25:53

anecdotally, I will certainly find that if

25:55

I'm dealing with the children of

25:57

senior people in tech companies, their

26:00

education involves a lot of

26:02

exercise, yoga, meditation,

26:05

good quality food, beautiful environments and so

26:07

on, they are not spending 24

26:09

hours a day on devices partly because

26:11

they're being educated not to. And

26:13

at the other end of the spectrum,

26:16

people who have less privilege are

26:18

far more likely to be, as it

26:20

were, in an area where their

26:22

time is being turned into other people's

26:24

money through algorithmic means. They're watching

26:26

adverts, they're less able to afford

26:28

fresh nutritious food, they're making longer commutes.

26:30

And we do see a version of

26:33

this. And there is a very powerful,

26:35

but I think very double -edged argument

26:37

that says it's the better than nothing

26:39

argument. There's not enough money and

26:41

resource to go around. So it's better that

26:43

you chat to a psychotherapist bot that

26:45

gives you something than that you join the

26:47

three -year wait to talk to a human

26:49

psychotherapist. Or that we make the effort

26:51

that we're not going to do that because

26:54

we know the result. Or that

26:56

we're not going to just give

26:58

everybody in a low-income public school

27:00

system an AI bot math tutor. And

27:03

of course, we are creating

27:05

this strange world. And

27:07

it's very double-edged because on the one hand,

27:09

when I go and talk to students from lots

27:11

of different backgrounds, they say, I'm

27:13

struggling with my course and

27:15

I find AI really useful. It

27:17

explains stuff to me, it

27:19

listens to me, the

27:21

instructors try their best, but they don't have

27:23

the time. I'm dealing with overwhelming information. This

27:26

tool is amazing because it helps me

27:28

take this overwhelming information, and it

27:30

helps me understand it. And yes, it's

27:32

pretty tempting. It could write an

27:34

answer for me, but I try quite hard not

27:36

to have the answer written for me. But on the

27:38

other hand, of course, that means you can have

27:40

a new model where you can effectively say, well,

27:43

this is good enough, right? And

27:45

I have no answer to this because

27:47

this is partly about what, as

27:49

a society, people value, people think they

27:51

owe to one another, about where

27:53

and when and how we think we

27:55

do or don't owe people our

27:57

time, our attention, our value, and

28:00

it's deeply also about

28:02

human levels of comfort

28:04

and discomfort. Now,

28:06

we were already seeing whole

28:08

swathes of the workforce in, for

28:11

example, jobs where people are

28:13

TaskRabbits or in warehouses or delivery or

28:15

whatever, where you are managed

28:17

by algorithm, where in effect, to be

28:19

an Uber driver, and plenty of people

28:21

are very grateful for the ability to

28:23

earn money in this way, but... They

28:25

are managed by algorithm by and large.

28:27

They log into the app. They are

28:29

casualized. There's very few humans there. Structurally

28:32

speaking, this has all kinds

28:34

of moral hazards, people's dignity, people's

28:36

health, people's right of appeal,

28:38

and so on. Now, in

28:40

its own way, this looks

28:42

a little bit like other

28:45

eras in which we have

28:47

seen vast new kinds of

28:49

automation and mass production coming

28:51

in when we've had the

28:53

ability to, as it were, automate

28:56

large amounts of things

28:58

that used to be bespoke.

29:00

And that can both remove

29:02

certain inequalities and create enormous

29:04

potentials for exploitation. And

29:07

we've had to have legislative interventions

29:09

around those. We don't have children working

29:11

in factories in a lot of

29:13

countries; we do in some. We provide

29:15

unions to those who might be

29:17

uneducated or might be a larger workforce.

29:19

We have universal education and universal

29:21

healthcare in some countries, although not in

29:23

others. These have been defined as

29:25

rights. So there's going to

29:27

be a real battleground here. Do people

29:29

have a right as is being

29:31

enshrined in legislation like the EU's AI

29:34

Act, which for all its flaws

29:36

is a very bold attempt to talk

29:38

about rights? Do people have a

29:40

right to have things explained to them,

29:42

to have explicability? Do people have

29:44

a right for an algorithm not to

29:46

monitor certain aspects of their kind

29:49

of biometrics because that's too intrusive? Do

29:51

people have a right to healthcare,

29:53

regardless of their genetic profile, when someone

29:55

might be too genetically risky to

29:57

insure? And again, we've seen this

29:59

with insurance legislation for decades,

30:01

by the way. Of course, insurance companies

30:03

are simply not allowed to, as it

30:06

were, kind of micro -profile you and

30:08

deny you these things. So these debates

30:10

have been had before, but... But we're

30:12

doing it on the fly. We're rebuilding

30:14

the ship while we're sailing. And

30:16

the trouble with rebuilding your ship while

30:18

you're sailing is you often only tend to,

30:20

so to speak, pass the relevant legislation

30:22

when you watch a large chunk of your

30:24

ship drop off and a whole bunch of

30:27

people in the water screaming at you,

30:29

I'm mixing my metaphors now. So

30:31

I have no answers to this.

30:33

What I do tend to advocate

30:35

just in general is that these

30:37

ethical concerns and these very baseline

30:39

concerns about what are our beliefs about

30:42

people, about rights and so on,

30:44

are present in discussions, are present in

30:46

the boardroom, and I don't think

30:48

it's just idealistic because I think there's this

30:50

huge kind of reputational long-term hazard

30:52

to organisations for being on the

30:54

wrong side of these things. But when

30:57

we make decisions about data, we

30:59

make decisions about people's lives, people's

31:01

childhoods, people's rights, and we have

31:03

to be very careful indeed just at

31:05

the idea that we can hand

31:07

it over to systems and it's all

31:09

good. Equally we have

31:12

to be pragmatic and so sometimes

31:14

the solution there is one: to

31:16

try and listen to the end

31:18

users, try and talk to people,

31:20

to try and talk to people

31:22

rather than to talk for them.

31:24

And spend as much time as

31:26

possible with people talking about their

31:28

experiences of healthcare, education, work and

31:30

management and see what you can

31:32

do about actually meeting them

31:34

where they wish to be met. Let me

31:36

volley a thought over to you

31:38

and preface it by saying I may

31:40

not formulate the words as effectively as

31:42

I want, but it's something that came

31:44

out in the work I'm doing in

31:46

reading Wise Animals. It was a note

31:48

that I kept; the feeling was recurring

31:50

to me, which is this idea that

31:53

maybe what we're seeing is a devaluation

31:55

of knowledge workers, meaning our perception of

31:57

what knowledge work is when you put

31:59

generative AI into it, isn't about AGI

32:01

or super intelligence. It's about the fact

32:03

that it's pretty damn good. It can

32:05

actually replicate a lot of the repetitive

32:07

tasks that executives or knowledge workers do,

32:09

and it does it in a great

32:11

way. And that perhaps if

32:13

we recognize that and see that, that's

32:16

the cataclysmic problem. Like, oof, got

32:18

this advanced degree. I find myself in

32:20

some level of what I would

32:22

call superiority. I manage people, tell them

32:24

what to do, get ridiculous sums

32:26

of money for it based off of

32:28

them doing the actual hard labor.

32:30

There's all of that. And

32:32

suddenly, it's the wizard. It's the curtain being

32:34

open that says, actually, the work that you're

32:37

doing is pretty easy to replicate and to

32:39

do. I think there's

32:41

a lot of truth in that.

32:43

I think that ties in

32:45

with a lot of social resonances,

32:47

if you like, that are

32:49

still in process. I

32:51

feel that there's a connection

32:53

there between this intuition and

32:55

a lot of the tone

32:57

of populism, when people have

32:59

been saying a version of,

33:02

we've had enough of experts, of

33:04

these finely educated people who

33:06

don't know what's really going

33:09

on. The technocrats,

33:11

these people who are so clever with

33:13

their degrees, they're not delivering. They

33:16

make promises, they claim they're so clever,

33:18

but they're not. I'm not

33:20

pronouncing on the rightness or wrongness of

33:22

that, but it's a vibe. And

33:24

certainly, once you can

33:26

go to ChatGPT or

33:28

Gemini or whatever and replicate the

33:30

work that five years ago

33:32

you were being told was so

33:34

fancy and unique that you

33:37

needed seven years of education and

33:39

a lot of letters after

33:41

your name to do it. It

33:43

does something to the scarcity

33:45

value of that work. And

33:47

of course, a lot of members of

33:49

the academy of the elite have got a

33:51

lot of, if you like, moats and

33:53

draw bridges around the place. It's highly specialized

33:55

for categories. Lots of

33:57

qualifications that are kind

33:59

of keep out signs.

34:03

I am already

34:05

suspicious, I think, of people

34:07

who deal in jargon, of people who deal

34:09

in abstraction. Sometimes you need to, of

34:11

course, especially if you're operating in rocket science

34:13

or biochemistry, of course you need to

34:15

deal in jargon. But when it comes to

34:17

explaining why these things matter and why

34:19

things are important, I think

34:21

a lot of what passes for intellectualism,

34:23

there is a little bit of Wizard

34:25

of Oz going on. We

34:27

are seeing, I think, a much greater

34:30

premium going on to the soft skills

34:32

of empathy, of collaboration, or if you

34:34

like, their kind of mirrored sides, their

34:36

dark aspects, charisma, whipping

34:38

people up into a frenzy, manipulation.

34:40

Charisma and empathy are not

34:42

necessarily good things. They are enabling

34:44

virtues rather than cardinal virtues. So

34:47

I think you're absolutely right about

34:49

this. And I think one

34:51

of the interesting things for me is

34:53

where we flip over from the things

34:55

that people are suspicious of, like a

34:58

bunch of experts talking in a kind

35:00

of high-falutin language about things

35:02

that may or may not actually benefit

35:04

others, towards the stuff perhaps that they

35:06

do love, beloved stories, beloved

35:08

performers, singers, songwriters, Taylor Swift,

35:10

who is in one sense,

35:12

absolutely a member of the

35:14

hyper elite, an incredibly talented

35:16

performer and songwriter, incredibly charismatic,

35:18

global figure. But at the

35:20

same time, someone who

35:22

speaks in a common idiom and generates

35:25

great popular works of art. And people

35:27

are not too happy, quite rightly, about

35:29

her image, her songs, her lyrics and

35:31

her ideas being ripped off by AI. And

35:34

people like that are leaders in

35:36

the movement against these synthetics. Someone like

35:38

Nick Cave, who is perhaps a

35:40

more kind of elitist, but nevertheless has

35:42

a huge audience. I love

35:44

video games as an art form. I've

35:46

written a book about video games. I

35:48

love them partly because the great video

35:50

games are demonstrably incredible works of art

35:52

and also wildly popular and accessible. And

35:55

so I do think that

35:57

suddenly a lot of people who

35:59

maybe had cushy lives but

36:01

whose status was unproblematic to them

36:03

now have to sweat quite

36:05

a bit and find new ways

36:07

to sing for their suppers. And

36:10

this is a really interesting

36:12

moment and a dangerous moment

36:14

perhaps because very easily a

36:16

meaningful critique can flip over

36:19

into kind of demagoguery, rabble-

36:21

rousing, and the devaluing of

36:23

the very precious thing that

36:25

lies behind the jargon, which

36:27

is knowledge that has been

36:29

tested and hard won, and

36:31

that is coupled to a

36:33

really sincere effort to understand

36:36

the world better, to understand

36:38

people better, to make it

36:40

better. I have a quick

36:42

story that I think illustrates this, which I'd

36:44

love just you to react to, which is

36:46

I was giving a keynote presentation. And

36:48

I was told for the morning keynote

36:50

that I need to come in a bit

36:52

later than I normally would to make

36:54

sure everything works because they were doing their

36:57

in-camera board meeting for this association of

36:59

a very regional but large industry. And,

37:01

you know, I'm waiting in the hallway and someone

37:03

says, oh, it's okay. Like, clearly you're not somebody who's

37:05

going to spy or write articles about this. Go

37:07

in and do your little setup. So

37:10

they were talking primarily about the

37:12

five major issues facing this large industry

37:14

within this one specific geography. And

37:16

they're talking and I'm doing my thing

37:18

and somewhat laughing a little bit

37:20

because one of the things I will

37:22

do to just demonstrate how generative

37:24

AI makes us feel like it's intimate

37:26

is the way it types out

37:28

and thinks and does all these things

37:31

that are very fake, but they're

37:33

sending signals to us that it's just

37:35

for us or intimate, which I

37:37

think is another conversation point. And

37:39

what I'll typically say is you

37:41

are an expert in industry XYZ.

37:43

You are specialized in region. What

37:46

are the five major issues facing

37:48

that industry? And I do

37:50

this in what I call virgin mode, meaning it's not

37:52

using my custom interactions. It's not using the legacy. It's

37:54

just pure: log in and see what it does. And

37:56

it was almost exact to the

37:58

issues that they were discussing in camera

38:00

as if it had this depth,

38:02

and it was using acronyms specific to

38:04

laws that are happening and that

38:06

are being enforced in their region. And

38:08

I just remember the jaws being

38:10

completely dropped because it wasn't even your

38:12

knowledge. It was the understanding of

38:14

the area, which again, you would think

38:16

is I have to pay this

38:18

association fee. I'm paying them to advocate

38:20

on my behalf and yet AI

38:22

in its own little simple way was

38:24

able to pull out every single

38:26

issue. It reminds me of your email

38:29

anecdote. You don't have to

38:31

do the first draft. The first draft is

38:33

there waiting for you. And so the skill

38:35

becomes what can you do with that first

38:37

draft that another AI or a person on

38:39

the street or even a rival organization can't

38:41

do. And of course, one thing people know

38:43

about is their own organization, in real depth,

38:45

what it's capable of. There are five things

38:47

you could do, but you know that one

38:50

of those is gonna work and

38:52

that for those, these people, you've got

38:54

content, or you need these people. But

38:56

it is, it's that first-drafts thing,

38:58

and I think it

39:00

makes a lot of people very uncomfortable and it

39:02

should. Because a

39:04

subsidiary question is, well, is there

39:06

a benefit nevertheless to that group

39:08

of people coming up with these

39:10

five things, even if an AI

39:12

could have come up with it

39:14

for them? Is that process intrinsically

39:16

valuable? And I would

39:18

suggest that sometimes there is intrinsic

39:21

value to that process, but you

39:23

can absolutely have a kind of

39:25

AI pre-read that does some of

39:27

the heavy lifting for you. And

39:29

interestingly, some of the work I

39:32

do is often trying to come

39:34

up with, if you like, more

39:36

interesting structures for board meetings, for

39:38

debates, in order to get the

39:41

most out of people and systems.

39:43

And I put a big emphasis

39:45

on pre-reading, on people contributing

39:47

in advance of that, inputs from

39:49

quite diverse perspectives to big picture

39:51

prompts. That's quite the Jeff Bezos

39:54

pre-meeting strategy. Absolutely. And I

39:56

think Amazon's strategy on their meeting setups

39:58

is cognitively very literate, if you see what

40:00

I mean, because it's putting a huge

40:02

emphasis on making the most of your

40:04

people and not getting the people to

40:06

behave in a machine-like way. And

40:08

again, another famous thing that Amazon does

40:10

is you tend to get your data

40:12

from APIs yourself rather than from a

40:14

slide deck produced by somebody else. Because

40:17

then you are an interactor and a

40:19

questioner rather than just the passive recipient

40:21

of someone else's canned presentation. Talking

40:23

to law firms, they worry a lot.

40:25

Sorry, I'm getting carried away here. No,

40:27

please. About the fact that, on the

40:29

one hand, it is potentially transformative for

40:31

them. That AIs can just do

40:33

a lot of legal grunt work, a lot

40:35

of paralegal or junior partner type work, even very

40:37

fast and very efficiently, and with a moderate

40:40

to high degree of accuracy. On

40:42

the other hand, how are you going

40:44

to have the senior partners of

40:46

tomorrow if they haven't done the time

40:48

working down in the nitty gritty

40:50

of contracts and precedents and draftings and

40:52

really honing their skills? I

40:54

don't have an answer to that, but

40:56

one thing I always point out to them

40:59

as well is it's not only that.

41:01

It's also that because everybody has this technology

41:03

and the man or woman on the

41:05

street has this technology, suddenly the legal environment

41:07

is going to become incredibly noisy because

41:09

I can get a bot to file a

41:11

suit for me. I can get a

41:13

bot to generate precedents and another bot might

41:16

read it and suddenly we're in this

41:18

environment where really good advice and really good

41:20

strategy is also going to be about

41:22

dealing with the level of what David Foster

41:24

Wallace called in the context of television

41:26

total noise. just this

41:28

ultra-noisy, algorithmically enhanced environment

41:31

where your skills to cut

41:33

through that strategically and

41:35

intellectually becomes sort of hyper-

41:37

amplified. I had two thoughts

41:39

on this as you were talking, and

41:41

it's thoughts that I've put forward,

41:43

but I think they're just germane to

41:45

the conversation, interesting stuff to discuss,

41:47

which is the answer would seem to

41:49

be almost twofold. One

41:51

is the word that you haven't used

41:53

that I use often is commune.

41:55

The value is in the commune. The

41:57

value is in the fact that

42:00

we are in the room together and

42:02

we're using whatever technological tools we

42:04

have. But clearly, if we learned anything

42:06

from COVID and the emergence of

42:08

AI, it's that, well, we said shopping

42:10

malls were dead long before COVID.

42:12

Suddenly, everyone's going to shopping malls. We recognized

42:14

in the removal of the physicality

42:16

how much we require it. We saw

42:18

people being arrested for gathering in

42:20

parks and things like that. So clearly,

42:23

and we're seeing this just in the pushback of

42:25

being a keynote speaker of events, of people going

42:27

to physical movie theaters. I don't think that's going

42:29

to last forever, but they're still doing it now. And

42:32

then the other thought is maybe

42:34

it's more of an insane provocation

42:36

that I've made is what we

42:38

might need to do is make

42:41

these advanced degree professions trades. I

42:43

mean, what if we approach the

42:45

education of a doctor, lawyer, engineer, and

42:47

others as a trade? Because we

42:50

know you're going to be supported in

42:52

healthcare by a healthcare practitioner, a

42:54

nurse practitioner. We know we're going to

42:56

be using AIs. The brunt

42:58

of work and knowledge that typical doctors

43:00

have done historically is no longer there.

43:02

And what if we did that? Perhaps

43:04

we would have an influx of more

43:07

people who actually wanted to care for

43:09

other people. Perhaps we would have a

43:11

profession of legal where it isn't just

43:13

my wordsmith out-wordsmithing your wordsmith in

43:15

a court of law, but actual thinking

43:17

about what it means to protect both

43:19

the business and the interest. Those

43:22

were my two paths and they're very divergent.

43:24

So feel free to tack on where you want.

43:26

Well, I'd be interested to trample them together

43:28

because I do think the people in the room

43:30

or the connection between the people because, of

43:32

course, for certain kinds of people, a

43:34

combination of the kind of online,

43:36

the technological and the in-person can

43:38

build relationships, build deeper relationships and build

43:41

relationships with people who can't always

43:43

physically get to the room. I

43:46

particularly think in the context of

43:48

things like medicine and healthcare, which

43:50

is an area I know well because of

43:53

family and friends who work in these areas

43:55

and work I've done, which was very

43:57

much seeing a move toward

43:59

kind of problem-based education, where

44:01

you're hands -on, where you're dealing with

44:03

practical scenarios because, among other things, it's

44:06

your ability to work in an interdisciplinary

44:08

team in a healthcare system in the

44:10

real world that has a profound effect

44:12

on patient outcomes. It's

44:14

about what happens in the

44:16

operating theatre, in the

44:18

consultation, or on the complex

44:21

care pathways and the

44:23

interactions between primary care physician,

44:25

secondary care physician, the

44:27

community support worker. the pharmacist,

44:29

the cardiologist, the hematologist, and

44:32

so on. What this

44:34

suggests to me is that it's

44:36

absolutely right that the interpersonal

44:38

and communicative skills and the ones

44:41

that are to do with

44:43

people's ability to understand each other

44:45

and communicate well, very

44:47

inspired by Robin Dunbar's work, among others.

44:49

He wrote a book recently

44:51

with Tracey Camilleri and a bunch

44:53

of business experts, talking about

44:55

the implications of his sociological work,

44:58

the famous Dunbar number of 150

45:00

people that we can perhaps

45:02

form kind of emotionally meaningful relationships

45:04

with. What this implies for

45:06

organisations, it puts an

45:08

enormous emphasis upon empathy

45:10

building, upon communication, upon morale,

45:12

upon these spaces that

45:15

permit constructive disagreement and so

45:17

on. And I would say

45:19

that what we have in a trade, of

45:21

course, is a model of

45:23

education where if it's done well,

45:25

you are having people learning on

45:27

the job in a cohort who

45:30

are mentoring each other, learning from

45:32

experience, from other people, solving practical problems

45:34

with their hands in the real

45:36

workplace. You don't learn to be

45:38

a carpenter in theory. You don't

45:40

learn to be a great plasterer in

45:42

theory. You learn on the

45:44

job. And I think organizations

45:46

and if you like elite workplaces

45:49

should look a lot more like

45:51

this, because what does it mean

45:53

to have a bunch of different

45:55

people with different skills and experiences?

45:57

first build real rapport and really

45:59

approach a problem and grasp that

46:01

problem and grasp its complexities and

46:03

grasp the experiences and not just

46:05

its algorithmic representation. And then

46:07

what does it mean for them to get down to it,

46:09

communicate well, communicate frankly, adapt,

46:11

change, maintain high morale,

46:13

deliver value to people in the real world,

46:16

deliver the stuff that people value,

46:18

which is going to be

46:20

increasingly about what algorithmic systems can't do,

46:22

or compensating for the weaknesses

46:24

of algorithmic systems, which will tend

46:26

to be very poor at

46:29

reframing, critical thinking, which will tend

46:31

to be very poor at

46:33

truly understanding that which cannot be

46:35

quantified. And of course, at

46:37

delivering a meaningful interpersonal relationship. So

46:39

there's a lot to be learned from that.

46:41

And maybe there's a great case to be

46:43

made in business schools and elsewhere for

46:45

working more closely with industry, for working more

46:48

closely in graduate work, in part-time

46:50

work, in not just a bunch

46:52

of people at the age of 17, 18,

46:54

19, 20 going and having a largely

46:56

abstract elite education. And then it

46:58

being assumed that they can go and

47:00

work in knowledge jobs and be paid

47:02

very well for that. And

47:04

of course, you pointed out

47:06

earlier that there is quite a

47:08

lot of damage being done

47:10

to people's morale, people's sense of

47:12

self by this feeling that

47:14

they are cut off from meaning

47:17

in their work, that they

47:19

are not in a team of

47:21

people communicating meaningfully with each

47:23

other and with customers, clients, users,

47:25

society that they're in a kind

47:27

of abstract information bubble where they're busy

47:30

training the systems that will replace

47:32

them. That is not good for organisations,

47:34

not good for people, not good

47:36

for society. And we are seeing

47:38

also I think huge growth in higher

47:40

education in precisely what you might call

47:42

the informal areas. Universities that

47:44

are more affordable, that are more flexible, that

47:46

will let you skill up while working in

47:48

the real world while being paid or having

47:50

other skills that will come in when

47:52

you're older. We're seeing more professional education, more

47:55

universities that say to people, yeah, you're in

47:57

your 30s, you're in your 40s, you've got

47:59

kids, you've got dependents. We'll train you

48:01

up to be a nurse. We'll train you

48:03

up to be an engineer. We'll train you

48:05

up to be an electrician. We'll train you

48:07

up while you do other things. And

48:09

this learning will not be the

48:12

kind of learning that an 18

48:14

-year-old who spends their entire time

48:16

interacting with a screen might expect.

48:18

Let's talk a little bit philosophically

48:20

about what you're seeing in terms

48:22

of artificial intelligence. Do

48:24

you see the emergence of

48:26

AI as vastly different from other

48:28

technologies? And the reason

48:30

I'm submitting this idea to

48:33

you is because I've been

48:35

involved in technology since personal

48:37

computers came in, early

48:39

80s. My sentiment, having gone

48:41

through the cycle and developed successful businesses

48:43

and others in every cycle of

48:45

this, is that I don't even think we're

48:47

thinking about this 1% correctly. That's my general

48:49

sentiment. And yet others are kind

48:51

of passing it off like it's just

48:53

the technology and we adapt as humans

48:56

and it's okay and I have a

48:58

really troublesome time with that, and I

49:00

self-admit that I have an even

49:02

harder time having the debate and podcast

49:04

with them because they're typically business executives

49:06

who aren't really thinking about it.

49:08

They're like, oh, well, just adapt. It's

49:10

technology. And I have this

49:13

blood-draining-out-of-my-brain look of

49:15

complete fear where I think as we

49:17

develop this, and by the way, whether

49:19

or not we hit AGI or superintelligence,

49:21

I'd even make the argument that if

49:23

it stopped right now, it's deadly dangerous

49:25

in terms of what it can do

49:27

already. I'm inclined to agree with you

49:29

really. I try to resist hype, not

49:32

because I think things should be minimized,

49:34

but because I think hype often points in

49:36

the wrong direction with too simple a

49:38

story. As

49:40

people like Brian Merchant have pointed out in

49:42

their work, hype is a kind of marketing

49:44

exercise even when it's negative. This

49:46

is so powerful and so big, you can't

49:48

afford to miss out on it. It's so vast,

49:51

it's so important, it's so epoch changing, you've

49:53

got to just buy in and go with it.

49:55

That sort of diminishes our agency. But

49:57

I think it's huge and I am

49:59

bewildered by people who minimize in the

50:01

sense of saying, ah, it's, you know,

50:03

same old, same old. You know, I

50:05

have a system in my pocket that

50:07

I can ask any question in the

50:09

world in natural language and get a

50:11

fairly sensible answer. Perhaps even more worryingly,

50:13

sometimes that answer will be misleading or

50:15

utterly wrong and yet it will be

50:17

incredibly fluent and confident. It can draw

50:19

me a picture, it can make me

50:21

a video, it can imitate a person,

50:23

it can copy a book and so

50:25

on. And

50:28

so things that were a few years

50:30

ago, so far as most people are

50:32

concerned, uniquely the province of humans or

50:34

highly skilled humans, are now potentially commodified

50:36

and out there. We've suddenly got a

50:38

system that can write you a good

50:41

essay in almost any humanities subject, such

50:43

that you can't really tell whether it's

50:45

an AI or a pretty well -informed human.

50:47

That is crazy. We've got a system that you can pop

50:49

a book into it and say, what's this book about? Pretend

50:52

to be the author and it will do that. I

50:54

mean, this is so

50:56

big, it's hard to know what to do with it.

50:59

I don't think we should do what a lot of AI

51:01

people say we should do with it in just sort of,

51:03

I don't know, submit to the utopian. But

51:05

equally, it is a

51:07

massive deal. And of course, it

51:09

feeds upon previous steps in

51:11

the way technology does. So

51:13

it's predicated upon vast amounts of data.

51:16

So we have turned our

51:18

world, or we have generated, as

51:21

it were, a machine-readable version

51:23

of our culture, and all aspects of

51:25

our world. And now machines are

51:27

reading it and analyzing it. An enormous

51:29

power comes with that. The sheer

51:31

thrill of it, I don't even know. We've

51:34

generated the ability to do this. And I've been

51:36

in San Francisco recently. Waymos toddle

51:38

around the place, self-driving

51:40

vehicles. I watch them a little bit like

51:42

I'm watching a new species in the

51:44

wild. Watching their behaviors, watching them flock

51:46

and swarm and have this kind of

51:48

It looks more like an ecology. They're

51:50

doing their little things on the road and

51:52

they don't interact with the road system like

51:54

normal human-driven vehicles. But

51:56

they're out there. I know there's remote people

51:58

monitoring them. It's

52:01

really big and what we want

52:03

to do, I think, is be

52:05

thinking about it and trying stuff,

52:07

keeping our wits about us and

52:09

experimenting and celebrating and warning and

52:11

getting specific and not just lurching

52:13

towards a simplified story that says,

52:15

oh, no big deal or such

52:17

a big deal, all our problems

52:19

have dissolved. And this, of

52:22

course, is what people find difficult psychologically. We

52:24

tend to retreat to our tribes.

52:26

We tend to cope with the

52:28

world by expressing allegiance to

52:30

an oversimplified narrative. It's all good.

52:32

It's all bad. New technology is great. It's awful. It's

52:35

going to save us. It's going to doom us. All the

52:37

companies are bad and cynical and awful and all they

52:39

want is profit. It's the worst thing ever. And

52:41

this doesn't make sense to me. I

52:43

want to be there with people just

52:45

talking about what they're doing, what

52:47

they're learning, and how they're

52:49

learning and trying to be

52:52

useful, trying to give people thinking

52:54

tools for conducting meaningful experiments.

52:56

So you make a point in

52:58

the book Wise Animals that what

53:00

we're seeing is how we

53:02

co-evolve with technology as individuals. And

53:05

we have seen this in terms of

53:07

how we've co-evolved with other species

53:09

over thousands of years. And I might

53:11

be very, very basic or pedantic in

53:13

this. I apologize if that's... The thought

53:15

that I was wondering is... So we

53:17

know that we've risen to the top

53:19

of the food chain, whether we should

53:21

or shouldn't is another philosophical Darwinism type

53:23

of conversation. But if we are introducing

53:25

this thing that you and I are

53:27

talking about now at that level, are

53:30

we still able to think about it

53:32

in this way? Are we still able

53:34

to think about it as a coexistence

53:36

in and of itself?

53:38

It becomes top of the food chain

53:40

in terms of just even how we

53:42

would define knowledge. So I've seen people

53:44

talk even about some of the things

53:46

that have happened recently with Google and

53:48

their leap in terms of quantum computing,

53:50

that if you really break it down

53:52

what it's doing from a knowledge standpoint, we

53:55

wouldn't even understand it. We couldn't

53:57

even talk, use words that we have

53:59

because of how it could operate

54:01

or think about the world and understand

54:03

it. What happens when we're not

54:05

co-evolving but suddenly we're not top

54:07

of the food chain either? So

54:10

for me, just my perspective, I

54:13

think that the most powerful

54:15

thing about understanding us as a

54:17

part of the planetary systems

54:19

that we evolved alongside, and

54:21

about technology as something distinct,

54:23

unique to our evolutionary heritage

54:25

that we evolved with and

54:27

think through, is that we

54:30

are still in the same situation.

54:32

But the situation has always been

54:34

one in which as thinking beings,

54:36

we live alongside and

54:38

entwined with the incomprehensible,

54:40

the cosmic, the vast. There's

54:43

a nice line from a

54:45

philosopher whose name will have

54:47

to come back to me,

54:50

because we're doing this live and I

54:52

can't… We'll ChatGPT it, exactly.

54:54

No, I refuse to do that for

54:56

now, but dig it up myself. Go

54:58

ahead. Who suggests that partly what we're

55:00

doing in the context of Big

55:02

Data and AI is it's allowing us

55:04

to have almost a kind of

55:06

a direct interface with incomprehensible complexities, with

55:08

the kind of stochastic nature of

55:10

the universe. And certainly Google's Willow is

55:12

interesting. If you follow what's

55:14

been said by David Deutsch and

55:16

others, there's a sort of non

55:18

-trivial prospect that quantum computation is

55:20

effectively indirect empirical evidence for the

55:22

existence of a multiverse, that

55:25

a calculation is effectively performed simultaneously

55:27

across all the superimposed versions of

55:29

reality. And then the wave function collapses

55:31

in this universe and we get

55:33

an answer.
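That multiverse reading is contested, but the basic mechanics being described can be sketched in a few lines. The toy below is purely illustrative and assumes nothing about Willow's actual hardware: a register carries amplitudes for every basis state at once, and measurement samples a single outcome by the Born rule.

```python
import random

# Toy "superposition then collapse". A 2-qubit register carries
# amplitudes for all 2^2 = 4 basis states simultaneously.
amplitudes = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}  # equal superposition

def measure(state):
    """Collapse: sample one basis state with probability |amplitude|^2 (Born rule)."""
    weights = [abs(a) ** 2 for a in state.values()]
    return random.choices(list(state), weights=weights, k=1)[0]

print(measure(amplitudes))  # e.g. '10': the one answer we see "in this universe"
```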

55:35

Now, my goodness, that's an astonishing thing. But also, it's

55:38

like touching upon something

55:40

of such incomprehensible vastness that it shrivels

55:42

our minds to the size of

55:44

a speck. But that's what ought to

55:46

happen every time you look at

55:48

a plant or the night sky. We

55:51

are a tiny, tiny

55:53

part of a vast, largely

55:55

incomprehensible universe, and yet

55:57

we comprehend, and yet

55:59

we build tools that allow

56:01

us to look at its

56:03

grandest structures, megastructures in a

56:05

kind of gigaparsecs across, and we'll

56:08

look down to the Planck length. And

56:10

we'd already done this stuff before AI came

56:12

along. So AI is part

56:14

of us and our journey, I think. And

56:17

I think that the model for it

56:19

is not that it is something separate

56:21

from us, it is utterly dependent upon

56:23

us, utterly and wholly dependent upon us.

56:25

But we, in turn, are utterly and

56:27

wholly dependent upon this planet that we

56:29

live on all of our life. We

56:31

are dependent rational animals, to quote Alasdair

56:34

MacIntyre, a great philosopher of virtue ethics.

56:36

So our dependency has always been there. And

56:39

we've always been dependent. We may think we're

56:41

masters, we're not. That's a delusion. That's absolute

56:43

nonsense. Of course, we're not independent. What would

56:45

that even mean? But

56:47

we weave these vast collective

56:49

nets of power and

56:52

comprehension. They don't diminish our

56:54

ultimate dependency, but they

56:56

vastly increase our agency. And

56:58

yes, we have exponentially

57:01

increased our collective agency and

57:03

knowledge as a species, I think,

57:05

through the ability to interpret

57:07

and extrapolate and mobilize data. And

57:09

I don't even know what it

57:12

will mean. But I want for

57:14

myself to insist on it being

57:16

a revelation of something that has

57:18

always been true, that we lived

57:20

as part of this vast kind

57:22

of cosmic complexity and order, that

57:24

we are a tiny part of

57:27

it, and the miracle of us

57:29

and technology is that somehow, somehow,

57:32

we managed to focus and

57:34

harness and render actionable

57:36

this immensity. You know,

57:38

the cliff notes on that is we are

57:40

going to see what was previously unseeable. Yeah,

57:43

or develop something that can see it

57:45

on our behalf. But remember, you know,

57:47

think about Hooke and the microscope.

57:50

He was sketching drawings

57:52

of the previously inconceivable.

57:55

or even the intuitions

57:57

that lead people towards

57:59

belief in a deity. Again,

58:02

intimations of the vastness, the

58:05

unknowable, the strangeness. That's what we do

58:07

as a species. We reach beyond, we

58:09

self-reinvent, we exceed, we are never content. It's

58:12

our curse and our blessing. You

58:14

talk in the book about the

58:16

need to have compassion, curiosity, and

58:18

humility. And I wanted to

58:20

give you an addition to that. That was

58:22

a gift, I think, to me just

58:25

in being in a moment, which is I

58:27

was at a conference here in Montreal

58:29

where I live. And very fortuitously, I live

58:31

in a city that also has Yoshua

58:33

Bengio here, who is one of the early

58:35

developers of large language models, LLMs, and

58:37

just AI. He recently won the Turing Award

58:39

and he was being interviewed on stage.

58:41

And what he said was so fascinating

58:44

to me, it's a thought I always

58:46

bring to these intellectual AI conversations. They

58:48

asked him what he would hope would

58:50

be built into these systems. And he said,

58:52

only one thing: self-doubt. And

58:55

I thought it's such a beautiful addition

58:57

to what you had written in Wise

58:59

Animals, this idea that if these engines

59:01

are in some way superintelligent or

59:03

AGI or even in its current state,

59:05

if it just had some level of

59:08

self-doubt, that might provide a lot

59:10

of pathways for humans to interact with

59:12

it more effectively. Exactly right. And people

59:14

like Stuart Russell say absolutely the same

59:16

thing. And I think they're quite right. And

59:19

doubt is a gift, of course, because

59:21

to doubt is to identify,

59:23

is to be interested

59:25

in that which we do not yet know,

59:27

is to admit we don't know everything,

59:29

that our understanding is uncertain and yet to

59:31

try to move in the direction of

59:33

becoming less deceived. And it

59:35

strikes me as a very powerful thing

59:38

to work into AI. Now, there are

59:40

some people who say that the current

59:42

architecture of LLMs is, if you like,

59:44

inherently hallucinatory, inherently stochastic, and to that I say

59:46

maybe. But even then, how can

59:48

we make that a feature rather than a

59:50

bug? This incredible kind of creativity, this incredible

59:52

ability to pattern-match and extrapolate. I think you're

59:54

quite right. Doubt and an off switch. That's

59:57

the two things I want the most for machines.
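Building that kind of self-doubt into a system is an open research problem, but one crude, common proxy is ensemble disagreement: answer only when independent models agree, abstain otherwise. The sketch below is a minimal illustration under that assumption; every name in it is hypothetical rather than any real API.

```python
import statistics

def answer_with_doubt(models, x, max_spread=0.1):
    """Toy machine 'self-doubt': abstain when an ensemble disagrees.

    `models` is a list of callables returning a float prediction for x.
    The standard deviation of their outputs is a crude uncertainty proxy;
    a production system would use calibrated probabilities instead.
    """
    predictions = [m(x) for m in models]
    spread = statistics.stdev(predictions)
    if spread > max_spread:
        return None, spread  # express doubt rather than bluff an answer
    return statistics.mean(predictions), spread

# Hypothetical usage: three near-identical "models" that mostly agree.
ensemble = [lambda x: 2 * x, lambda x: 2 * x + 0.01, lambda x: 2 * x - 0.02]
print(answer_with_doubt(ensemble, 3.0))  # close agreement, so it answers
```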

59:59

But I would not even argue that

1:00:01

human beings hallucinate more than these AIs

1:00:03

do. Well, that's, I mean, you

1:00:05

know, Anil Seth and others have talked

1:00:07

about our consciousness as a controlled hallucination. We

1:00:10

do not have direct access to reality. Consciousness

1:00:13

and conscious perception evolved for utility,

1:00:15

not accuracy. So we are coming

1:00:17

up with actionable partial apprehensions of

1:00:19

the world on the basis of

1:00:21

feedback loops that weave in memory

1:00:23

and experience, we confabulate all the

1:00:25

time. Yeah, absolutely. But

1:00:27

interestingly, of course, what we do and

1:00:29

LLMs are not very good at

1:00:31

doing is we have the constant, as

1:00:33

it were, dampening effect of sensory

1:00:36

input that is constantly encouraging us to reorient

1:00:38

our hallucination. So our hallucination is

1:00:40

anchored, so to speak. And the question

1:00:42

is, what is a similar anchoring

1:00:44

mechanism for LLMs?
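One candidate answer to that closing question is retrieval grounding: only let the model speak when it can point at evidence, the way the senses keep human confabulation in check. The sketch below is a hedged illustration of the idea, not a description of any real product; the `generate` callable and the word-overlap heuristic are assumptions made for the example.

```python
def grounded_answer(question, documents, generate, min_overlap=2):
    """Toy sensory-style anchor for an LLM: retrieve supporting evidence
    by naive word overlap, and refuse to answer when nothing matches."""
    q_words = set(question.lower().split())
    evidence = [d for d in documents
                if len(q_words & set(d.lower().split())) >= min_overlap]
    if not evidence:
        return "I don't have grounds to answer that."
    return generate(question, evidence)  # generator constrained to the evidence

# Hypothetical usage with a stand-in generator:
docs = ["The St. Lawrence River flows northeast from Lake Ontario."]
def stub(q, ev):
    return "Based on: " + ev[0]

print(grounded_answer("Where does the St. Lawrence River flow?", docs, stub))
```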

1:00:46

Love that. Tom, tell me the one thing that made you think differently. I

1:00:49

mean, it's a very,

1:00:51

perhaps, obvious answer, but

1:00:53

becoming a parent. No,

1:00:56

this isn't a parent. We have a common

1:00:58

theme here, yes, for sure. Yeah, my children

1:01:00

are 9 and 11, and it changes everything. It

1:01:02

doesn't mean everyone should have kids. It's maybe made me

1:01:04

much more sympathetic to people who don't have

1:01:06

kids. Having kids is very serious business, you

1:01:08

really shouldn't do it unless you really want

1:01:10

them. But just these little

1:01:12

pieces of your heart walking around in

1:01:14

the world and also just the daily

1:01:16

relationship with the extraordinary limitations of your

1:01:18

ability to control things. I can't keep

1:01:20

them safe. I can't make them do

1:01:23

what I want. I can try. I

1:01:25

have to try and create the conditions

1:01:27

that will allow them to become the

1:01:29

best versions of themselves. And it's an

1:01:31

extraordinarily painful and strange and wonderful thing.

1:01:33

And it, you know, maybe it's teaching

1:01:35

me how to die, which is the

1:01:37

great task of life. You learn how

1:01:39

to do that well. And

1:01:41

you didn't even mention hallucinations, which is

1:01:43

something they do magically well as well. The

1:01:45

new book is called Wise Animals. There's

1:01:47

also How to Thrive in the Digital Age.

1:01:50

This Is Gomorrah, and other fascinating

1:01:52

books. Tom, let people know where

1:01:54

they can find out more about the new

1:01:56

book Wise Animals and where else you're

1:01:58

creating content and what you're up to. So

1:02:00

you will find me online on my

1:02:02

website, on Amazon and in other bookshops. I also design

1:02:04

business courses for people like the Economist, around

1:02:06

AI and critical thinking. I can also be

1:02:08

found writing online on Substack and appearing

1:02:10

cheerily on YouTube and other places, talking and

1:02:12

broadcasting. So I'm quite easy to find,

1:02:15

especially if you ask an AI to help

1:02:17

you. I love that. Well, Tom, thanks

1:02:19

so much for your time. It's been

1:02:21

my very great pleasure, Mitch. Thank you for having me. Spells

1:02:37

on the air See

1:02:39

there it's crushing The

1:02:41

final impression Stains on

1:02:43

the bay But where

1:02:45

words fell like water

1:02:47

On earth all the

1:02:49

changes That never didn't

1:02:51

matter I think it's

1:02:53

beginning to freeze here

1:02:55

I'm caught in the

1:02:57

rage And the fire

1:02:59

of things All the

1:03:01

brightness that burns me

1:03:03

I'm fumbling through like

1:03:05

a child in the

1:03:07

dark when the nakedness

1:03:09

comes I am shocked

1:03:11

by the color the

1:03:13

glorious way your skin

1:03:15

comes alive and I

1:03:17

never thought we'd make

1:03:19

it back so soon

1:03:21

might be nice Please

1:03:50

forgive me. Could we escape

1:03:52

all the bitterness? Powdered palm

1:03:55

bitterness. Held in the face

1:03:57

of the things that I

1:03:59

don't understand. Intellectual

1:04:01

lies. Over and

1:04:03

over this helplessness suits us.

1:04:06

Funny how quiet has

1:04:08

slipped to our corners. One

1:04:10

of our riches away,

1:04:12

you'll watch it. I'm

1:04:14

breathing and baiting. Wanting

1:04:17

and warming and consciously

1:04:19

weighing. And for some

1:04:21

simple signal To reap

1:04:23

cross your conscience Uncover

1:04:25

redemption And on it

1:04:27

I'll mention I carried

1:04:30

you down To the

1:04:32

St. Lawrence River The

1:04:34

banks running dirty The

1:04:36

waters beginning to freeze

1:04:38

air Solid by morning

1:04:40

And I'll freeze air

1:04:42

This winter by morning

1:04:55

Like so soon, might

1:04:57

be nice But

1:04:59

I knew you'd be

1:05:01

your own destroyer,

1:05:03

cause it's island So

1:05:24

when you'll face such

1:05:26

a curious greatest I

1:05:29

let go your hand

1:05:31

Was desperate to hold

1:05:33

you again But you're

1:05:35

second to deep in

1:05:37

the water Smarted by

1:05:40

myself and so easily

1:05:42

gave up what I

1:05:44

wanted Solid by morning

1:05:46

But I wanted It's

1:05:48

winter by morning back

1:06:00

so soon,

1:06:03

might be

1:06:05

nice But

1:06:08

I knew

1:06:10

you'd be

1:06:13

your own

1:06:15

destroyer And

1:06:20

And I'll freeze

1:06:23

here It's when

1:06:25

I'm by morning
