Sandra Matz: Your Data And Your Future

Released Tuesday, 7th January 2025

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:04

Welcome to Big Questions. This

0:07

is Cal Busman, lots to

0:09

talk about this week, starting

0:11

with how much information you

0:13

can give away about yourself

0:15

in an hour. Simply by

0:17

getting up in the morning,

0:19

going on the computer, doing

0:21

a few Google searches, sending

0:23

out a few text messages,

0:25

heading out to a Starbucks,

0:27

using a credit card to

0:29

pay for your coffee, and

0:31

walking down the street where

0:33

cameras are videoing you and

0:35

capturing your face so it can

0:37

recognize it. The episode moves on

0:39

to the algorithms that are sending

0:41

you what they think you

0:43

want to see on your phone

0:45

and computer so that you'll

0:47

keep coming back for more. And

0:49

how that separates us from

0:51

a time where many of us

0:53

could see the same things

0:55

as those around us and have

0:57

a collective reference point. Will

0:59

we all eventually see different realities

1:01

and be turned into a

1:03

universe of one? And this moves

1:05

on to the place in

1:07

the future where we'll have

1:09

chips implanted in our brains

1:11

and won't need to talk.

1:13

We'll think a question and

1:16

the answer will come back

1:18

to us. So welcome to

1:20

this week's episode of Big

1:22

Questions with award-winning professor at

1:24

Columbia University Business School, Sandra

1:26

Matz, who's written a book

1:28

called Mindmasters, the data

1:30

-driven approach of predicting and

1:32

changing human behavior. In fact,

1:34

it just came out today. If

1:36

you want to get an idea

1:38

of where the world is

1:40

headed, just listen to Sandra talk

1:42

about how our digital footprints

1:44

are being used to understand us

1:46

even better than we understand

1:48

ourselves. She'll show you where all

1:50

this has taken us and

1:52

offer suggestions on how we might

1:54

make our data work best

1:56

for us and society. So here

1:58

we go from a small

2:01

town in Germany

2:03

to Columbia University

2:05

in New York,

2:08

let's get straight

2:10

to Sandra Matz

2:13

and your future.

2:15

Sandra, it is a

2:17

delight to be with you

2:19

after going through your book

2:21

and... Then just talking to

2:23

you for a few minutes,

2:26

you came up with an

2:28

astounding fact that just shows

2:30

how many data points we

2:32

create that other people have

2:34

access to. We are probably

2:36

completely unaware of. We may

2:38

be aware of some, but

2:40

you know, your book begins

2:42

with you taking a walk

2:44

in chapter one, early one

2:46

morning, and like an hour

2:48

later, you return and you

2:50

point out that six gigabytes

2:52

of data has been amassed

2:54

about you and what I

2:56

want to know is all

2:58

right how could we see

3:00

that like you use these

3:02

phrases, gigabytes, terabytes, very hard

3:04

to like understand what this

3:06

means how many of these

3:08

little data points that we're

3:10

giving away without even knowing

3:12

we're giving them away. Is

3:14

there a way for you

3:16

to clarify it so somebody

3:18

can kind of get a

3:20

grip on it? Wrap their

3:23

head around it? Happy to,

3:25

and I thought it was

3:27

mind-blowing by the way. So

3:29

first of all it's it's

3:31

a lot of zeros, right?

3:33

Six zeros. And we just

3:35

did the math and essentially

3:37

every hour you and I

3:39

generate more data than the

3:41

spaceship that we used to

3:43

launch people into space

3:45

and land on the moon

3:47

had in terms of memory.

3:49

So it's like 500 million

3:51

times more data that we

3:53

generate every 60 minutes than

3:55

was on this chip that

3:57

flew the spaceship into

3:59

space. So to me, that's

4:01

pretty mind-blowing. What

4:03

does that mean for us?

4:06

Because we really don't know where

4:08

this is, that is going. At

4:10

least we knew back in 1969

4:12

it was taking us to the

4:14

moon. To the moon. But we

4:17

have no control over understanding

4:19

this. Where is it all

4:21

going? Like you start the

4:23

book with basically an hour

4:25

in your life that just seems...

4:28

you know, very ordinary. And then

4:30

you come up with this figure

4:32

that is to me, not ordinary at

4:35

all. For me, it's not even just

4:37

a figure, right? So a figure is

4:39

a figure and yeah, we do

4:42

generate a lot of data. For

4:44

me, it's just the fact that

4:46

it's incredibly rich data, right? So

4:48

forget about the volume, the fact

4:50

that... based on all the Google searches

4:52

that you make, right? You wake up, you

4:55

check your phone, so first of all I

4:57

know that you're up now because your phone

4:59

is unlocking, and I can see who you're

5:01

texting, I can see which websites you're browsing,

5:03

which news you're reading, then if you wake up

5:05

and go get some breakfast somewhere, I

5:07

can see that you swipe your credit

5:09

card, which means that I know what

5:11

you're purchasing, I know exactly where you

5:13

are, potentially know who you're meeting with,

5:15

because that's two phones showing up in

5:18

the same location. And even if you

5:20

don't have your credit card or your

5:22

phone, there's cameras in the streets pretty

5:24

much everywhere, capturing with facial recognition that

5:26

you've been there at a certain point

5:29

in time, we obviously have wearable devices

5:31

these days. So there's pretty much no

5:33

way that you can escape this data

5:35

tracking. And for me, what I think

5:38

is so interesting about this data is

5:40

that when you put these puzzle pieces

5:42

together, it's not just individual data

5:44

points, right? If I think about,

5:46

well, someone knows that I bought... coffee

5:48

latte at Starbucks. So be it, right? It doesn't

5:50

seem like the end of the world, but

5:53

what we've seen over the last really like

5:55

10, 15 years now in terms of science

5:57

is that we can put these puzzle pieces

5:59

together to get at a really interesting and

6:01

accurate reading of who you

6:03

are on the inside. So

6:06

your psychology, anything from personality,

6:08

mental health, like your values,

6:10

your moral values. And that

6:12

to me is both fascinating

6:14

and terrifying because as you say

6:16

it's out there. So that's one very

6:18

big question I got about all

6:21

these algorithms. And I'll just introduce

6:23

the second question now and maybe

6:26

we could just frame. the podcast around

6:28

these two questions and also

6:30

I know from your book

6:32

that you do offer up

6:34

some solutions or attempts at

6:36

solutions. So maybe that one two

6:39

three is a nice way to

6:41

frame the whole podcast. So number

6:43

two to me what I'm noticing

6:45

is that I'm in conversation

6:47

with AI every day. I'm asking it

6:50

to do a lot of

6:52

things. It's very friendly to

6:54

me in fact. That's somebody

6:56

on the podcast who told

6:58

me, treat it warmly like

7:00

you would a person. And

7:02

it will actually do better

7:04

work for you. I don't know if

7:07

you believe that. But it's

7:09

interesting to test that.

7:11

Yeah. It's interesting because there's

7:14

a lot of pleasantry. It

7:16

seems like the AI cares

7:18

about me. Yeah. Just because

7:20

its large language model is

7:23

told: okay, when somebody

7:25

treats you like this, it's

7:27

good to treat them back like

7:29

that. It's certainly mimicking. So

7:32

that's probably part of

7:34

it, right? Same way that

7:36

we humans do. Right. So I'm

7:38

getting closer and closer to

7:40

AI. And what I'm noticing is

7:42

generally when I go on the internet,

7:45

just say I go on X or

7:47

Twitter as it was once known,

7:49

it is sending me... material,

7:51

stories, content that it knows that

7:53

I'm going to be interested

7:55

in. And I'm just assuming

7:57

that everybody's getting...

8:00

the same thing. So everybody

8:02

is getting only what they

8:04

particularly are interested in. And

8:07

so we're all seeing different

8:09

things, which is very different from

8:11

my childhood when we all turned

8:13

on the television at a certain

8:16

time in the evening and we

8:18

all got the same news. So

8:20

we all could look at something

8:22

and even if we had very

8:25

different opinions of it.

8:27

We couldn't have a different

8:29

opinion about what was told to

8:31

us. But now everybody's

8:33

getting a different opinion.

8:36

And so there's just

8:38

less and less connectivity

8:40

that we can have because I'm

8:42

saying something that you're saying.

8:44

There's no sameness to latch

8:46

on to. And it's not just news,

8:49

right? It's like what I think

8:51

is interesting too, is it's also

8:53

all of the cultural icons.

8:55

Like we used to have

8:57

fun conversations because we watched

8:59

the same shows and we saw

9:02

the same stuff. It's so I think

9:04

this is like part of it,

9:06

but continue. No, go ahead.

9:08

I'm very interested about this

9:10

because it's those cultural

9:12

icons that keep us together.

9:14

And I know I'm going back a ways,

9:17

but actually in the 60s, in

9:19

early 70s, there was a show

9:21

in America called Laugh-In. It was

9:23

a comedy show. And because

9:25

there are only three major

9:27

networks, it basically got 50%

9:29

of the market. Because it

9:31

was good, everybody watched it.

9:34

And when there would be a

9:36

comedy bit, and the character would

9:38

say something like, here comes the

9:40

judge. Like the next day,

9:43

everybody knew what that was. Yeah, exactly.

9:45

Here come the judge. Here come

9:47

the judge. And we have had

9:49

these. At the time, I don't

9:51

want to call it iconic because

9:53

it didn't really last long. Nobody

9:56

would remember "here come the judge" unless

9:58

you're like over 60 now. But

10:01

the fact is that

10:03

there were things beyond the

10:05

news that kept us

10:07

together. And I just wonder

10:09

if we're going to

10:11

have that or if this

10:13

just keeps ramping up.

10:15

When I say ramping up,

10:17

I've been thinking of

10:19

some big numbers because I

10:21

saw that Google revealed

10:23

a computer chip that could

10:25

basically in five minutes

10:27

solve a problem that the

10:29

supercomputer now would need

10:31

10 septillion years. I don't

10:33

think you can even

10:35

wrap your head around those

10:37

numbers. That's many, many,

10:39

many more zeros than we

10:41

talked about in the

10:43

beginning. Yeah, I actually asked

10:45

AI about it. And

10:47

it told me that if

10:49

I'm counting, if I

10:51

want to count to a

10:53

million, one, two, three,

10:55

it will take me 12

10:57

days. If I want

10:59

to count to 10 septillion,

11:01

it would be like

11:03

counting from the time that

11:05

the universe was born

11:08

till now, times 23. Times

11:10

one, and a little

11:12

bit more. So you can

11:14

see how fast things

11:16

are going to keep moving,

11:18

keep moving, keep moving.

11:20

And I wonder if that

11:22

speed is also going

11:24

to keep splitting our connectivity

11:26

so that we're going

11:28

to be relying less and

11:30

less on other people.

11:32

And if AI starts taking

11:34

jobs that we were

11:36

already shifting to at-home work,

11:38

you know, the fewer

11:40

places that you have, whether

11:42

it's a workspace or

11:44

a religious space, the less

11:46

connection there is. So

11:48

what I'm wondering, you've been

11:50

looking into this, is

11:52

there going to be a

11:54

day maybe not too

11:56

far off in the future,

11:58

where we're really going

12:00

to be universes of one?

12:02

It's probably one of

12:04

the questions that I've been

12:06

thinking about the most

12:08

over the last 10 years. And it's

12:10

an interesting problem, because so one of the reasons

12:12

for why this happens, right? Like we talked about

12:14

how we generate a lot of data, we can

12:16

use that to get insights into who you are,

12:18

and most of the time these insights are used

12:20

to personalize your experience, right? So like Google uses

12:22

it to kind of give you the most accurate

12:24

or the most relevant search results, your news feed

12:26

on social media is personalized to what they think

12:28

keeps you on the platform. And typically that

12:31

means that we cater to your

12:33

existing preferences. Right, so there's something that

12:35

we in psychology call exploitation versus exploration trade-off,

12:37

and it's in a way the way that

12:39

humans learn, right? So oftentimes we have this,

12:42

this almost like a tension between do we

12:44

go for the stuff that we know is

12:46

good for us, the stuff that we

12:51

like. Think of going to a restaurant, right?

12:53

So there's like two ways in which you

12:55

can pick restaurants, either you go to one

12:57

that you know you love, because every time

13:00

it's good, which is the exploitation part, or

13:02

you explore, and that allows us to get better,

13:04

because maybe there's one better than the restaurant that we know is

13:06

great. If we explore and we try something

13:08

new there's always a risk that we stumble

13:11

upon a restaurant that just sucks right and

13:13

then we've missed an opportunity to just go to

13:15

the one that is tried and tested and we

13:17

like but also if we don't sample something new

13:20

we're never going to find the one that's kind

13:22

of blowing our mind and it's even better than

13:24

the one that we knew. And I think that

13:26

is kind of missing in the online space.
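
To make the exploration-versus-exploitation trade-off concrete, here is a minimal sketch (not something spelled out in the episode, just an illustration) of the classic epsilon-greedy approach: most of the time you pick the restaurant with the best average rating so far, and a small fraction of the time you try one at random. The restaurant names, ratings, and the EPSILON value are all made up.

```python
import random

# Average rating observed so far for each restaurant (hypothetical data).
ratings = {"old favorite": 4.5, "new thai place": 0.0, "corner diner": 3.2}
visits = {name: 1 for name in ratings}

EPSILON = 0.1  # fraction of choices spent exploring

def choose_restaurant():
    """Epsilon-greedy: usually exploit the best-known option, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(list(ratings))   # explore: try anything
    return max(ratings, key=ratings.get)      # exploit: best average so far

def update(name, new_rating):
    """Update the running average rating after a visit."""
    visits[name] += 1
    ratings[name] += (new_rating - ratings[name]) / visits[name]

# One simulated evening: pick a place, "experience" it, learn from it.
pick = choose_restaurant()
update(pick, new_rating=random.uniform(1, 5))
print(pick, ratings)
```

Setting EPSILON to zero is the always-exploit case described here: the known favorite keeps winning and nothing new ever gets sampled.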

13:28

There's reasons for that, right? Because like I was

13:31

just thinking when you said it would take you

13:33

so long to count to a million, I at

13:35

some point calculated something similar, like how long would

13:37

it take you to watch all of the content

13:40

on YouTube? And I think if you assume

13:42

that the average person lives to about 90,

13:44

it would still be more than 2,000 years.

13:46

So the reason for why we stick to

13:48

this notion of I try to figure out

13:50

what you want and then I make recommendations.

13:52

is partially because there's no way that we

13:54

consume everything online. So it used to be

13:57

the case offline that there was a limited

13:59

number of communities that we could join, a

14:01

limited number of news outlets, of

14:03

products that we could

14:05

buy. Now the world is your

14:07

oyster but there needs to be

14:10

something that helps us find stuff

14:12

that's at least somewhat relevant, but

14:14

what we've entirely lost, and that

14:16

part worries me a lot on

14:18

many dimensions, is this

14:21

exploration part. So, like, the

14:23

one thing that I find is

14:25

like, nobody's really talking about: it could

14:27

make us so boring, right? If

14:29

we constantly see the same stuff

14:32

again and again, just based on

14:34

what we've done in the past,

14:36

we're never gonna stumble upon this

14:38

amazing restaurant that we didn't even

14:40

know was there. Simple algorithms, take

14:43

Google Maps, right? Google Maps is

14:45

like, you can think of it

14:47

as like a very simple AI.

14:49

It's kind of trying to figure

14:51

out how do you get from

14:54

A to

14:56

B. Super helpful most of the

14:58

time, but we missed these opportunities

15:00

where we could just get lost in

15:02

the streets and then you find

15:04

like this additional amazing coffee

15:07

shop. And I think there's a

15:09

way in which you could actually

15:11

fix this online. So like the

15:13

boring part is just one thing,

15:15

right? And the other one that

15:18

you mentioned that I'm equally worried

15:20

about is the splintering of what

15:22

I call shared reality. So that's

15:24

the idea that we can make

15:26

sense of the world together. And

15:29

it's a... fundamental, right? So why

15:31

is there such a big

15:33

loneliness epidemic? Why is

15:35

it that there's like a lot

15:37

more mental health issues? Partially I

15:40

think because it's breaking down and

15:42

I think there's ways in which

15:44

technology can actually help us solve

15:46

this. There's also much darker futures

15:48

that I see. I'm happy to

15:51

go into that but I'll stop

15:53

there to give you a chance

15:55

to weigh in. Thanks. I was processing

15:57

and I seized on the word

15:59

boredom for a reason. Because one

16:02

of the things I've been thinking

16:04

about is that what these companies

16:06

have done in order to keep

16:08

people on their site. I mean,

16:11

they're kind of like a store

16:13

that has a customer walk in.

16:15

They have all these products and

16:17

they don't want the customer to

16:20

walk out the door and go

16:22

to the next door. And that's

16:24

perfectly understandable.

16:27

And so it's constantly,

16:30

because it understands

16:32

your choices from

16:34

the past, putting out

16:36

things that it knows

16:39

you're going to be

16:41

interested in, maybe even

16:43

fascinated in. Now,

16:45

when you think about that,

16:47

and what that is doing

16:50

to the minds of people

16:52

who are, say, 18 years

16:54

old, who never... had a

16:56

life beyond this technology:

16:59

it is eliminating boredom. I

17:01

mean, when I was a kid there were

17:03

times where like nothing was happening

17:06

on the street with your

17:08

friends and that was the

17:10

idea. What are we going

17:12

to do to make it

17:14

interesting? That's what I was going

17:16

to say. It's just creativity.

17:18

Yeah. But now you're in a situation

17:21

where you don't ever have to

17:23

be bored. And not only

17:25

that, you may think it's

17:28

your right to never be

17:30

bored. That it's absurd

17:32

to be bored. And I don't

17:34

know where that's going,

17:37

but it's another case,

17:39

if you look at the image

17:41

you gave about not

17:43

following Google directions

17:45

and driving around,

17:48

looking for something,

17:50

stopping to see

17:52

somebody who's cutting their

17:54

lawn and saying, hey, you know, you

17:56

know where this restaurant is? And then

17:58

they say, oh yeah. I go there

18:00

all the time and now you've

18:02

got a conversation going. They know

18:04

the owner. Oh yeah, tell him

18:06

Frank sent you and like you're

18:08

in a different place. And I

18:10

know from talking to people that

18:12

there are like teenagers who have

18:14

trepidation, if not fear of actually

18:16

like calling up a pizzeria and

18:18

talking to somebody to order a

18:21

pizza. Now, they can easily send

18:23

the text. That's fine. But to

18:25

actually have to talk to somebody

18:27

they don't know or haven't sort

18:29

of been introduced to over the

18:31

internet is something that can maybe

18:33

even terrify young people. And you

18:35

tell us, where is that going

18:37

to take us? It's so funny,

18:39

because first of all, that's existed

18:41

always. I used to hate it. I,

18:43

when

18:45

I was younger, used to pay

18:47

my two years younger sister to

18:49

call the doctor, call the pizzeria,

18:51

because I just absolutely hated taking

18:53

phone calls. So I learned gradually.

18:55

So I do think that there's

18:57

a way even for the really

18:59

introverted people to get there. But

19:01

I think it's totally right. And

19:03

is that it just makes it

19:05

so much more convenient to converse

19:07

in a way that is not

19:09

face to face. And it also

19:11

means like we're talking earlier about

19:13

AI just being there to cater

19:15

to exactly the way that you

19:17

want to be talked to, right,

19:19

tries to avoid conflict, always says

19:21

yes, because it's like meant to

19:23

please, right? The way that these

19:25

large language models work, they're trained

19:27

to be nice. We at some

19:29

point in the beginning, I think

19:31

a lot of people were trying

19:33

to do work on negotiations and

19:35

see if we could teach them

19:37

to negotiate, and they would always

19:39

be way too nice because those are

19:41

the guardrails that the companies put

19:43

in place. But it also means

19:45

if that's the only counterpart that

19:47

you talk to, you're never going

19:49

to learn how to deal with

19:51

conflict. So I do think that

19:53

there's something that we think is

19:55

like muscle atrophy, right? The

19:57

same way that because we use

19:59

Google Maps all the time, we

20:01

now have a hard time navigating

20:04

without. It's the same way with

20:06

conversation. If we don't talk to

20:08

people who think very differently, if

20:10

we don't talk to people who

20:12

push back and say, hey, that's

20:14

a stupid idea, and how are

20:16

we ever going to learn to

20:18

do this in real world relationships?

20:20

I do think that's true. What

20:22

I, so I'm going to give

20:24

you my optimistic view, and then

20:26

we can also talk about the

20:28

more pessimistic one. We could actually

20:30

just repurpose AI in a way that

20:32

it does its exact opposite. So

20:34

AI is agnostic. AI doesn't really care:

20:37

do I cater to your preferences,

20:39

do I make your view narrow, do

20:41

I make you more boring? That's just

20:43

what the incentives are. But as you

20:46

said, the incentives of platforms are to

20:48

keep you there for as long as

20:50

possible. But AI in a way could

20:52

be the best perspective-taking machine, the

20:55

best. I think of it as like

20:57

an echo chamber swapping tool, that we've

20:59

ever seen. So if I, for example, I

21:01

was using myself as an example, if

21:03

I wanted to know what is the

21:05

experience of a 50-year-old Republican farmer somewhere

21:08

in Ohio look like, I have no

21:10

idea. But I don't know what the day-to-day

21:12

experience looks like, and it would be

21:15

very difficult for me to find out.

21:17

Same for anybody else, because you'd have

21:19

to find multiple people, kind of... go

21:21

live their life for like a little

21:24

bit, exchange ideas. Now Google and Facebook

21:26

know exactly what it looks like, right?

21:28

Because they have their algorithm, they know

21:31

exactly what they see, they know what

21:33

their day-to-day looks like, they

21:35

know what their news consumption looks

21:38

like, they see what their

21:42

friends talk about, and they could just

21:44

instead of just optimizing for me and

21:46

keeping me in my little echo chamber,

21:49

see what they see. Now there's a

21:51

reason I think for why that doesn't

21:53

exist potentially is maybe we're not

21:55

going to use that all too

21:57

often, but it's super, super comfy

21:59

in the echo chambers that we've

22:01

built for ourselves, because they speak

22:03

to our preferences. It's like same

22:05

as an offline context. We typically

22:08

have friends who are very similar,

22:10

right? But once in a while,

22:12

we could actually break out and

22:14

I could say, maybe I'm even

22:16

going to get, the way that

22:18

I think about it is like, I

22:20

would love to have an AI

22:22

guide, go to this echo chamber

22:24

with me, and try to help

22:26

me understand, right? Because if I

22:28

go there by myself, maybe I

22:30

just turn back right away,

22:32

but I could have someone who

22:34

understands it a little bit better

22:36

and who could help me bridge

22:38

their reality to mine and see

22:40

maybe here are some commonalities, maybe

22:42

here's why they think about this

22:44

specific topic in a certain way

22:46

that you might not think about.

22:48

I think that could be an

22:50

opportunity that we've never had, like

22:53

super expensive to travel, super expensive

22:55

to go to different parts of

22:57

the world, talk to the locals,

22:59

but that's a way in which

23:01

we could actually do that. So

23:03

that's my positive. What if we

23:05

could use it slightly differently? That's

23:07

your dream. That's my dream. It's

23:09

not that difficult because it exists.

23:11

So all of these models, they

23:13

already exist. All it would take

23:15

is for Google and Facebook to

23:17

say we're just going to have

23:19

an explorer mode where we repurpose

23:21

and we're going to say okay

23:23

instead of using your model, we're

23:25

going to turn your model off

23:27

for a second and we're going

23:29

to turn Cal's model on and

23:31

that's what you're going to see

23:33

for the rest

23:35

of your day. What you want

23:38

to see and when you want

23:40

to see it, would be easy

23:42

to implement. The question is, would companies do it?
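
Sandra's "explorer mode" boils down to serving you a feed ranked by someone else's preference profile instead of your own. A minimal sketch of that switch, with invented stories and interest weights rather than anything a platform has published, might look like this.

```python
# Toy "explorer mode": rank the same stories with someone else's preference profile.

stories = ["farm subsidies explained", "new indie film reviews", "city housing debate"]

profiles = {
    # Made-up interest weights per keyword for two very different users.
    "me":     {"film": 0.9, "housing": 0.7, "farm": 0.1},
    "farmer": {"farm": 0.9, "housing": 0.3, "film": 0.1},
}

def rank(feed, profile):
    """Score each story by the profile's keyword weights and sort best-first."""
    def score(story):
        return sum(weight for keyword, weight in profile.items() if keyword in story)
    return sorted(feed, key=score, reverse=True)

print("my feed:      ", rank(stories, profiles["me"]))
print("explorer mode:", rank(stories, profiles["farmer"]))  # same stories, their lens
```

The preference models already exist on the platform side; flipping which one drives the ranking is the easy part, which is the point being made here.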

23:44

I think that the

23:46

thing that I wonder about is

23:48

the speed that this has overtaken

23:50

us. And the same thing happened

23:52

with the internet, where everything seemed

23:54

like Disneyland. Hey, look at you,

23:56

Facebook, I talked to my friends,

23:58

Instagram, and then 10 years. later

24:00

you see that kids have been

24:02

shamed and bullied in

24:05

ways that it's impossible for

24:07

their parents to comprehend or

24:10

help them, that they're

24:12

in this world that's

24:14

just sucking them down.

24:17

You're talking about mental illness,

24:19

and then it's like 10

24:21

years later oh let's do

24:23

something about this. Do we

24:26

have people who, while we

24:28

may be behind, are out there,

24:30

just trying to understand

24:32

where this is going,

24:35

it's sort of like the

24:37

Iroquois Indians used to have

24:39

a saying about, you know,

24:41

before you do something new,

24:44

you think about its impact

24:46

seven generations down the

24:48

road. I wonder if

24:51

anybody's thinking a day

24:53

beyond tomorrow. So here's my

24:55

personal answer. I think the one

24:57

thing that changed everything for me

24:59

personally is becoming a parent. I think

25:01

I was like very much focused on

25:03

what can we do right here and

25:05

now, what are the opportunities and challenges, but

25:08

it was very limited. I think becoming

25:10

a parent suddenly kind of gave me

25:12

a much longer horizon. So this is

25:15

just like obviously a tiny, tiny microcosm,

25:17

but I know that it changed something

25:19

for me. And the one thing that I'm thinking

25:21

a lot about in terms of. where that

25:24

potentially takes us in the future

25:26

and it's I think it's mostly

25:28

dark with with some opportunities and

25:30

is this growing push towards

25:33

not just AI versus human right

25:35

so right now it's still: we're

25:37

human, we have certain kinds of

25:39

relationships with our friends family

25:41

the world and then there's AI

25:44

and yet we're

25:46

becoming closer and we're using them

25:49

as relational agents But there's still

25:51

some separation. I think there's a

25:53

growing push with lots of money

25:56

behind it, mostly coming from Elon Musk,

25:58

to essentially merge the two. So the idea

26:00

that instead of having AI as an external

26:02

agent, can we actually put a chip in

26:05

our brain and now again it becomes a

26:07

little bit dystopian but bear with me because

26:09

I do think that there's some bearing in

26:11

reality is can we put a chip in

26:13

our brain so that instead of you having

26:15

to go to Google and say hey where's

26:18

the restaurant you just think the thought kind

26:20

of goes to the cloud and back. Now.

26:22

The reason for why I'm interested in that

26:24

space is because my husband is a neuroscientist

26:26

and we know, first of all, we know

26:28

how to read and write into the brain,

26:31

right? So we know how the language of

26:33

the brain works, so we know how to

26:35

read from the brain, and we also know

26:37

how to speak to the brain. So what

26:39

is happening at Neuralink, for example, that's Elon

26:41

Musk's company, is that they're trying to build

26:44

chips that right now solve health issues, right?

26:46

like paraplegics, do kind of regain ability. Sounds

26:48

amazing, but the moment that you have people

26:50

walking around with chips in their brain, there's

26:52

so much more that you can do. And

26:54

so for me, this very dystopian far future

26:57

is like, well, if this world happens, first

26:59

of all, talk about disconnection. Right, so

27:01

now suddenly we can get everything just by

27:03

thinking, I don't need to speak to another

27:05

human being, just face to face. Maybe our

27:08

chips can communicate at some point, but I

27:10

can be totally separated from the world because

27:12

everything just happens inside my brain. There's also

27:14

like, if we think about these echo chambers

27:16

from social media, you just broadcast to my

27:18

brain directly. And obviously, the moment that there

27:21

are some people who have that chip in

27:23

their brain and others don't, like inequality will

27:25

just widen, way, way bigger than anything that

27:27

we've seen. So there would be people who

27:29

would talk about us as like these cute

27:31

little monkeys pressing buttons on their

27:34

computers while they just compute everything in their

27:36

head. So if we kind of really talk

27:38

about where this AI and all of this

27:40

stuff, in terms of trying to figure out

27:42

who we are and trying to change our behavior,

27:44

could lead us in the

27:47

future. I think that's

27:49

what I'm most worried

27:51

about right now. Again,

27:53

a pretty big leap,

27:55

but if you talk

27:57

about what should we

28:00

be talking about? Because

28:02

right now it's mostly

28:04

a legal question, right?

28:06

So unless you have

28:08

something like epilepsy that

28:10

requires you to have

28:13

a chip in your

28:15

brain that's already regulating

28:17

the way that your

28:19

body works, you're

28:22

not allowed to do it, but that's just a

28:24

legal barrier. Nobody says that once we see

28:26

the benefits in kind of curing some of the

28:28

disease, that we don't just kind of take

28:30

the step and say, okay, as long as it's

28:32

not too invasive, maybe we can put something

28:34

in. And then it's the same way that we've

28:36

seen it with the internet, like a pretty

28:38

slippery slope. So if I want to have people

28:40

talk about a conversation early, that

28:42

would probably be the one. Well,

28:45

and you haven't even brought up the fact

28:47

that what happens if the chip gets

28:49

hacked and you're told

28:51

to do something evil. Or

28:54

you don't even know if the thoughts that

28:56

you have are your own, right? So right now,

28:58

like you can at least note that something

29:00

that kind of pops up in your mind, maybe

29:02

it's been manipulated and maybe people have been

29:04

pulling the strings in terms of showing you different

29:06

news sources, but you at least know that

29:08

you're the one thinking them. Like once you have a

29:10

chip in your brain, this kind

29:12

of security has gone

29:14

entirely. So there's a pretty

29:16

big challenge associated with that.

29:18

Yeah. But it is getting

29:20

scarier and scarier. I told

29:23

you, this is the dystopian version of all

29:25

of this. Okay. So

29:27

there's, let's look at the other

29:29

side here. Let's

29:31

try to go back. In two ways. In two

29:33

ways. What can

29:36

we do now

29:38

to just try to

29:40

protect ourselves from just

29:42

giving away too

29:44

much of ourselves and

29:47

maybe even starting

29:49

with understanding how

29:52

pieces of ourself

29:55

are being turned into data that

29:57

we're not even aware of.

30:00

How much time would that take? Because

30:02

it might be such a huge

30:04

amount of time that most people

30:06

would just throw up their hands

30:08

and say, look, I don't care.

30:10

I'm just going to get my

30:12

coffee. If six gigabytes of data

30:14

goes out because I put my

30:16

credit card through it, I

30:19

really don't care. And the younger

30:21

the people are, probably the easier

30:23

it is for them to do it

30:25

because that's their world. I think it's

30:27

a great question and I again like

30:30

we were trying to pivot back to

30:32

the to the bright side I think

30:34

I've become a lot more pessimistic about

30:36

this space over the last couple of

30:38

years I would say and so I

30:40

do think that we can all do

30:42

a little bit better right so I

30:44

think first of all most of us

30:46

when we think about our data being

30:48

very intrusive and privacy being violated. We

30:50

mostly think about social media, which makes

30:52

sense because that's what's constantly being shown

30:55

in the media. That's what everybody discusses

30:57

in the public space. But it's such

30:59

a tiny, tiny sliver. So some people

31:01

say, well, I don't want to use

31:03

social media because I don't want to

31:05

be tracked. But then they download the

31:07

weather app and they kind of mindlessly

31:09

say yes to having the app tap

31:11

into their microphone, their entire photo gallery,

31:13

their GPS records. which is just as

31:15

intimate, right? So I think sometimes, especially

31:17

with phones, that's just like the one

31:19

thing that I would recommend kind of

31:21

people to be a little bit more

31:23

mindful because you do have a bit

31:25

more control there and then in other

31:27

parts of your life. However, the one

31:29

thing that I've become more pessimistic is

31:31

that if we're put in charge of managing

31:34

our personal data, it's never going to

31:36

happen. Right. So even if you look

31:38

to some of the regulations that are

31:40

out there now, even that the most

31:42

progressive ones. The most common path to

31:44

trying to solve it is to say, well,

31:46

why don't we just tell people what's happening

31:48

with their data so we mandate transparency? And

31:51

then we give them control so they can

31:53

take care of themselves. It's just never

31:55

going to happen because, like, first of

31:57

all, technology developed so fast, right? You

31:59

talked about, like, the rapid exponential

32:01

speed by which technology grows. So

32:03

I do this as a full-time

32:05

job and I think about this

32:07

all the time and I can

32:09

hardly keep up with the development

32:11

in technology. And then even if

32:14

people just magically caught up, right,

32:16

that say everybody gets educated early

32:18

on in schools and they just

32:20

keep up, it's a full-time job.

32:22

If you really wanted to make

32:24

sure that you read all of

32:26

the terms and conditions and you

32:28

kind of manage all of the

32:30

permissions, that would be like a

32:32

24/7 job, and most people, coming back

32:34

to being boring, hopefully have better

32:36

stuff to do than just reading

32:38

through all of the terms I

32:40

really hope so right so I

32:42

think that the solution where we

32:44

just say well people need to

32:46

understand and then they need to

32:48

better manage is never going to

32:50

happen. The brain is not made

32:53

for these choices right if the

32:55

brain can say oh I'm going

32:57

to use the service right here

32:59

now by clicking yes or you

33:01

can spend the next two hours

33:03

trying to decide for everything, we're

33:05

absolutely going to click yes. So

33:07

we kind of, what we need

33:09

is to make it much easier

33:11

for people to do the right

33:13

thing. And some of that I

33:15

think is regulation, right? Not

33:17

only does it send a signal of what

33:19

we care about as society, but

33:21

it also can say, well, why

33:23

don't we make the default that

33:25

your data is not being tracked?

33:27

But now nobody does that, right?

33:29

We're lazy. So instead of making

33:32

laziness work against us, we could

33:34

actually make it work in our

33:36

favor. You know, that's kind of

33:38

interesting. And you're the one who

33:40

brought up the swiping of a

33:42

credit card. Now, if you want

33:44

to use a credit card, then

33:46

you have to apply and then

33:48

you're back in that same place,

33:50

you know, agree to these conditions.

33:52

You want this, you give us

33:54

that. Yeah. And it doesn't seem

33:56

like the tech side is going

33:58

to take away anything that can

34:00

help them understand me

34:02

better so that it could either

34:05

keep me on its platform

34:07

or get me to buy something it's

34:09

selling. Yeah, there's actually,

34:11

so I think you're

34:14

absolutely right because right now it's

34:16

a binary choice. Right. So the only

34:18

choice that we have right now is

34:20

give us all of your data or

34:22

don't get the superior service or the

34:24

product, or don't use the product at all. And

34:26

I think if that's the choice, the

34:28

brain is always going to go for,

34:30

no, I do want the better service

34:32

in the here and now. But I

34:34

do think that there are solutions

34:37

that we're not talking about enough and

34:39

for a reason because there's many businesses

34:42

that are just in the, they're making

34:44

money from commercializing your data.

34:46

But there's many companies that are not.

34:48

And so there's this one thing that

34:50

I think everybody should know about. And

34:52

I'll try to explain it in non-technical

34:55

terms, but it's called, for those who

34:57

want to know, it's called federated learning.

34:59

And it's a way of creating

35:01

intelligence in machines and predictive algorithms without

35:03

collecting your data. So I'm going to

35:06

give you an example to just make

35:08

it a bit more concrete. So

35:10

let's take Netflix. Like, we all watch

35:12

movies on Netflix and in a way it's

35:14

helpful that they help us find the movies

35:16

right because there's so many movies out there

35:18

You'd never be able to find what you

35:21

want if it was just up to you

35:23

The way that it typically works like the

35:25

way that typically machines are being trained is

35:27

you have your data like the stuff that

35:29

you've got the movies that you've been watching

35:31

the ratings you submit that to Netflix all

35:34

of the data gets sent and now it

35:36

sits there on a Netflix server,

35:38

and they train so they know that well

35:40

Cal really liked Titanic and he also really

35:42

liked Love Actually. So here is like

35:44

the recommendations that we're going to make

35:47

for him and here's how we're going

35:49

to update the model. Now this means

35:51

that you had to send your

35:53

data to Netflix, but what we can

35:55

do right now is instead of you sending

35:57

all of your data, they can essentially

36:00

send the intelligence to you, right?

36:02

So your phone is so much

36:04

more powerful than, again, the rockets

36:06

that we use to send people

36:08

into space with. So they don't

36:10

need to get all of the

36:12

data. They can send the model,

36:14

the intelligence to you and say,

36:16

okay, locally, we see that you've

36:18

watched Titanic, you've watched Love Actually,

36:20

that never gets shared with Netflix

36:22

at all. The model just learns

36:24

and updates its little parameters, learns

36:27

a little bit about how

36:29

the movie world works, and instead

36:31

of you sending the data, you

36:33

just send back some of the

36:35

intelligence. So now the model of

36:37

Netflix gets better, we all benefit,

36:39

but you've never had to send

36:41

your data. And Netflix is just

36:43

like an example that might not

36:45

seem like, well, why would I

36:47

care sending my data to Netflix?
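
As a rough sketch of the federated-learning idea described here, assuming a deliberately toy "model" (one preference weight per genre) and made-up ratings: each device fits a small update on its own locally stored data, and only the updated parameters, never the raw viewing history, go back to be averaged. Production systems, including whatever Netflix or Apple actually run, are far more involved than this.

```python
# Toy federated averaging: the server never sees raw ratings, only model updates.

# Global "model": one preference weight per genre (made-up starting values).
global_model = {"romance": 0.0, "action": 0.0}

# Each user's ratings stay on their own device (hypothetical local data).
local_data = {
    "cal":    {"romance": [5.0, 4.5], "action": [2.0]},
    "sandra": {"romance": [3.0], "action": [4.0, 4.5]},
}

def local_update(model, ratings):
    """On-device step: nudge each weight toward the user's own average rating."""
    updated = dict(model)
    for genre, scores in ratings.items():
        avg = sum(scores) / len(scores)
        updated[genre] += 0.5 * (avg - updated[genre])  # small learning step
    return updated  # only these numbers leave the device

def federated_round(model, devices):
    """Server step: average the returned parameters; raw data never moves."""
    updates = [local_update(model, ratings) for ratings in devices.values()]
    return {genre: sum(u[genre] for u in updates) / len(updates) for genre in model}

global_model = federated_round(global_model, local_data)
print(global_model)  # improved shared model, built without centralizing the data
```

The same shape of exchange, parameters out and averaged parameters back, is what makes the medical examples later in the conversation plausible without a central pile of genetic data.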

36:49

But think about medical data. Right,

36:51

so like there's so much that

36:54

we could do and learn about

36:56

diseases, how do they work, what

36:58

kind of treatments work for which

37:00

type of person under which conditions

37:02

and so on. Right, right, right,

37:04

right. All of this genetic data,

37:06

medical histories, biometric data, but I

37:08

don't want a pharma company

37:10

to have all of this data

37:12

centrally, right, because like now that's

37:14

a huge risk of, first of

37:16

all, data breaches, then abusing it

37:19

now, maybe they get a new

37:21

CEO, that is like once my

37:23

data is out there I can't

37:25

get it back but what they

37:27

could do is they could say

37:29

I'm going to keep my genetic

37:31

data you're going to keep your

37:33

genetic data they just kind of

37:35

send questions to my data so

37:37

they figure out okay over time

37:39

here's how like certain social demographics

37:41

respond to this kind of treatment

37:43

and maybe this works better for

37:46

women and so the data stays

37:48

with us and we're just sharing

37:50

back some of the intelligence And

37:52

it's a totally different way of

37:54

learning. And it really breaks down

37:56

this dichotomy that we've, I think

37:58

Silicon Valley has very carefully crafted

38:00

for a long time, where it's

38:02

like, well, either you have to

38:04

give us your data, because that's

38:06

the only way that you can

38:08

get the service and convenience and

38:11

like better medication and better product,

38:13

but that's no longer true and

38:15

I think once we move to

38:17

a model that's more decentralized we

38:19

have a lot more power because

38:21

the data stays with us and

38:23

we have a lot more control

38:25

over over how it's being used.

38:27

So I think that's a model

38:29

that again makes it a little

38:31

bit easier to keep control and

38:33

exercise it more wisely. I don't

38:35

know that many companies would want

38:38

to do that because they already

38:40

have the upper hand and then

38:42

too, it's asking them to do

38:44

extra work. Although, with the speed

38:46

of AI now, that may not

38:48

be that big of a problem.

38:50

But let's pivot

38:52

toward solutions and ways that we

38:54

could make things better. You thought

38:56

a lot about this. And actually,

38:58

before I even asked you to

39:00

get to the solutions, you were

39:03

just saying how much time it

39:05

takes to just try and keep

39:07

up with it. You didn't even

39:09

say trying to stay ahead. You

39:11

said just trying to keep up.

39:13

Is there any formula you could

39:15

recommend to somebody like me who

39:17

also wants to keep up? How

39:19

do you keep up? It's incredibly

39:21

difficult. So I think I'm in

39:23

a relatively lucky position right now,

39:25

in that oftentimes people send me

39:27

stuff so like I work a

39:30

lot with students and I think

39:32

by now people know what I'm

39:34

interested in so a lot of

39:36

the cutting-edge stuff actually comes from

39:38

other people recommending. I personally like

39:40

Wired because I think their articles

39:42

are really interesting I think they

39:44

have a very nuanced take on

39:46

what are some of the opportunities

39:48

and also some of the challenges

39:50

so and then also for us

39:52

again it's a little bit easier

39:55

because we have like all of

39:57

these academic institutions but I do

39:59

think that longer forms like Wired

40:01

or podcast I think go a

40:03

long way, but again, it's a

40:05

it's an uphill battle if you

40:07

want to keep informed on that

40:09

also, like, the Times actually has an

40:11

interesting section on data and privacy

40:13

that is that is really good.

40:15

But again, those are just a few;

40:17

I'm sure there's many more. Okay.

40:19

So what are your best hopes

40:22

for pushing this forward in a

40:24

way that's positive for us and

40:26

a way that we're not going

40:28

to lose ourselves? I mean, I'm

40:30

already now thinking about what's going

40:32

to happen. when chips start going

40:34

into people's heads. And you know,

40:36

this is science fiction stuff. Do

40:38

you have any real world solutions

40:40

to pass on to us? Yeah,

40:42

there's actually, there's some hope. And

40:44

actually, I'm quickly going to go

40:47

back to something that we talked

40:49

about before, because one of the

40:51

things that I'm trying to do

40:53

is actually convince companies that it's

40:55

in their best interest, because I

40:57

think if we don't have

40:59

the corporate sector on board, there's

41:01

only so much that we can

41:03

do right so I think regulation

41:05

is slow it's super difficult to

41:07

police and implement so I actually

41:09

do think that there's benefits for

41:11

for companies because unless you're in

41:14

the business of selling data right

41:16

so Facebook for example, as

41:18

much as they kind of

41:20

want to be a B to

41:22

C and focus on the customer,

41:24

they just sell your data to

41:26

companies but if that's not the

41:28

case right say you're in the

41:30

space where you're trying to offer

41:32

better products, you're kind of trying

41:34

to make better recommendations. Ditting on

41:36

people's data is a huge risk.

41:39

So it used to be the

41:41

case that you needed to collect

41:43

people's data to offer your services,

41:45

right? If you want to make

41:47

recommendations, there was just no other

41:49

way than collecting the data centrally.

41:51

But if you think about it,

41:53

you've just accumulated this pile of

41:55

gold.

41:57

You're actually like now

41:59

you're in this place where you

42:01

have to protect it and like

42:03

the reputational costs legal costs of

42:06

someone breaching the data and even

42:08

protecting it, having a tight

42:10

system, is super expensive. So

42:12

if you can offer the same

42:14

services, the same convenience, the same

42:16

product, without having to collect it

42:18

centrally, you're in a much better

42:20

spot. Right? Not just that, but

42:22

then you can also make it

42:24

part of your value proposition. Because

42:26

if you can say, hey, we're

42:28

going to give you the same

42:31

experience without collecting your data the way

42:33

our competitor does. Yeah, you know

42:35

what? That's a great idea. Apple

42:37

has figured it out, right? Apple

42:39

is very much B-to-

42:41

customer. And if you look at

42:43

in New York, the ads that

42:45

Apple has, it's all about privacy.

42:47

And they're actually one of the

42:49

big ones using federated learning. So

42:51

Siri, for example, on your iPhone,

42:53

it doesn't send the data to

42:55

the cloud. They send

42:58

a model to your

43:00

phone, it learns locally, and they have nothing

43:10

to commercialize. They have no

43:12

incentive. What I'm trying to do is

43:14

to make the world better, the

43:16

way that we talked about it.

43:18

And I do think it's, I'm

43:20

not just making this up, I

43:23

do think that there's a real

43:25

incentive that companies have for consumers.

43:27

The way that I hope this

43:29

is going to go is essentially

43:31

trying to build these new forms

43:33

of data governance that just gives

43:35

us a lot more support. So

43:37

we talked about. Well there's amazing

43:39

ways in which we could benefit

43:41

from biomedical data but we just

43:43

don't have the time to think

43:45

about all of the implications, who

43:47

does we share the data with,

43:50

for what purpose, because again we

43:52

have, like, only 24 hours in the day. There's a

43:54

really intriguing idea that's called data

43:56

co-ops and data trusts, that it's

43:58

like essentially a way in which

44:00

people can come together with a

44:02

shared interest in data. My favorite

44:04

go-to example these days is, and

44:06

you can see a common theme

44:08

here, is expecting moms. So when

44:10

you're pregnant, right, so you have

44:12

so many questions, you want to

44:14

know what are my risk factors,

44:17

what are the baby's risk factors,

44:19

what should I be doing? And

44:21

what you get is like every

44:23

four weeks you get to see

44:25

your... doctor and they say yeah

44:27

it's still breathing and the heart

44:29

is still beating and it's just

44:31

like terrifying now we could have

44:33

like an amazing kind of service

44:35

that says well let's have like

44:37

a lot of pregnant moms come

44:39

together, pool our genetic data, again

44:42

medical histories, biometric data, lifestyle choices,

44:44

so we learn, like, what should

44:46

each woman do given her

44:48

genetic background and so on to

44:50

make sure that she's safe and

44:52

the baby is safe every step

44:54

along the way? And I don't

44:56

want a pharma company to

44:58

have that data, but if we

45:00

can come together and create a

45:02

data co-op or data trust, we

45:04

can actually hire management so we

45:06

can hire people with expertise in

45:09

the use of data, they can

45:11

help us exactly figure out whom

45:13

should we be sharing with, they

45:15

can help set up the

45:17

infrastructure for federated learning. Right, so

45:19

now we have like many people

45:21

with a shared interest, they can

45:23

make sure that we have the

45:25

technological infrastructure and the same way

45:27

that banks have a fiduciary responsibility

45:29

to act in the best interest

45:31

of their customers, that management is

45:34

now legally obligated to act in

45:36

our own best interest. And so

45:38

this is like a totally different

45:40

way of thinking about how do

45:42

we not just kind of eliminate

45:44

some of the risks through regulation,

45:46

but how do we actually help

45:48

us optimize and maximize the value

45:50

that data could have for all

45:52

of us? without us having to

45:54

be solely responsible for doing it

45:56

all by ourselves. So that's for

45:58

me, one of the most interesting

46:01

solutions. Now are you just putting

46:03

this out as an idea or

46:05

are you trying to actually implement

46:07

it somehow? Because as you're talking,

46:09

I'm thinking, wow, there are a

46:11

lot of diseases that aren't particularly

46:13

common. Yeah, that, like, a

46:15

doctor might see maybe

46:17

once or twice a year. And

46:19

so there's very little experience that

46:21

the doctor can have, but if

46:23

you had all the data from

46:26

around the country or around the

46:28

world. Yeah, it already exists and

46:30

you're hitting the nail on the

46:32

head. So essentially my favorite example

46:34

of that is called MIDATA,

46:36

and MIDATA is a Swiss

46:38

data co-op in the medical space

46:40

and they focus exactly on what

46:42

you were just describing. So one

46:44

of the applications that they have

46:46

looks at MS, so multiple sclerosis,

46:48

which is one of the diseases

46:50

that's absolutely crippling and we don't

46:53

understand it. Right. So it has

46:55

like multiple determinants. It's like genetic

46:57

but it's also kind of environmental

46:59

and so there's many, many ways

47:01

in which we could understand it

47:03

better. So MIDATA is like

47:05

the Swiss data co-op that gets

47:07

a lot of data from patients

47:09

and again they own the data

47:11

co-op. So they have like not

47:13

just control over their data, they

47:15

also have a say in what the

47:18

data co-op does through like this

47:20

general assembly. And it's amazing because

47:22

once you collect data from many

47:24

MS patients and also like the

47:26

community of people who don't have

47:28

MS, right? Because you need a

47:30

comparison. And first of all you

47:32

can scientifically understand disease much better.

47:34

But what they do in addition,

47:36

which I think is like just

47:38

extraordinary, is for every patient, they

47:40

now can have a predictive model

47:42

and it connects with their doctors.

47:45

So it's not just an AI,

47:47

it's essentially an AI saying, oh,

47:49

here's something that we've learned about

47:51

this patient from this massive data

47:53

set. It sends this intelligence to

47:55

their local doctor. They kind of

47:57

look at how does it interact

47:59

with the medication and then the

48:01

doctor gives feedback to. the algorithm.

48:03

So it says, okay, this medication

48:05

actually works for this patient. I'm

48:07

seeing improvement in the way that

48:10

they kind of interact. And this

48:12

to me is like, just again,

48:14

mind blowing because now you have

48:16

like scientific knowledge that is being

48:18

built in a way that we

48:20

would have never been able to,

48:22

right? Maybe pharma companies, if they

48:24

think there's profit, and then maybe

48:26

you never get to benefit from

48:28

it because it only happens in

48:30

20 years. Best case, you have

48:32

to pay millions of dollars to

48:34

get the thing. And again, the

48:37

data co-op just acts in your

48:39

best interest and it helps you

48:41

benefit in the here and now.

48:43

So it's not hypothetical and already

48:45

exists and I think that there's

48:47

probably going to be a lot

48:49

more of those. I like where

48:51

you're going. It sounds like it

48:53

exists, but it takes a lot

48:55

of work. And it almost seems

48:57

like it's got to be somehow

48:59

forced or pushed the same way

49:02

you were talking about Apple and

49:04

their ads about privacy. Like, you

49:06

know, once you have a push

49:08

in that direction, the competitors have

49:10

to compete against it. So, you

49:12

know, it seems like. the first

49:14

of many conversations I'd like to

49:16

have with you. I don't know

49:18

if you're open to some more.

49:20

I would love to. Are you

49:22

easy? I could talk about this

49:24

forever. I'm sure you got a

49:26

sense. Well, I did. And I

49:29

could ask questions about it forever.

49:31

So this is a good match.

49:33

Let me thank you for now.

49:35

And let me just watch where

49:37

this world is going. You watch

49:39

where the world is going. And

49:41

then I'm sure down the road.

49:43

Down the road, our paths are

49:45

going to meet again because your

49:47

book is fantastic. It's a rare

49:49

business book that is written with

49:51

like a storytelling vitality from first-person

49:54

point of view that really puts

49:56

us in your shoes. And when

49:58

you find somebody who can explain

50:00

these complicated topics on a very

50:02

personal storytelling level, The world is

50:04

a better place for it. So

50:06

thank you for joining me today.

50:08

Thank you so much for having

50:10

me. And that about wraps it up.

50:12

Want to thank Tim Ferris for

50:14

nudging me to start this podcast.

50:16

As we step into 2025, concept

50:18

of trust is being redefined. Artificial

50:21

intelligence has the power to transform

50:23

our lives, making us more productive,

50:25

even more creative and connected than ever

50:27

before. Yet, it also brings challenges

50:29

we've never faced. Deep fakes, data

50:31

manipulation, technology capable of knowing us

50:33

better than we know ourselves, and

50:35

deceiving us at scale. I am

50:37

crafting a keynote speech that

50:39

explores a critical question. How do

50:41

we build trust in a world

50:43

where we can no longer believe

50:46

everything we see or hear? To

50:48

find answers, I've been reflecting on

50:50

conversations I've had with iconic leaders

50:52

who've shaped history. Their insights offered

50:54

timeless lessons about the pillars of

50:56

trust that have held us together.

50:58

At the same time, I'm diving

51:00

deep into the future, exploring how

51:02

AI is reshaping our relationships with

51:04

information and with one another. This

51:06

is the keynote for our times,

51:08

and it will empower audiences to

51:10

navigate the complexities of our rapidly

51:13

changing world. It's not just about

51:15

identifying the challenges, it's about uncovering

51:17

the opportunities to strengthen trust all

51:19

around us. Business, leadership, and society.

51:21

If trust is a priority for

51:23

you, or your organization in 2025,

51:25

let's talk. I'd be happy and

51:27

honored to deliver this keynote at

51:29

your next event. Do you have

51:31

any thoughts on trust in the

51:33

age of AI? I'd love to

51:35

hear them. Please, reach out to

51:38

me at Cal Busman.com. Here's to

51:40

a year filled with connection, curiosity,

51:42

and trust. Cheers!
