Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook

Released Thursday, 20th March 2025
Episode Transcript


0:02

Hey everyone, it's Tristan. It's Daniel.

0:04

Welcome to Your Undivided Attention.

0:06

So Daniel, something I think about often

0:09

is how throughout history, society takes

0:11

a lot of time to confront the

0:13

harms caused by certain industries. I think

0:15

about Upton Sinclair writing about the meat

0:17

packing industry in the early 20th century.

0:20

Now I think about Rachel Carson talking

0:22

about Silent Spring in the 1960s.

0:26

And with social media

0:29

we're seeing it happen again, the can

0:31

just keeps getting kicked down the road.

0:33

And with AI moving so fast, it

0:35

feels like the normal time that it

0:37

takes us to react isn't compatible with

0:40

doing something soon enough. You know, we

0:42

can become aware of serious problems, but

0:44

if it takes too long to respond,

0:46

meaningful action won't follow. Totally. And I

0:49

think this has to do with the

0:51

way that we manage uncertainty in our

0:53

society. You know, with any new thing,

0:55

with any new industry, there's uncertainty. And it's

0:58

really easy for us to react to

1:00

that fear that we experience sitting with

1:02

uncertainty by avoiding thinking or speaking about

1:04

topics when we feel uncertain. And then,

1:06

you know, as a society, I often

1:09

think about when we're uncertain about what's

1:11

true or who to trust, we struggle

1:13

to make collective informed decisions.

1:16

And when we watch experts battling it

1:18

out in public, when we hear conflicting

1:20

narratives and strong emotions, it's easy to

1:22

start to doubt what we think we

1:24

know. And it's important to recognize that

1:27

that's not by accident. You know, it's

1:29

because companies and individuals with a lot

1:31

of money and a lot of power

1:33

want to hide the growing evidence of

1:35

harm, and they do so with sophisticated

1:38

and well-funded campaigns that are specifically designed

1:40

to create doubt and uncertainty. And so

1:42

how do we sit with this? Our

1:44

guest today historian, Naomi Oreskes, knows this

1:46

better than anyone. Her book, The Merchants

1:49

of Doubt, reveals how this playbook has

1:51

been used repeatedly across different industries and

1:53

time periods. And Naomi's most recent book,

1:56

The Big Myth, just came out in

1:58

paperback. So, how do we make bold

2:00

decisions with the information that we have

2:02

right now, while being open to changing

2:04

our minds as new information comes in. How

2:07

should we sit with uncertainty, which is

2:09

everywhere and unavoidable, while inoculating ourselves from

2:11

weaponized doubt? We discuss all of these

2:13

themes and more. This is such an

2:16

important conversation, and we hope you enjoy

2:18

it. Naomi, thank you for coming on

2:20

Your Undivided Attention. Thank you for having

2:22

me on the show, and thanks for

2:25

doing this podcast. So Naomi, 15 years

2:27

ago, you and your co-author Eric Conway

2:29

wrote this book, Merchants of Doubt, which

2:31

really started this conversation about the ways

2:33

that uncertainty can be manipulated. Let's start

2:36

with a simple question. Who were the

2:38

merchants of Doubt? The original merchants of

2:40

doubt were a group of physicists. So

2:42

they were scientists, but they were not

2:45

climate scientists. They were Cold War physicists

2:47

who were quite prominent. They had come

2:49

to positions of power and influence and

2:51

even fame to some degree during the

2:54

Cold War for work they had done

2:56

on US weapons and rocketry programs. So

2:58

they had been in positions of advising

3:00

governments. They were quite close to seats

3:03

of power. These four physicists who had

3:05

been very active in attacking... climate science,

3:07

it turned out, had also been active

3:09

attacking the science related to the harms

3:11

of tobacco. And that was the first

3:14

indication for us that something fishy was

3:16

going on, because that's not normal science.

3:18

Normally, physicists wouldn't get involved in a

3:20

debate about chemistry. I mean, maybe if

3:23

it overlapped their work, but these guys

3:25

were so outside their wheelhouse, that was

3:27

pretty obvious that something fishy was going

3:29

on. The other real tell was that

3:32

the strategies they were using were sort

3:34

of taking legitimate scientific questions, but framing them

3:36

in a way that wasn't really legitimate.

3:38

So it's normal in science to ask

3:41

questions. How big is your sample size?

3:43

How robust is your model? How did

3:45

you come to that finding? Those are

3:47

all legitimate questions. but they began to

3:50

pursue them with a kind of aggression

3:52

that wasn't really normal, and the real

3:54

tell was to do it in places that

3:56

weren't scientific. So we expect scientists to

3:58

pose questions at scientific conferences, at workshops,

4:01

in the pages of peer-reviewed journals, but

4:03

that's not what these guys were doing.

4:05

They were raising questions in the pages

4:07

of the Wall Street Journal, Fortune,

4:10

and Forbes. So they were raising what

4:12

on the... face of things on the

4:14

surface looked superficially to be scientific questions,

4:16

but they were doing it in an

4:19

unscientific way and in unscientific locations. So

4:21

as historians of science, it was very

4:23

obvious to us that something was wrong,

4:25

and that's what we began to investigate.

4:28

Right, but if I'm naive to that

4:30

story, I might come to that and

4:32

think: here are people who might be

4:34

curmudgeons, here are people who might be

4:37

fed up, here are people who might

4:39

be angry, but that's not the claim,

4:41

right? The claim is deeper than that,

4:43

that these were people who are actually

4:45

deeply incentivized. Curmudgeons are normal in science

4:48

and they're not necessarily bad. I mean,

4:50

they can be a pain in the

4:52

ass. There's nothing per se wrong, particularly

4:54

there's nothing epistemologically wrong with being a

4:57

curmudgeon. But there is something pretty weird

4:59

when you start questioning climate science in

5:01

Women's Wear Daily, right? We started looking

5:03

into it, and then that's when we

5:06

discovered this connection to the tobacco industry.

5:08

And so then we thought, well, why

5:10

the heck would anyone, anyone at all,

5:12

much less a famous, prominent scientist,

5:15

make common cause with the tobacco industry?

5:17

And one of the key players here

5:19

was a man named Frederick Seitz, who

5:21

was an extremely famous physicist, someone who

5:23

was very close to people who had

5:26

won Nobel Prizes. He had been the

5:28

president of the U.S. National Academy of

5:30

Sciences, so the highest level of science

5:32

in America, and the president of the

5:35

Rockefeller University, one of America's most prestigious

5:37

scientific research institutes. So why would this

5:39

man, famous, respected, successful, make common cause

5:41

with the tobacco industry? And this is where

5:44

being a historian is a good thing,

5:46

because you can go into dusty archives

5:48

and you can find the papers where

5:50

people answer these questions in their own

5:53

words. And what we found was that

5:55

all four of these men did this

5:57

for what were essentially ideological reasons. That

5:59

is to say, it had nothing to

6:02

do with the science. They weren't really,

6:04

in private, having robust conversations

6:06

about how good is the evidence that

6:08

smoking causes cancer. No, that's not what

6:10

they're saying. What they're saying is: this

6:13

threatens freedom. They're saying if we let

6:15

the government regulate the economy, if we

6:17

let them ban smoking or even regulate

6:19

it strictly, like banning advertising, and this

6:22

is a really important point, we'll come

6:24

back to about free speech issues, if

6:26

we let them ban tobacco advertising, then

6:28

we're going to lose the First Amendment.

6:31

If we let them ban smoking in

6:33

public places, the next thing you know,

6:35

they'll be telling us where we can

6:37

live, what jobs we can have, what

6:40

cars we can drive, and we'll be

6:42

on the slippery slope to totalitarianism. And

6:44

so for them, it's deeply connected with

6:46

their work in the Cold War. So

6:49

the Cold War part of the story

6:51

is not just incidental, it's actually central.

6:53

They feared that if the government became

6:55

more involved in regulating the economy through

6:57

environmental or public health regulation, it would

7:00

be a backdoor to communism. So there's

7:02

this sort of slippery slope in their

7:04

own argument. They're accusing their enemies of

7:06

being on a slippery slope, but they

7:09

themselves go on the slippery slope of

7:11

going from climate scientists doing the work

7:13

that shows why we might want to

7:15

regulate fossil fuels to accusing them essentially

7:18

of being communists and wanting to see

7:20

a kind of communist government in the

7:22

United States. Sure, and honestly, this is

7:24

one of the oldest debates in science.

7:27

The whole enlightenment story that really stuck

7:29

was the story of Galileo versus the

7:31

Pope and the Pope saying, you know,

7:33

you basically can't say this because it

7:35

would erode a lot of things about

7:38

the world. And so there's always been

7:40

this thing with science of how do

7:42

we tell the truth separate from values

7:44

we may care about. If I can

7:47

just say on that, one of the

7:49

ironies of this, though, and we see

7:51

this throughout this story, these guys like

7:53

to present themselves as if they are

7:56

Galileo, that they're the ones who are

7:58

standing up for truth. But of course,

8:00

it's the opposite. They're on the side

8:02

of very, very powerful corporations, like the

8:05

fossil fuel industry. But they try to

8:07

flip that script and claim that they

8:09

are the martyrs. They're the oppressed ones.

8:11

And we see that going on even

8:14

today. And that's one of the reasons

8:16

we want to have you on the

8:18

podcast, because it's actually a really confusing

8:20

time to be a person today in

8:22

our news environment to figure out who

8:25

is being suppressed, what opinions are real,

8:27

what opinions are manufactured. And so we

8:29

really want to come back to that

8:31

theme again and again as we talk

8:34

about this, because it has such relevance

8:36

to where we are today. But before

8:38

we do that, I want to go

8:40

back and talk about some of the

8:43

mechanics of how doubt is seeded. The title tries

8:45

to convey the key thing. The idea

8:47

is that they're selling doubt. They're trying

8:49

to make us think that we don't

8:52

really know the answer, that the science

8:54

is unsettled, that it's too uncertain, that

8:56

the uncertainties are too great to justify

8:58

action. And it's a super clever strategy.

9:01

These people are very smart, right? They're

9:03

not dumb, because they realize if they

9:05

try to claim the opposite of what

9:07

climate scientists are saying. So if climate

9:09

scientists are saying the earth is heating

9:12

up, it's caused by human activities. If

9:14

they were trying to say, no, it's

9:16

not heating up. They would lose that

9:18

debate. They have already lost that debate

9:21

because the scientific evidence is overwhelming. But

9:23

if they say, well, we don't really

9:25

know, you know, we need more data,

9:27

we should do more research, and there's

9:30

a lot of uncertainty. The uncertainty is

9:32

a key part of this story. That's

9:34

a much harder thing for scientists to

9:36

argue against because if I say there's

9:39

a ton of uncertainty, and you say,

9:41

well... I mean, yeah, there is uncertainty.

9:43

Of course, there's always uncertainty in science,

9:45

but it's not that bad. You know,

9:48

the scientist is now on his back

9:50

foot, or her back foot, right? The

9:52

scientist is now put in a defensive

9:54

position because they cannot deny categorically that

9:56

there are uncertainties. So the scientists are

9:59

placed in this kind of defensive position

10:01

and the other reason why this strategy

10:03

is so clever is because they're saying

10:05

it's uncertain the science isn't settled there's

10:08

a big debate and then they say

10:10

in fact I will invite you to

10:12

debate me on my podcast on Fox

10:14

News. in the pages of the Wall

10:17

Street Journal. Now the scientists often agrees,

10:19

because the scientists believes in free and

10:21

open conversation, the scientists thinks, I have

10:23

nothing to hide, why wouldn't I debate?

10:26

But the fact is, by agreeing to

10:28

debate, the scientist loses, before he or

10:30

she has even opened their mouth, because

10:32

the purpose of this argument is to

10:34

make it seem that there's a debate.

10:37

Right. They win as soon as there

10:39

is a debate. Bingo. The merchants of

10:41

doubt have won. Exactly. That's right. So

10:43

it's like people's minds are left with

10:46

the idea that there is a controversy.

10:48

We still don't really know. And, you

10:50

know, there's so many other strategies that

10:52

I'd love you to sort of talk

10:55

about, you know, keeping the controversy alive,

10:57

you know, delaying, let's commission an NIH

10:59

study or a study to figure out

11:01

what the true effects are, astroturfing, these

11:04

fake organizations that get sort of spun up.

11:06

You talk about Citizens for Fire Safety

11:08

or the Tobacco Institute,

11:10

getting weaponized so that it's harder

11:13

to see the truth. Because basically, unless

11:15

we have antibodies for understanding these different

11:17

strategies, we're vulnerable to them. So essentially,

11:19

you are the kind of a little

11:21

vaccine here to help us have the

11:24

antibodies to understand. Yeah. And it's interesting

11:26

because some of my colleagues have now

11:28

started to talk about inoculation in the

11:30

context of bad information. But of course,

11:33

that's a really tricky metaphor, given that

11:35

we have lots of fellow Americans who

11:37

are suspicious now about vaccinations. So an

11:39

inoculation... But to what you just said, one of

11:42

the strategies kind of involves buying out

11:44

scientists. I hate to say this, but

11:46

it's true. One of the strategies is

11:48

to say we need more research. It's

11:51

too soon to tell. And it sadly

11:53

is relatively easy to get scientists to

11:55

agree to that because the reality is,

11:57

well, you know, scientists love to do

12:00

research and there always are more questions

12:02

that can be asked and as I've

12:04

already said, there are always some legitimate

12:06

uncertainties that we would do well to

12:08

look more closely at. So it's proved

12:11

very easy to get... scientists to buy

12:13

in sort of inadvertently by just saying,

12:15

oh, let's have a big research program.

12:17

So for example, back in the first

12:20

Bush administration, President Bush established the Global

12:22

Climate Research Program. Now back in 1990,

12:24

that wasn't necessarily a bad or malicious

12:26

thing to do, but it contributed to

12:29

this narrative that it was too soon

12:31

to tell that we needed to do

12:33

a lot more research, even though in

12:35

1992 President Bush signed the United Nations

12:38

Framework Convention on Climate Change, which committed

12:40

the United States to acting on the

12:42

available knowledge, which was already quite robust

12:44

at that time. Another thing you mentioned

12:46

were the astroturf organizations. So now we're

12:49

going from less dishonest to more dishonest.

12:51

So there's a whole range of activities,

12:53

some of which are catastrophically dishonest and

12:55

deceitful and... really appalling and maybe even

12:58

illegal, to others that are more manipulative.

13:00

So astroturf organizations involve creating organizations

13:02

that purport to be citizens groups or

13:04

purport to be representing important stakeholders like

13:07

firefighters and getting them to do the

13:09

dirty work of the industry. So you

13:11

mentioned the Citizens for Fire Safety. This

13:13

was an organization that was created and

13:16

wholly funded by the tobacco industry to

13:18

fight tobacco regulation by fighting back against

13:20

the overwhelming evidence that many house fires

13:22

were caused by smoking, particularly smoking in

13:25

bed. And so there were all kinds

13:27

of campaigns that pointed this out to

13:29

try to discourage people from smoking, particularly

13:31

from smoking in bed. The tobacco industry

13:33

made the claim that the real culprit

13:36

wasn't the cigarette, it was the

13:38

sheets and the pillow cases, and that

13:40

these things needed to be fireproofed. And

13:42

so they persuade people across the country,

13:45

states, the federal government, to pass regulations

13:47

requiring flame retardants in pajamas. And I

13:49

remember when I was a parent, it

13:51

was incredibly hard to find comfortable cotton

13:54

pajamas for my children because they were

13:56

all made out of these disgusting synthetic

13:58

fabrics filled with flame retardants. That was

14:00

pushed heavily by this group called the

14:03

Citizens for Fire Safety, represented by firefighters

14:05

who were in the pay of the

14:07

industry. So this was like true industry

14:09

shells. People should just stop here for

14:12

a moment and recognize just how diabolical.

14:14

It's very diabolical. I know. You've got

14:16

a product that is literally, you know,

14:18

causing houses to burn down. And instead

14:20

of actually changing that product, because they don't

14:23

want to change it, they can't really

14:25

change it, it's not really changeable.

14:27

And so they want to externalize

14:29

the source of this harm, this thing

14:32

that's happening in the world, saying, well,

14:34

there's another place that it's coming from.

14:36

It's coming from the flammable materials, let

14:38

alone the fact that that probably gave

14:41

us more PFAS and forever chemicals in

14:43

all of our furniture and bed sheets.

14:45

Now I know that for sure it

14:47

did, right? Right. And the idea, though,

14:50

that I think most people don't know,

14:52

there's sort of this asymmetry, just how

14:54

much effort would a, you know, incentivized

14:56

actor go through to spin up, you

14:58

know, lots and lots or dozens of

15:01

fake organizations, fake... institutions in order to

15:03

sow doubt about this thing. And so

15:05

that's what I was so excited to

15:07

have you on because I just don't

15:10

think people understand. So in the case

15:12

of social media, you know, they might

15:14

say, well, we need to do research,

15:16

or let's fund parent education programs so

15:19

that parents are better educated about how

15:21

to manage their kids' use of screen

15:25

time, which is of course not an

15:28

actual solution to the fact that they've

15:30

created strategy of. distracting people from the

15:32

true source of the problem. process foods

15:34

that are really hard to stop eating.

15:37

And you might be too young to

15:39

remember this, but when I was young,

15:41

there was an advertising campaign on television

15:43

for Lay's Potato Chips, and it featured

15:45

a young girl, a blonde, very pretty

15:48

young girl, and she's talking to the

15:50

devil. And the devil hands her a

15:52

potato chip and says, I bet you

15:54

can't eat just one. And I look

15:57

back on that ad now and my

15:59

mind is blown because in a way

16:01

they're admitting what they were doing. It

16:03

turned out they were doing research to

16:06

figure out how to manufacture a potato

16:08

chip that you couldn't eat just one

16:10

or five or ten that you would

16:12

eat the whole bag and it was

16:15

deliberate and it was knowing and they

16:17

even weirdly tipped their hand in the

16:19

ad except none of us realized that

16:21

that's what they were doing. Well, this

16:24

seems like also, just to do a

16:26

couple more here, there's another strategy which

16:28

is emphasizing personal agency, saying, well, it's

16:30

up to you to have personal responsibility

16:32

with how many Doritos, you know, you

16:35

have. It's up to the person who's

16:37

addicted to cigarettes to choose: do they

16:41

really want to

16:44

be addicted or not? They can still

16:46

choose that. Same with social media, which would threaten

16:48

trillions of dollars of value if they

16:50

had to change in any way. Yes,

16:53

well the agency one is crucial and

16:55

it relates to the sort of bigger

16:57

framework, which is the framework of freedom.

16:59

So as you pointed out, there are

17:02

many ad campaigns both on social media

17:04

and in legacy media, basically trying to

17:06

shift the burden away from the producer

17:08

of the damaging product to the consumer

17:10

and to say, well, this is our

17:13

fault because we drive too much. And

17:15

so BP ran a big ad campaign

17:17

that many of us have seen and

17:19

it was super successful to calculate your

17:22

own carbon footprint. And how many of

17:24

us even now think about that? They'll

17:26

say, oh, I'm traveling less because I'm

17:28

trying to reduce my carbon footprint, right?

17:31

And of course, reducing your carbon footprint

17:33

isn't a bad thing. If you can

17:35

do it, it's a good thing. But

17:37

the net result of this is to

17:40

shift agency, to shift it away from

17:42

the producer that is knowingly making a

17:44

harmful product and saying, no, it's my

17:46

fault because I made that choice. But

17:49

it wasn't entirely a choice because at

17:51

the same time the industry is fighting

17:53

regulations that would restrict fossil fuels. They're

17:55

fighting tax credits for electric cars, so

17:57

I'm not really making a free choice.

18:00

I'm making a choice that is heavily

18:02

affected by what the industry has done.

18:04

This is another strategy that we can

18:06

track back to the tobacco industry. Early

18:09

on the tobacco industry realized, and again

18:11

this is in the documents, we can

18:13

find them saying it in their own

18:15

words, that they would not succeed if

18:18

they said to the American people, yeah,

18:20

we know cigarettes will kill you, but

18:22

oh well, you know, enjoy it while

18:24

it lasts. No, that was not a

18:27

message that would work. Lots and lots

18:29

of people would say, oh, I should

18:31

try to quit. But if they said

18:33

this is about freedom? This is about

18:36

your right to decide for yourself how

18:38

you want to live your life. Do

18:40

you want the government telling you whether

18:42

or not you can smoke? And that

18:44

was a very powerful message. I think

18:47

for two reasons. One is because none

18:49

of us do want the government telling

18:51

us what to do. I think most

18:53

of us feel like, yeah, I want

18:56

to decide for myself where I live,

18:58

where I work, whether I smoke or

19:00

not. But also because it tied into

19:02

this bigger idea of America as a

19:05

beacon of freedom. That what makes America

19:07

America is that this is a country

19:09

of freedom. And so the industry ran

19:11

all kinds of campaigns with American flags,

19:14

with the Statue of Liberty, and we

19:16

talk about this in our new book,

19:18

The Big Myth. We can track this

19:20

back actually into the 1920s and 30s,

19:23

newsreels and documentaries, evoking all these icons

19:25

of American freedom. And this was a

19:27

very powerful argument because it meant that...

19:29

You weren't fighting for a deadly product.

19:31

You were fighting for freedom. And who

19:34

was going to argue against that? Yeah.

19:36

So it occurs to me that when

19:38

we talk about this, what we're really

19:40

talking about is not doubt itself. What

19:43

we're talking about is sort of unfair

19:45

conversational moves, right? It's unfair to turn

19:47

a fact conversation into a values conversation.

19:49

It's unfair to pretend that everyone is

19:52

just saying this when you're bankrolling this.

19:54

And so I kind of want to

19:56

come back because I have to admit

19:58

I bristle slightly about just focusing on

20:01

doubt because science and the process of

20:03

honest inquiry demands that we sit with

20:05

uncertainty. And, you know, it's part of

20:07

our ability to act in this world.

20:09

We don't know things. Sometimes longitudinal studies

20:12

do take 20, 30, 40 years. What

20:14

is the difference between manufactured doubt that

20:16

is this deeply unfair conversational move that

20:18

destroys our ability to be together versus

20:21

more wise sitting with doubt. Yeah, that's

20:23

a great question. And it's one thing

20:25

we talked about in the book originally

20:27

that the doubt strategy is very

20:30

clever because it's a kind of jujitsu

20:32

move. It's taking what should be a

20:34

strength of science, the fact that scientists

20:36

are motivated by doubt, which in a

20:39

different context we call curiosity, scientists do

20:41

spend a lot of time worrying about

20:43

uncertainties and how to characterize them accurately,

20:45

fairly and honestly, and without some degree

20:48

of doubt, there wouldn't be progress in

20:50

science. So that's a good thing, but

20:52

the merchants of doubt take that good

20:54

thing and they turn it into a

20:56

liability and they want to make us

20:59

think that unless the science is absolutely

21:01

positively 100% certain, then we don't

21:03

know anything and can't act. And so

21:05

it's really about exactly what you said,

21:08

that we as citizens have to understand

21:10

that we have to live with uncertainty.

21:12

I wrote a paper once that was

21:14

called Living with Uncertainty. And the reality

21:17

is we do that in our ordinary

21:19

lives all the time. We get married,

21:21

we buy a house, we buy a

21:23

car, we invest for retirement even though...

21:26

We might die beforehand. So we live

21:28

with uncertainty in our daily lives all

21:30

the time and we trust ourselves to

21:32

make judgments about uncertainty in our daily

21:35

lives because we think we have the

21:37

information we need to make those choices.

21:39

And so this leads to another strategy

21:41

we haven't talked about, which is the

21:43

direct attacks on scientists. Part of the

21:46

way this works also is to try

21:48

to undermine our trust in science generally, to

21:50

say that scientists are crooked, they're dishonest,

21:52

they're in it for the money, which is

21:55

again pretty ironic coming from the tobacco

21:57

industry. Very common. And this is one

21:59

of the things that we've tracked in

22:01

our work that's particularly distressing about what's

22:04

going on right now. Many of the

22:06

things we studied began as an attack

22:08

on particular sciences that seemed to show

22:10

the need for regulation like science related

22:13

to tobacco, the ozone hole, climate change,

22:15

also pesticides. But then it's spread. And

22:17

what we've seen in the last 10

22:19

years, really since we published the book,

22:21

is this broader expansion to trying to

22:24

cast doubt on science more generally. So

22:26

this broad attack on science and scientists

22:28

in order to make us think we

22:30

can't trust scientists, but then who should

22:33

we trust? So as you say, now

22:35

we're in this saturated media landscape with

22:37

information coming at us from all directions,

22:39

and it's really, really hard for anyone

22:42

to know who they should be trusting.

22:44

I feel like there's a distinction between

22:46

reflexive mistrust, which is a problem, and

22:48

then reflexive trusting, which is also a

22:51

problem, and what we're looking for is

22:53

warranted trustworthiness. And one of the things

22:55

I'm worried about the most in this

22:57

space is that I've seen the response

23:00

of scientists, even friends and colleagues, is...

23:02

to try to push for more certainty.

23:04

And they'll say, no, no, we know

23:06

this, we're more certain. And I have

23:08

to admit, I sort of doubt that

23:11

that's the right response. I kind of

23:13

think we all need to sit with

23:15

more uncertainty. I mean, if anything, I

23:17

blame the marketing teams. In the tobacco

23:20

example, I blame the cigarettes are safe,

23:22

eight of ten doctors agree campaigns, pulling us

23:24

up to a place where we believed

23:26

they were safe. And so how do

23:29

we counteract that? Because I'm a little

23:31

worried that science will be a race

23:33

to the bottom of people shouting and

23:35

claiming what we know as a sort

23:38

of a false certainty in reaction to

23:40

this very combative environment. Yes, I agree.

23:42

I think you're absolutely right. I think

23:44

it's a big mistake for scientists to

23:47

say, oh, we know this absolutely. I

23:49

think it's much better to say, of

23:51

course there's uncertainty in any live science.

23:53

The whole point of science is to...

23:55

It's a process of discovery and learning.

23:58

And this is of course where history

24:00

of science is so helpful, because of

24:02

course we learn new things, and that's

24:04

good. But we have an issue right

24:07

now. We have to make decisions that

24:09

in some cases are literally life and

24:11

death. And in a case like that,

24:13

it does not make any sense to

24:16

say, Oh, well, I need to wait

24:18

another 10 years till we better understand

24:20

this virus, or I have to wait

24:22

until sea level is on my window

24:25

sill, because then it's too late to

24:27

act. We make decisions based on the

24:29

best available information we have right now,

24:31

but we also prepare to change in

24:33

the future if we need to. And

24:36

we have a term for that in

24:38

science. It's called adaptive management. And it

24:40

was used very, very successfully in the

24:42

ozone hole case. The international convention, the Montreal

24:45

Protocol that was signed to deal with

24:47

the ozone hole, had a feature built in

24:49

for adaptive management, because scientists knew that

24:51

there were still things they didn't understand

24:54

about ozone depletion. And so the politicians

24:56

put in a feature that as they

24:58

learn more information, the regulations could be

25:00

made more strict, or they could be

25:03

made less strict. And we could do

25:05

the same thing for climate change. I

25:07

mean, it's what we should do. We

25:09

should start, we should... Always start with

25:12

the least regulation that we think will

25:14

get the job done, but be prepared

25:16

to tighten the regulations if more science

25:18

tells us we need to or to

25:20

lessen them as the case may be.

25:23

What I love about the example you're

25:25

giving with the Montreal Protocol Agreement is

25:27

it's law that recognizes its own humility,

25:29

that it's not always going to be

25:32

accurate, that the letter of the law

25:34

and the spirit of the law are

25:36

going to diverge and we need to

25:38

be able to update the assumptions of

25:41

the law as fast as the sort

25:43

of situation requires it. And that's building

25:45

in kind of the right level of

25:47

uncertainty. Yeah, and if I could jump

25:50

in on that, you know, a lot

25:52

of people have criticized the IPCC for...

25:54

a variety of different reasons, but I

25:56

think it's really important for people to

25:59

understand that the UN Framework Convention on

26:01

Climate Change was modeled on the ozone

26:03

case. Because the ozone case was such an

26:05

effective integration of science and policy and

26:07

it has proved effective and has done

26:10

the job it was intended to do,

26:12

the UN Framework Convention was modeled on

26:14

that. Now it hasn't worked, but I

26:16

think the main reason it hasn't worked

26:19

is because of the resistance of the

26:21

fossil fuel industry and we've now been

26:23

witness to 30 years of organized disinformation

26:25

and campaigns to prevent, really to prevent,

26:28

governments from doing what they promised to

26:30

do back in 1992. So Naomi, one

26:32

of the things you write about in

26:34

your new book, The Big Myth, is

26:37

how those who are advocating for the

26:39

maximum unregulated sort of free market approach

26:41

have a selective reading of history and

26:43

have this great example of Adam Smith.

26:45

Could you speak to that? Yeah. So

26:48

one of the things we talk about

26:50

in the book is how the Chicago

26:52

School of Economics... really misrepresented Adam Smith

26:54

and how many of us have this

26:57

view of Adam Smith, the father of

26:59

capitalism as an advocate of unregulated markets

27:01

that business people should just pursue their

27:03

self-interest and all good will come from

27:06

people pursuing their self-interest. That is not

27:08

what Adam Smith wrote. in the wealth

27:10

of nations. In fact, he has an

27:12

extensive discussion of the absolute essential nature

27:15

of banking regulation. He says if you

27:17

leave banks to bankers, they will pursue

27:19

their own self-interest and they will destroy

27:21

the economy, or at least put the

27:24

economy at risk. You can't let factory

27:26

owners just pursue their self-interest, or they'll

27:28

pay their workers starvation wages. And he

27:30

has multiple examples of this, which he

27:32

goes on to describe at quite great

27:35

length. Yet all of this has been

27:37

removed from the way Adam Smith has

27:39

been presented in American culture since 1945.

27:41

And in fact, it's a kind of,

27:44

you know, I teach agnotology, the production

27:46

of ignorance, the study of ignorance, and

27:48

it's really interesting to see how this

27:50

is a beautiful example of it, because

27:53

in the 1920s and 30s, there were

27:55

people even at the University of Chicago

27:57

saying, no, that's not what Adam Smith

27:59

said. But by the 1950s, that

28:02

had all been erased, it had been

28:04

expunged, and they were producing edited

28:06

volumes of Adam Smith that left out

28:08

all of his discussion of the rights

28:11

of workers, the need for regulation, etc.

28:13

So I want to take us a

28:15

little bit in a different direction, which

28:17

is another way that science can get

28:19

weaponized. So one of the other areas

28:21

of our work, Naomi, is around

28:23

AI risk. And, you know, artificial intelligence

28:25

is the most transformative technology in human

28:27

history. Intelligence is what

28:30

birthed all of our inventions and all

28:32

of our science, and if you suddenly

28:34

have artificial intelligence, you can birth an

28:36

infinite amount of new science. It is

28:38

so profound and so paradigmatic, I think

28:40

it's hard for people to get their

28:42

minds around it. There's obviously a lot

28:44

of risk involved in AI, and one

28:46

of the things that I've noticed, some

28:49

of the major frontier AI labs, like

28:51

OpenAI. They came out after

28:53

these whistleblowers left OpenAI saying, hey,

28:55

we have safety concerns. And what they

28:57

said in response was: we believe in

28:59

a science-based approach to studying AI risk,

29:01

which basically meant they were pre-framing all

29:03

of the people who are safety concerned

29:05

as sci-fi oriented, that they were not

29:08

actually grounded in real risks here on

29:10

Earth, but they were living in

29:12

sort of the terminator scenarios of loss

29:14

of control and sci-fi. And that's one

29:16

of the things that I just, one

29:18

of the reasons I wanted to have

29:20

you on is I want to think

29:22

about how our collective antibodies can detect

29:24

when this kind of thing is going

29:27

on because that sounds like quite a

29:29

reasonable thing to say. We want a

29:31

science-based approach to AI risk and

29:33

we don't want to be manufacturing doubts

29:35

or thinking hypothetically about scenarios. Just curious

29:37

your reaction to that. I have to

29:39

say I do get a

29:41

little nervous when I hear people say

29:43

we want a scientific approach because I

29:46

want to know well who are those

29:48

people and what do they mean by

29:50

a scientific approach, because it could be

29:52

an excuse to push off

29:54

regulations. So I would need to learn

29:56

more about, you know, who those people

29:58

are and what they mean by a

30:00

science-based approach. But I guess what I

30:02

would say, you know, it's interesting as

30:05

a historian thinking about how radical this

30:07

is and how serious the risks are,

30:09

because I agree with you, I think

30:11

it is radical, and I think both

30:13

the risks and the potential rewards are

30:15

huge. But it does remind me a

30:17

little bit of the chemical

30:19

revolution, because many of the same things were

30:21

said about chemicals, particularly plastics, but also

30:23

pharmaceuticals, other chemicals in the early to

30:26

mid-20th century, and chemicals did revolutionize industry.

30:28

They revolutionized textiles, plastics was huge, you

30:30

know, all kinds of things. And similarly,

30:32

there were many aspects of the chemical

30:34

industry that were very helpful to modern

30:36

life, and there were some aspects that

30:38

were really bad. And so how do

30:40

we make sense of that? And I

30:42

think one thing we know from history

30:45

is it gets back to my favorite

30:47

subject that people in Silicon Valley love

30:49

to hate, which is regulation. That part

30:51

of the role of government is to

30:53

play this balancing act between competing interests.

30:55

In fact, you could argue the whole

30:57

role of government is to deal with

30:59

competing interests. That we live in a

31:01

complex society. What I want isn't necessarily

31:04

the same as what you want. And

31:06

in a perfect world, we'd all get

31:08

what we wanted. In a perfect world,

31:10

we could be libertarians. We all just

31:12

decide for ourselves. But it doesn't work,

31:14

because what I do affects you and

31:16

vice versa. And so that's where governance

31:18

has to come in. And it doesn't

31:20

have to be the federal government. It

31:23

could be corporate governance. It could be

31:25

watchdogs. But I do think that the

31:27

way in which some elements of the

31:29

AI industry are pushing back against regulation

31:31

is really scary and really bad. Because

31:33

if we don't have some kind of

31:35

set of reasonable regulations of this technology

31:37

as it develops, ideally with adaptive management,

31:39

we could find ourselves in a really

31:42

bad place. And one of the things

31:44

we know from the history of the

31:46

chemical industry is that I think it's

31:48

fair to say that many chemicals were

31:50

underregulated. You mentioned PFAS a few minutes

31:52

ago. Again, DuPont knew a long time

31:54

ago that these chemicals were potentially harmful

31:56

and were getting everywhere. So the industry

31:58

knew that this was happening and pushed

32:00

hard against revealing the information they had,

32:03

pushed hard against regulation, and we now

32:05

live in a sea, a chemical soup

32:07

where it's become almost impossible to figure

32:09

out what particular chemicals are doing what

32:11

to us, because it's not a controlled

32:13

experiment anymore. Well, I think that points

32:15

at one of the core problems here

32:17

is that, you know, as much as

32:19

you want good science, good science takes

32:22

time and the technology moves faster than

32:24

the science. And so the question is,

32:26

what do you do with that when

32:28

the technology is moving and rolling out

32:30

much faster than the science? So what

32:32

does it mean to regulate this wisely?

32:34

You talked about one thing, which is

32:36

adaptive management. Are there other tactics that

32:38

can help make sure we get this

32:41

right as we begin to figure it

32:43

out? Yeah, that's a great question. And

32:45

again, so good news here is that

32:47

we do have the ozone example. We

32:49

have at least one example where it

32:51

was done right and we can look

32:53

to that example. And I think one

32:55

thing that we learned from that case

32:57

is to do with the importance of

33:00

having science, industry, and stakeholder voices

33:02

involved. Because I thought one of the

33:04

really terrible things that someone said recently

33:06

about AI, I think it was Eric

33:08

Schmidt, correct me if I'm wrong.

33:10

And I thought that was

33:12

a very shocking and horrible thing for

33:14

an otherwise intelligent person to say, because

33:16

first of all, I don't think it's

33:19

true. I mean, I could say the

33:21

same thing about chemicals, I could say

33:23

the same thing about climate change, but

33:25

intelligent people, you know, who are willing

33:27

to work and learn can come to

33:29

understand what these risks are. And you

33:31

talk about this in your book, right,

33:33

as epistemic privilege, and one of the

33:35

challenges that's sort of fundamental to all

33:37

industries is... The people inside of the

33:40

plastics industry or inside the chemicals industry,

33:42

they do have more technical knowledge than

33:44

a policymaker and their policy team is

33:46

going to have. That doesn't mean you

33:48

should trust them because their incentives are

33:50

completely off to give them the maximum

33:52

agency and freedom. We've covered that on

33:54

some of our previous episodes. But that's

33:56

actually one of the sort of questions

33:59

we have to balance is, okay, well,

34:01

we want the regulation to be wisely

34:03

informed. We want it to be adaptive

34:05

and never fixed. We want to leverage

34:07

the insights from the people who know

34:09

most about it, but we don't want

34:11

to have those insights be funneled through

34:13

these bad incentives that then end up

34:15

where we don't actually get a result

34:18

that has the best interest of the

34:20

public in mind. And I feel like

34:22

that's sort of the eye of the

34:24

needle that we're trying to thread.

34:28

That really feeds into the point

34:34

I want to make here, which is:

34:37

absolutely, the technologists know the

34:39

most about that technology, and so they

34:41

have to be at the table, and

34:43

they definitely have to be involved. But

34:45

they don't necessarily know the most about

34:47

how these things will influence the users.

34:53

They don't necessarily know the most about

34:56

how you craft good policy. Or stakeholders.

34:58

Or what about labor historians who have

35:00

looked at automation in other contexts. I

35:02

mean, one of the big worries about

35:04

AI is that a lot of us

35:06

will be put out of work and

35:08

that can be really socially destabilizing. Well,

35:10

there are people who are experts on

35:12

that. And so you could imagine bringing

35:14

to the table some kind of commission

35:17

that would bring the technologists, policy experts,

35:19

and people who could represent, you know,

35:21

the risk to stakeholders, maybe even some

35:23

psychologists who study children. I mean, the

35:25

point is there's more than one kind

35:27

of expertise that's needed here, and the

35:29

technical expertise is absolutely essential, but it's

35:31

necessary, but not sufficient. Yeah, and I

35:33

certainly agree with you in that we

35:36

need all of society to come together

35:38

to figure out how to do this

35:40

well. But having lived through the early

35:42

internet and the Ted Stevens 'the internet

35:44

is a series of tubes' moment, and the

35:46

inability of Congress to understand what they

35:48

were dealing with, I have a certain

35:50

amount of sympathy for this learning curve

35:52

that we're all on together. I mean,

35:55

Tristan and I can't even keep up

35:57

with the news and this is our

35:59

full-time job. And so I'm curious because

36:01

not only will people say that certain

36:03

people outside of industry don't understand, but

36:05

people say that our society... has become

36:07

over-regulated or the regulatory apparatus is too

36:09

slow, not just from the right, but

36:11

from the left. People will say that

36:14

building new housing is too onerous because

36:16

of environmental regulations, for example. And I'm

36:18

curious how you respond to that, because

36:20

you want to pull all of society,

36:22

you want to build committees, you want

36:24

to do this, and I think I

36:26

agree with you from a values perspective

36:28

that we need more of society in

36:30

this conversation, but I'm not sure how

36:33

good we are doing that. I don't

36:35

want to come across sounding like a

36:37

Pollyanna, although I should always point out,

36:39

you know, the moral of the Pollyanna

36:41

story is that the world becomes a

36:43

better place because of her optimism, and

36:45

I think we often forget that. We

36:47

think calling someone a Pollyanna as a

36:49

criticism. But I think, I guess I

36:51

would say two things about that. First,

36:54

I'd want to slightly push back on

36:56

the idea that we have people on

36:58

the left as well as the right

37:00

who are anti-regulation. The

37:02

business opposition to regulation in this country,

37:04

it's almost all from the right.

37:06

There are some examples, but even the

37:08

housing stuff, I mean, I was just

37:10

talking to an urban historian the other

37:13

day about how the real estate industry

37:15

is really behind a lot of this

37:17

pushback against housing regulation, not communities. I

37:19

mean, there are some exceptions, particularly in

37:21

California. But, you know, there's been a

37:23

hundred year history, I mean, this is

37:25

the story we tell in The Big

37:27

Myth, of the business community insisting that

37:29

they are over-regulated, and they've used it

37:32

to fight back against regulation of child

37:34

labor, protections of worker safety, tobacco, plastics,

37:36

you know, pesticides, DDT, and also saying

37:38

that if, you know, if the government

37:40

passes this regulation, our industry will be

37:42

destroyed. The automobile industry claimed that if

37:44

we had seat belt laws, the U.S.

37:46

auto industry would be destroyed. And none

37:48

of that was true. And every time

37:51

a regulation was passed, industry adapted and

37:53

typically passed the cost on to consumers,

37:55

which, you know, maybe wasn't always great.

37:57

Maybe sometimes we paid for regulations we

37:59

didn't really need. But in general, the

38:01

opposition to regulation generally comes from the

38:03

business community who wants to do what

38:05

they want to do and they want

38:07

to make as much money as they

38:10

want to make and make it as

38:12

fast as possible. So it gets back

38:14

to what Tristan said about the incentives.

38:16

I understand that. If I were a

38:18

business person I would probably want to

38:20

run my business the way I want

38:22

to run it as well. But in

38:24

a democratic society we have to weigh

38:26

that against the potential harms to other

38:28

people, to the environment, to biodiversity, to

38:31

children. And so this gets back to

38:33

another thing that's really important, especially in

38:35

Silicon Valley, which is the romance of

38:37

speed. We live in a society that

38:39

has, American society has always had a

38:41

romance with speed, railroads, automobiles, space travel.

38:43

We love speed, we love novelty, and

38:45

we like the idea that we are

38:47

a fast-paced, fast-moving society. But on the

38:50

other hand, sometimes moving too fast is

38:52

bad. Sometimes when we move fast and

38:54

break things, we break things we shouldn't

38:56

have broken. And I think we are

38:58

witnessing that in spades right now. I

39:00

mean, we have a broken democracy in

39:02

part because we move too fast, in

39:04

my opinion, with telecommunications deregulation. Something that

39:06

was supposed to be democratizing and give

39:09

consumers more choice, has ended up giving

39:11

us less choice, paying huge bills for

39:13

our streaming services. really contributing to political

39:15

polarization because of how fragmented media has

39:17

become. So that's a really... I have

39:19

an idea. Let's go even faster with

39:21

AI. Yeah, exactly. So, you know, this

39:23

is a really good moment to be

39:25

having this conversation because one of the

39:28

things we're seeing now is exactly what

39:30

we wrote about in our last book,

39:32

the big myth, which is the business

39:34

attempt to dismantle the federal government, because

39:36

they resent the role that the federal

39:38

government has played in regulating business in

39:40

this country. And this is a story

39:42

that has been going on for 100

39:44

years, but is suddenly unfolding in real

39:47

time incredibly rapidly in front of us.

39:49

And part of this argument has to

39:51

do with this idea that government regulation

39:53

is a threat to freedom and that

39:55

any restriction on business puts us on

39:57

this slippery slope to loss of freedom.

39:59

But of course it's not true, because

40:01

we make choices all the time. And

40:03

so one of the examples I like

40:06

to cite, which actually came from

40:08

a debate among neoliberals in the 1930s

40:10

about what it meant to be a

40:12

neoliberal, and one of them said, look,

40:14

being against regulation because you think it

40:16

eliminates freedom, is like saying that a

40:18

stop light or stop sign or a

40:20

red light is a slippery slope on

40:22

the road to eliminating driving. No one

40:24

who thinks we should have stop signs

40:27

on roads is trying to eliminate driving.

40:29

We're trying to make driving safe. And

40:31

most regulations that exist in the world,

40:33

or many, but probably most,

40:35

have to do with safety, have to

40:37

do with protecting workers, children, the environment,

40:39

biodiversity, against, you know, other interests. And

40:41

so it's always a balancing act. It's

40:43

about, of course, we want economic activity.

40:46

And of course, we want jobs. And

40:48

of course, we know that business... plays

40:50

an essential role in doing those things.

40:52

But we also don't want business to

40:54

kill people with dangerous products. And we

40:56

don't want business to trample the rights

40:58

of working people. We don't want business

41:00

to exploit children. Absolutely. You know, as

41:02

we talk about the urgency that we're

41:05

all feeling and the urgency of these

41:07

problems and how AI even makes that

41:09

worse. I want to fold in that

41:11

everything feels so urgent and some of

41:13

that urgency is real in that we're

41:15

hitting these really real limits and we're

41:17

undermining parts of our society and other

41:19

parts of it seem like a hall

41:21

of mirrors that the internet has created

41:24

where everyone can't slow down to even

41:26

think about a problem because it's all

41:28

so urgent that we just have to

41:30

act now so I can't even sit

41:32

with my uncertainty on something. How do

41:34

you think that this conversation space or

41:36

this compression that we're all feeling around

41:38

conversations that may take a decade to

41:40

settle the science. How do you think

41:43

that plays into the problem? And what

41:45

would you do? Yeah, I think that's

41:47

a great question. I feel like in

41:49

a way it's one of the paradoxes

41:51

of the present moment. We are facing

41:53

urgent problems. Climate change is irreversible. So

41:55

the longer we wait to act, the

41:57

worse it gets and the less we're

41:59

able to fix it. So there should

42:01

be some sense of urgency about it.

42:04

And the same with AI, right? I

42:06

mean, as we've been talking about this

42:08

whole hour, this technology is moving very

42:10

quickly. It's already impacting our lives in

42:12

ways we wouldn't have even imagined five

42:14

or ten years ago. But at the

42:16

same time, I think it would be

42:18

really bad to panic. Panic is never

42:20

a good basis for decision-making. And there's

42:23

a way in which the very urgency

42:25

of it really requires us to stop

42:27

and to think and to listen. And

42:29

especially if we think about adaptive management,

42:31

adaptive management is all about not overreacting

42:33

in the moment, making the decision that

42:35

makes the most sense based on the

42:37

information you have, but being prepared to

42:39

adjust in the future. And one of

42:42

the ways that the Montreal Protocol worked

42:44

was by setting specific deadlines, dates at

42:46

which the people involved would review the

42:48

evidence and decide whether an adjustment was

42:50

needed. And I think that's a beautiful

42:52

model because it incorporates both acting on

42:54

what we know now, not delaying, not

42:56

making excuses to delay, but also recognizing

42:58

human frailty, recognizing the benefits of learning

43:01

more information and being able to work

43:03

in that benefit, and making it structured.

43:05

So it wasn't just a sort of

43:07

promise, oh yeah, we'll look at that

43:09

again next week, but it was actually

43:11

structured into the law. That feels like

43:13

something that all laws should be doing,

43:15

actually, especially all laws that have to

43:17

do with emerging science or technology. Is

43:20

this a common practice or is this

43:22

a one-off that Montreal did? Yeah, that's

43:24

a great question. It would be a

43:26

good thing to study. I don't really

43:28

know the answer to that. But, you know,

43:30

the conservatives are right about that, that

43:32

we should have better mechanisms for if

43:34

we set up a government agency to

43:36

think about, you know, how long do

43:38

we want this agency to operate? And

43:41

should there be some mechanism for, you

43:43

know, after 10 years deciding if you

43:45

want to renew it? Almost like when

43:47

you take out a library book, you

43:49

know, you could renew it. I think

43:51

that would be a useful thing to

43:53

do. And certainly, one of the things

43:55

that Eric Conway and I write about

43:57

in our new book is that in

44:00

the 1970s, it was absolutely the case

44:02

that there were regulations from the 20s

44:04

and 30s that needed to be revisited.

44:06

I mean, there was a whole world

44:08

of trucking regulation that made no sense,

44:10

given that we now had airlines. Telecommunications,

44:12

it was absolutely right in the Clinton

44:14

era that we revisited telecommunications regulation that was

44:16

based on radio now that we had

44:19

the internet. But again, there wasn't a

44:21

good mechanism for doing that. And I

44:23

think the Clinton administration moved too quickly

44:25

and made some really big mistakes and

44:27

broke some really serious things. So I

44:29

think that Montreal is a good model

44:31

for thinking about how could we do

44:33

something like that? You know, maybe for

44:35

AI, maybe we should have some kind

44:38

of commission on AI safety that has

44:40

a 10-year term, but that is renewable

44:42

if Congress or whoever votes to renew

44:44

it at that time, otherwise it sunsets.

44:46

This is a new thought for me,

44:48

which is you either hear people saying,

44:50

look, there's too many regulations or people

44:52

saying, well, it's not regulated enough. But

44:54

what you're saying is, it's both at

44:57

the same time. We always have old

44:59

regulations that we need to pull off

45:01

and new ones that we need to put on,

45:03

because the existing ones aren't protecting us

45:05

in the ways we need, and we should

45:07

expect that we'll always be doing both.

45:09

One of the problems

45:15

of history, and as a historian, I

45:18

believe absolutely in the value of history

45:20

and all the lessons we can learn,

45:22

but sometimes people learn the wrong lessons

45:24

or they carry forward experiences from the

45:26

past that maybe aren't necessarily relevant now.

45:28

And so we need some balance between creating a

45:30

thing that we think we need now,

45:32

but also creating a mechanism to revisit

45:34

it and to learn from our mistakes.

45:37

There's also a way that AI can

45:39

play a role in helping to rapidly

45:41

accelerate our ability to find those laws

45:43

that need updating or are no longer relevant

45:45

and to help craft what those

45:47

updates would be, and find laws that are in

45:49

conflict with each other. I'm not trying

45:51

to be a techno-solutionist or say

45:53

that AI can fix everything, but I

45:56

think, to the degree that law is

45:58

actually part of how we solve some

46:00

of these multipolar traps, the "if I

46:02

don't do it, I lose to the

46:04

guy that will" dynamic, law is the solution.

46:06

But the problem is that people have

46:08

seen so many examples, rightly so, of

46:10

bad laws, bad regulation, and so this

46:12

is about how do we get more

46:15

adaptive, more reflective ways of doing this,

46:17

and AI can actually be a part

46:19

of that solution when I think about

46:21

a digital democracy. So

46:24

we've talked a lot in this podcast

46:26

about how hard it is to make

46:29

sense of the world right now. These

46:31

competing doubts and over-certainties and different

46:33

cultic takes that social media has riven

46:35

our world into. What are ways that

46:37

individuals can actually stay grounded and understand

46:39

when something is distorted? What are the

46:41

antibodies that prevent people from being so

46:43

susceptible to disinformation right now? Well, I

46:46

think... You know, this is a really

46:48

tricky question, and if I had a

46:50

simple answer, that would be my next

46:52

book, right? Ten ways not to be

46:54

fooled by nonsense or something like that.

46:56

And maybe I'll write that book. But

46:58

I think an important thing to realize

47:00

is that, you know, we all have

47:03

brains, and we

47:05

all have the capacity to use our

47:07

brains. So I really encourage people to

47:09

kind of embrace their own intelligence, and

47:11

then to ask questions. So if someone

47:13

is telling you something, the most obvious

47:15

question to ask is, who benefits

47:17

from what they're saying, and what is

47:20

their interest. And you know, that can

47:22

be used in a hostile, skeptical way,

47:24

and it sometimes has been. But in

47:26

general, it's always legitimate to say, well,

47:28

what does this person get out of

47:30

it? So I admit freely, I want

47:32

you to read my books. I get

47:34

some money from my books, but not

47:37

a lot. It's like a bucket book.

47:39

You know, I can't quit my day

47:41

job, as opposed to the fossil fuel

47:43

industry that is looking at trillions and

47:45

trillions of dollars in profit. Climate scientists,

47:47

most of whom get paid good middle

47:49

to upper middle class salaries, but they

47:51

don't get paid any more if they say

47:54

climate change is serious than if they

47:56

say it's not serious, or the fossil

47:58

fuel industry that stands to earn trillions

48:00

of... more if they get to continue

48:02

doing what they're doing. So the vested

48:04

interest there are pretty lopsided and you

48:06

don't have to be a brainiac or

48:08

Harvard professor to see that difference. I

48:10

remember when we did our AI Dilemma

48:13

talk about AI risk and people said,

48:15

but these guys profit from speaking about

48:17

risk and doomerism and here's all the

48:19

problems of technology as if that's what

48:21

is motivating our concerns. And to the

48:23

degree that we profit in any way

48:25

from talking about those concerns, how does

48:27

that compare relative to the trillions of

48:30

dollars that the guys on the other

48:32

side of the table can make? And

48:34

I think: how does one demonstrate that

48:36

they are a trustworthy actor, that they

48:38

are coming from a place of care

48:40

about the common good? And that's built

48:42

over time, and I think,

48:44

especially in the age of AI, when

48:47

you can basically sow doubt about everything

48:49

and people don't know what's true, the

48:51

actors that are consistently showing up with

48:53

the deepest care and trustworthiness will sort

48:55

of win in that world as we

48:57

erode that trust. Yeah, I think that's

48:59

right. And that's one area where I

49:01

think scientists could do a better job.

49:04

A lot of scientists... We've been trained

49:06

to be brainiacs, to use technical knowledge,

49:08

to use mathematics, and in our science, those

49:10

tools are important and good, but we

49:12

also have to recognize that when you

49:14

talk to the broader public, those tools

49:16

are not necessarily the best ones. And

49:18

then you have to relate to people

49:21

on a human level. One thing I've

49:23

been thinking a lot about in recent

49:25

years, I feel that in academia we

49:27

are taught to talk, right? We're taught

49:29

to get our ideas out, to write

49:31

books, and it's all about, you know,

49:33

I'm getting my ideas out there, and

49:35

we aren't really taught to listen. And

49:38

so I really think that it's important

49:40

for anyone who's in any controversial space,

49:42

whether they're coming out as a scientist,

49:44

a journalist, a technologist, whatever, to recognize

49:46

the importance of listening and to try

49:48

to understand people's concerns. Because, you know,

49:50

I spent some time in Nebraska some

49:52

years ago talking with farmers and one

49:55

of the farmers said to me, I

49:57

just don't want the price of my

49:59

fuel to go up. I thought, well,

50:01

that's totally legitimate. If I were a

50:03

farmer, I wouldn't either. So it means

50:05

if we think about climate solutions, we

50:07

have to think about solutions that don't

50:09

hurt farmers. Tax credits, you know, people

50:11

have talked about fee-and-dividend systems

50:14

for carbon pricing, but to be mindful

50:16

of how this is affecting people, and

50:18

how we can structure solutions that take

50:20

those considerations into account? Naomi, thank you

50:22

so much for coming on Your Undivided

50:24

Attention. Your work on The Merchants of

50:26

Doubt and The Big Myth is really

50:28

fundamental, and I deeply appreciate what you're putting

50:31

out in the world. Yeah, thanks, Naomi.

50:33

Thank you. It's been a great conversation.

50:35

Your Undivided Attention is produced by the

50:37

Center for Humane Technology, a non-profit working

50:39

to catalyze a humane future. Our senior

50:41

producer is Julia Scott. Josh Lash is

50:43

our researcher and producer, and our executive

50:45

producer is Sasha Figan. Mixing on this

50:48

episode by Jeff Sudan, original music by

50:50

Ryan and Hayes Holiday. And a special

50:52

thanks to the whole Center for Humane

50:54

Technology team for making this podcast possible.

50:56

You can find show notes, transcripts, and

50:58

much more at humanetech.com. And if you

51:00

like the podcast, we'd be grateful if

51:02

you could rate it on Apple Podcasts

51:05

because it helps other people find the

51:07

show. And if you made it all

51:09

the way here, let me give one

51:11

more thank you to you for giving

51:13

us your undivided attention.
