End

Released Friday, 30th November 2018

Episode Transcript


0:03

This is not a hoax. This

0:05

is not a joke. It is

0:07

becoming clear that we hold

0:09

in our hands the fate of

0:12

the entire human race. Those

0:14

of us alive today are part of a

0:16

very small group, including us

0:19

and perhaps a few generations to follow,

0:22

who are responsible for the future

0:24

of humanity. And

0:26

if it turns out that we are alone

0:28

in the universe, then even the fate

0:31

of intelligent life may hang in the balance.

0:34

No other humans have ever been in the unenviable

0:37

position that we are. No

0:39

humans who lived before were actually

0:42

capable of wiping the human race from

0:44

existence. No other humans

0:46

were capable of screwing things up

0:48

so badly and permanently. And

0:51

those future humans to come won't be

0:53

in this position either. If we

0:56

fail and the worst happens, there

0:58

won't be any future humans. And

1:01

if we succeed and deliver the human

1:03

race to a safe future, those

1:05

future humans will have arrived at

1:07

a place where they can easily deal with any

1:09

risks that may come. We

1:12

will have made existential risks

1:14

extinct. Taking

1:17

all of this together, everything

1:19

seems to point to the coming century or

1:21

two as the most dangerous

1:23

period in human history. It's

1:26

an extremely odd thing to say

1:29

but together, you, me, and

1:31

everyone we know appear to

1:34

be the most vitally important humans

1:36

who have ever lived, and

1:39

as much as is riding on us, we have

1:42

a lot going against us. We

1:45

are our own worst enemies when it comes

1:47

to existential risks. We come

1:49

preloaded with a lot of biases that keep

1:51

us from thinking rationally. We prefer

1:54

not to think about unpleasant things like the

1:56

sudden extinction of our species. Our

1:58

brains aren't wired to think ahead to

2:01

the degree that existential risks require

2:03

us to. And really, very

2:05

little of our hundred thousand years or so

2:08

of accumulated human experience has

2:10

prepared us to take on the challenge

2:12

that we are coming to face, and

2:15

a lot of the experience that we do have can

2:17

actually steer us wrong. It's

2:19

almost like we were dropped into a point

2:22

in history we hadn't yet become

2:24

equipped to deal with. Yet,

2:26

despite how utterly unbelievable

2:29

the position that we find ourselves in is,

2:31

the evidence points to this as

2:34

our reality. The

2:36

cosmic silence that creates the Fermi

2:38

paradox tells us that we are either

2:40

alone and always have been, or

2:42

that we are alone because no other civilization

2:45

has managed to survive. If

2:47

the latter is true, if the

2:49

Great Filter has killed off every other civilization

2:52

in the universe before they could spread out from

2:54

their home planets, then we will

2:56

face the same impossible step that

2:58

everyone else has before as we attempt

3:01

to move off of Earth. And

3:03

if the Great Filter is real, then it

3:05

appears to be coming our way in the

3:07

form of the powerful technology that

3:09

we are beginning to create right now. But

3:13

even granting that the Great Filter hypothesis

3:15

may be faulty, that we aren't alone,

3:18

that there really is intelligent life elsewhere,

3:21

we still find ourselves in the same

3:23

position. We are in

3:25

grave danger of wiping ourselves

3:27

out. There doesn't appear

3:29

to be anyone coming to guide us through

3:31

the treacherous times ahead. Whether

3:33

we're alone in the universe or not, we

3:36

appear to be on our own in facing

3:38

our existential risks. All

3:41

of our shortcomings and flaws notwithstanding,

3:44

there is hope. We humans

3:46

are smart, wildly ingenious

3:49

creatures, and as much as we

3:51

like to think of ourselves as something higher than

3:53

animals, those hundreds of millions

3:56

of years of animal evolution are

3:58

still very much in our nature.

4:01

And when we're backed into a corner, that

4:03

animal ancestry comes rising

4:05

to the surface. We fight,

4:08

We rail against our demise. We

4:10

survive. If we

4:12

can manage to join that creature habit to

4:15

the intelligence we've evolved that really

4:17

does make us different from other animals, then

4:20

we have a chance of making it through the existential

4:22

risks that lie waiting ahead. If

4:25

we can do that, we will deliver the

4:27

entire human race to a safe

4:29

place where it can thrive and flourish

4:32

for billions of years. It's

4:35

in our ability to do this. We

4:37

can do this. Some of us

4:39

are already trying, and

4:42

we've already shown that we can face down

4:44

existential risks. We've

4:46

done it before. We

4:53

encountered the first potential human made

4:56

existential risk we've ever faced, in

4:58

New Mexico, of all places. On

5:01

July sixteenth, nineteen forty five, at

5:03

just before five thirty in the morning,

5:06

the desert outside of Alamogordo was

5:08

the site of the first detonation of a nuclear

5:10

bomb in human history. They

5:13

called it the Trinity Test. At

5:16

the moment the bomb detonated, the predawn

5:18

sky lit up brighter than

5:20

the sun, and the landscape was eerie

5:23

and beautiful in gold and gray

5:25

and violet, purple and blue. The

5:32

explosion was so bright that one

5:34

of the bomb's designers went blind

5:36

for nearly half a minute from looking directly

5:39

at it. By the blast site,

5:41

the sandy ground instantly turned into

5:43

a green glass of a type that had

5:46

never existed on Earth before that moment.

5:48

They called it trinitite to mark the occasion,

5:51

and then they buried it so no one would find it.

5:55

On this day, at this moment, the

5:57

world was brought into the atomic age,

6:00

an age of paranoia among everyday people

6:02

that the world could end at any moment. In

6:05

less than a month, America would

6:07

explode an atomic bomb over Hiroshima

6:09

in Japan, and sixty five

6:12

thousand people would die in an instant.

6:15

Another fifty five thousand people would

6:17

die from the bomb's effects over the next year,

6:20

and three days after Hiroshima, America

6:22

would drop a second bomb over Nagasaki

6:25

and another fifty thousand people would die.

6:29

But even before all of the death and destruction

6:31

that America wreaked on Japan in August

6:34

of nineteen forty five, even

6:36

before the Trinity test that day in July,

6:39

nuclear weapons became our first potential

6:41

human made existential threat when

6:43

the scientists building the bomb wondered

6:45

if it might accidentally ignite the atmosphere.

6:51

Edward Teller was one of the leading physicists

6:54

working on the Manhattan Project, the

6:56

secret program to build America's first nuclear

6:58

weapons. By chance, Teller

7:01

was also one of the physicists that Enrico

7:03

Fermi was having lunch with when Fermi

7:05

asked where is everybody, and

7:07

the Fermi paradox was born. Teller

7:10

was also pivotal in the nuclear arms

7:12

race that characterized the Cold War by

7:14

pushing for America to create a massive

7:17

nuclear arsenal. In

7:20

nineteen forty two, three years before the Trinity Test, Edward

7:23

Teller raised the concern that perhaps

7:25

the sudden release of energy that the bomb would

7:27

dump into the air might also set

7:29

off a chain reaction among the nitrogen

7:32

atoms in the atmosphere, spreading

7:34

the explosion from its source in New Mexico

7:37

across the entirety of Earth. A

7:40

catastrophe like that would burn the atmosphere

7:42

completely off of our planet, and

7:45

that would of course lead to the sudden

7:47

and immediate extinction of virtually all life,

7:50

humans included. Almost

7:53

immediately, a disagreement over whether

7:55

such a thing was even physically possible

7:57

grew among the physicists on the project.

8:01

Some like Enrico Fermi, were

8:03

positive that it was not possible, but

8:05

others, like Teller in the future

8:07

head of the project, J. Robert Oppenheimer,

8:10

weren't so sure. Eventually,

8:13

Oppenheimer mentioned the idea to Arthur

8:15

H. Compton, who was the physicist

8:18

that was the head of the project at the time. Compton

8:21

found the idea grave enough to assign

8:23

Teller and a few others to figure out

8:25

just how serious the threat of accidentally burning

8:28

off the atmosphere really was. The

8:30

group that worked on the calculations wrote

8:33

a paper on the possibility that the bomb

8:35

could set off a nuclear chain reaction in Earth's

8:37

atmosphere, igniting it. Even

8:40

using assumptions of energy that far

8:42

exceeded what they expected their tests to produce,

8:45

the group found that it was highly unlikely

8:48

that the bomb would ignite the atmosphere.

8:51

Two years later, when the bomb was ready,

8:54

they detonated it. The

8:58

morning of the Trinity test, Enrico

9:00

Fermi took bets on whether the atmosphere

9:02

would ignite after all. It

9:08

is to his credit that Arthur Compton took

9:10

the possibility of the nuclear test

9:12

igniting the atmosphere seriously. The

9:15

scientists and military people working

9:17

on the secret atomic bomb project had

9:20

every incentive to keep pushing forward

9:22

at any cost. At the time,

9:25

it was widely believed that Hitler and the Third

9:27

Reich were closing in on creating an atomic

9:30

bomb of their own, and when they completed

9:32

it, they would surely savagely unleash

9:34

it across Europe, Africa, the

9:36

Pacific, and eventually the United

9:39

States. In nineteen forty two,

9:41

when the idea that the bomb might ignite the atmosphere

9:44

was first raised, it was far from

9:46

clear who would be left standing when

9:48

the Second World War was over. And

9:51

yet Compton decided that

9:53

the potential existential threat the

9:55

nuclear test may pose would be the

9:57

worst of any possible outcomes.

10:00

He didn't call it an existential threat,

10:02

but he knew one when he saw one, even

10:05

the first one. Better

10:07

to accept the slavery of the Nazis than

10:09

to run the chance of drawing the final curtain

10:11

on mankind, Compton said in

10:13

an interview with the writer Pearl Buck years

10:16

after the test in nineteen fifty nine. And

10:19

so it would appear that the first human

10:21

made existential risk we ever faced

10:23

was handled just about perfectly. But

10:27

there's still a lot left to unpack here.

10:32

Buck reported that Compton had drawn a

10:34

line in the sand, as it were. He

10:36

established a threshold of acceptable

10:39

risk. He told the physicists

10:41

working under him that if there was a greater

10:43

than a three in a million chance the

10:45

bomb would ignite the Earth's atmosphere, they

10:47

wouldn't go through with testing it. It's

10:50

not entirely clear what Compton based that

10:53

threshold on. It's not even

10:55

clear if the threshold was a three in a million

10:57

chance or a one in a million, and

11:00

some of the Manhattan Project physicists

11:02

later protested that there wasn't any chance,

11:05

that either Compton had misspoken or

11:07

Buck had misunderstood. Regardless,

11:10

the group that wrote the safety paper found

11:13

that there was a non zero possibility that

11:15

the test could ignite the atmosphere, meaning

11:18

there was a chance, however slight, that

11:20

it could. It was possible

11:22

for such a chain reaction to occur. After

11:25

all, the atmosphere is made of energetic

11:27

vibrations that we call particles,

11:29

and those particles do transfer energy

11:31

among themselves, but the energies

11:34

involved in the nuclear bomb should be far

11:36

too small. The paper writers concluded it

11:39

would take perhaps a million times more energy

11:41

than their plutonium core was expected

11:44

to release. For some

11:46

of the scientists, the chance was so small

11:48

that it became transmuted in their minds

11:50

to an impossibility. They

11:53

rounded that figure down for convenience's

11:55

sake. The chance was so small

11:57

that to them there might as well have

11:59

been no chance at all. But

12:02

as we've learned in previous episodes, deciding

12:05

what level of risk is an acceptable

12:07

level of risk is subjective. There

12:10

are lots of things that have much less

12:12

of a chance of happening than three in a

12:14

million odds of accidentally igniting

12:17

the atmosphere. If

12:19

you live in America, you have a little less

12:21

than a one in a million chance of being struck

12:23

by lightning this year. You have

12:25

a roughly one in two hundred and ninety

12:27

million chance of winning the Powerball.

12:31

Each person living around the world has something

12:33

like a one in twenty seven million chance

12:36

of dying from a shark attack during their lifetime.
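
As a rough illustration of that comparison, assuming each figure can be treated as a simple probability, the arithmetic looks something like this. The odds and the two point three billion population figure are the ones quoted in this episode; everything else is illustrative.

# Illustrative sketch only: compare the odds quoted above and show
# why "considering the stakes" matters.

odds = {
    "struck by lightning this year": 1 / 1_000_000,          # "a little less than" one in a million
    "winning the Powerball": 1 / 290_000_000,
    "dying from a shark attack (lifetime)": 1 / 27_000_000,
    "Trinity test igniting the atmosphere": 3 / 1_000_000,   # Compton's reported threshold
}

for event, p in sorted(odds.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{event:40s} p = {p:.2e}")

# "Considering the stakes": a tiny probability times everything at risk.
# The episode puts the world population at the time at about 2.3 billion people.
p_ignite = 3 / 1_000_000
people_alive = 2_300_000_000
print(f"expected lives lost = {p_ignite * people_alive:,.0f}")  # roughly 6,900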

12:39

Depending on your perspective, a three

12:41

in a million chance of bringing about

12:43

the sudden demise of life on Earth from a nuclear

12:46

test isn't necessarily a small

12:48

chance at all, especially considering

12:51

the stakes. And

12:53

yet it was up to Compton to decide

12:55

for the rest of us that the test was worth

12:57

the risk. Arthur Holly

13:00

Compton, aged sixty, living

13:02

in Chicago, Illinois, a Nobel

13:04

Prize winning physicist, father of two

13:07

and tennis enthusiast, was put

13:09

in a position to decide for the rest of

13:11

the two point three billion humans alive

13:13

at the time that three chances

13:15

in a million their project might blow

13:17

up the atmosphere was an acceptable

13:20

level of risk. The

13:28

idea that a single person can make a decision

13:30

that affects the entire world is a

13:32

hallmark of existential risks. Not

13:35

only does the existential risk pose a threat,

13:38

but the very fact that a single human being

13:40

is making the decision, with all of their

13:42

biases and flaws and stresses,

13:45

puts us all at risk as well. There

13:48

were a number of different pressure points that the

13:50

people involved in the Manhattan Project would

13:52

have felt pushing them towards the decision

13:54

to carry out the test. There

13:56

were the Nazis, for one, and the pressure

13:59

from the U.S. military to save the world

14:01

from the Nazis. Their

14:03

careers and reputations were at stake. There

14:06

was also the allure of a scientific challenge.

14:09

No one had ever done what the people working

14:11

on the Manhattan Project did up

14:13

to the moment of the trinity test. No

14:15

one was entirely sure that a nuclear explosion

14:18

was even possible. Consciously

14:21

or not, these things influenced the

14:23

decisions of the people working on the project.

14:26

This is not to say that there was any cavalier

14:28

disregard for the safety of humanity.

14:31

They took the time to study the issue rather

14:33

than just brushing it off as impossible after

14:35

all. But the point is

14:37

that just a handful of people working

14:40

in secret were responsible

14:42

for making that momentous decision, and

14:44

those people were only human.

14:50

It's also worth pointing out that a lot of the

14:52

science that the safety paper writers used

14:55

was very new at the time. The

14:58

nuclear theory they were working off of was

15:00

less than forty years old, the

15:02

data they had on fission reactions was less

15:04

than twenty years old, and the first

15:06

sustained nuclear fission reaction wasn't

15:09

carried out until nineteen forty two, when

15:11

Fermi held the first test on that squash

15:13

court at the University of Chicago. And

15:16

don't forget there had never been a nuclear

15:19

explosion on Earth before. All

15:22

of that newness, by the way, showed up

15:24

during the Trinity test, when the bomb

15:26

produced an explosive force about

15:28

four times larger than what the project

15:31

scientists had expected. All

15:35

of this is to say that the data and understanding

15:37

of what they were attempting with the Trinity test

15:40

was still young enough that they could have gotten

15:42

it wrong, and we find

15:44

ourselves in that same situation today.

15:47

We see it in the types of experiments that are

15:49

carried out in particle colliders and bio

15:51

safety labs around the world. We

15:54

see it in the endless release of

15:56

self improving neural nets. Our

15:59

understanding of the unprecedented

16:01

risks these things pose is lacking

16:03

to a dangerous degree. Depending

16:07

on how the chances of a risk changes, the

16:09

threat it poses can get larger

16:12

or smaller, but really the

16:14

reality of the threat stays the same. It's

16:17

our awareness of it that changes. Awareness

16:21

is the way we will survive becoming

16:23

existential threats.

16:35

There

16:38

are two ways of looking at our prospects

16:40

for making it to a state of technological

16:43

maturity for humanity where

16:45

we have safely mastered our technology

16:47

and can survive beyond the next century

16:49

or two. Gloom and doom

16:52

and optimism.

16:54

The gloom and doom camp makes a pretty

16:56

good case for why humans won't make

16:58

it through this, the greatest challenge

17:01

our species will ever face. There's

17:04

the issue of global coordination, the

17:06

kind of like-mindedness that we'll have to create

17:08

among every country in the world to successfully

17:11

navigate the coming risks. Like

17:14

we talked about in the last episode, we

17:16

will almost certainly run into problems

17:18

with global coordination. Some

17:20

nations may decide that they'd be better off going

17:22

it alone and continuing to pursue

17:25

research and development that the rest of

17:27

the world has deemed too risky.

17:29

This raises all sorts of prickly questions

17:32

that we may not have the wherewithal to address.

17:35

Does the rest of the world agree that we should

17:37

invade non complying countries and

17:40

take over their government? In

17:42

a strictly rational sense, that's

17:44

the most logical thing to do. Rationally

17:47

speaking, toppling a single government,

17:49

even a democratically elected one, is

17:52

a small price to pay to prevent

17:54

an existential risk that can drive

17:56

humanity as a whole to permanent

17:58

extinction. But we

18:01

humans aren't strictly rational, and

18:03

something as dire as invading a country

18:06

and toppling its government comes with

18:08

major costs, like the deaths

18:10

of the people who live in that country and

18:13

widespread disruptions to their social

18:15

structures. If the chips

18:17

are down, would we go to such

18:19

an extreme to prevent our extinction? There's

18:23

also the issue of money. Money

18:26

itself is not necessarily the problem.

18:28

It is what funds scientific endeavors.

18:31

It's what scientists are paid with. Money

18:33

is what we will pay the future researchers

18:35

who will steer us away from existential risks.

18:38

The Future of Humanity Institute is funded

18:41

by money. The problem money poses

18:43

where existential risks are concerned

18:45

is that humanity has shown that we are willing

18:48

to sell out our own best interests

18:50

and the interests of others for money

18:53

and market share, or more

18:55

commonly, that we're willing to stand by

18:57

and let others do it, and

19:00

with existential risks, greed

19:02

would be a fatal flaw. Everything

19:05

from the tobacco industry to the fossil fuel

19:07

industry, the antifreeze industry,

19:10

to the infant formula industry,

19:12

all of them have a history of avarice,

19:15

of frequently and consistently

19:17

putting money before well being and

19:20

on a massive and global scale. How

19:23

can we expect change when money

19:25

is just as tied to the experiments

19:27

and technology that carry an existential

19:30

risk? Also

19:32

stacked against us is the bare fact

19:34

that thinking about existential risks

19:37

is really really hard.

19:40

Analyzing existential threats demands

19:43

that we trace all of the possible outcomes

19:45

that thread from any action we might take,

19:48

and look for unconsidered dangers

19:50

lurking there. They require

19:52

us to think about technology that hasn't

19:54

even been invented yet, to look

19:56

a few more moves ahead on the cosmic chessboard

19:59

than we're typically capable of seeing. To

20:03

put it mildly, we're not really

20:05

equipped to easily think about existential

20:07

risks at this point. We

20:10

also have a history of overreliance on

20:12

techno optimism, that idea

20:14

that technology can save us from any

20:16

crisis that comes our way. Perhaps

20:19

even thinking that reaching the point of technological

20:22

maturity will protect us from existential

20:24

risks is nothing more than an example

20:27

of techno optimism. And

20:29

as we add more existential risks

20:31

to our world, the chances increase

20:34

that one of them may bring about

20:36

our extinction. It's

20:38

easy to forget since it's a new way of living

20:40

for us, but the technology we're developing

20:43

is powerful enough and the world is

20:45

connected enough that all it will take

20:48

is one single existential catastrophe

20:51

to permanently end humanity. If

20:54

you take the accumulated risk from

20:57

all of the biological experiments in

20:59

the unknown number of containment labs around

21:01

the globe, and you add it to

21:03

the accumulated risks from all

21:05

of the runs in particle colliders online

21:08

today and to come, and

21:10

you add the risks from the vast number

21:12

of neural nets capable of recursive

21:14

self improvement that we create and

21:16

deploy every day. When

21:18

you take into account emerging technologies

21:21

that haven't quite made it to reality yet, like

21:23

nanobots and geoengineering

21:25

projects, and the many more

21:28

technologies that will pose a risk that

21:30

we haven't even thought of yet. When

21:32

you add all of those things together, it

21:35

becomes clear what a precarious

21:37

spot humanity is truly in.
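
As a rough illustration of how those individually small risks can pile up, assuming each experiment, collider run, or deployment carries some small independent chance of catastrophe, the combined probability grows with the number of events. The one in a million per-event figure below is a placeholder assumption, not a number from this episode.

# Illustrative sketch only: how many small, independent risks add up.

def cumulative_risk(per_event_probability: float, number_of_events: int) -> float:
    """P(at least one catastrophe) = 1 - P(none occur), assuming independence."""
    return 1.0 - (1.0 - per_event_probability) ** number_of_events

# Placeholder figure: suppose each risky run or deployment carried a
# one-in-a-million chance of catastrophe.
p = 1e-6

for n in (1, 1_000, 100_000, 1_000_000):
    print(f"{n:>9,} events -> cumulative risk = {cumulative_risk(p, n):.4%}")

# One event is negligible (0.0001%); a million of them pushes the
# combined chance past sixty percent.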

21:41

So you can understand how a person might look

21:43

at just how intractable the problem

21:45

seems and decide that our doom

21:47

is complete. It just hasn't happened

21:50

yet. I

21:55

think we can be a bit more optimistic than that. This

21:58

is Toby Ord again, one of the earliest

22:00

members of the Future of Humanity Institute.

22:03

Yeah, I think that this is

22:05

actually a clear and obvious enough idea

22:08

that people will wake up to it and

22:11

embrace it. Uh much more

22:13

slowly than we should. But I think

22:15

that uh we will realize that

22:17

this is a central moral issue of our time and

22:20

rise to the challenge. But to begin

22:22

to rise to the challenge, we need

22:24

to talk about existential risks

22:27

seriously. The way that anything

22:29

changes, the way an idea or an issue

22:31

comes to be debated and its merits

22:34

examined, is that people start talking

22:36

about it. If

22:38

this series has had any impact on

22:40

you, and if you have, like I

22:42

have, come to believe that humanity

22:45

is facing threats to our existence that are

22:47

unprecedented, with consequences

22:49

that, on the whole we are dangerously ignorant

22:51

of, then it is imperative

22:54

that we start talking about those things. You

22:56

can start reading the articles and papers

22:59

that are already being written about them,

23:01

start following people on social media who

23:03

are already talking about existential risks,

23:06

like David Pearce and Eliezer Yudkowsky

23:09

and Sebastian Farquhar. Start

23:11

asking questions about existential risks

23:13

of the people we elect to represent

23:15

us. I think we often feel

23:18

that the powers that be must

23:20

already have these things in hand. But

23:23

when I've talked with government about

23:25

existential risk, even

23:28

a major national government

23:30

like the United Kingdom, they

23:33

tend to think that these issues saving civilization

23:35

and humanity itself are above their pay

23:38

grade. Uh, and not really something

23:40

they can deal with in a

23:42

five year election cycle. Um.

23:44

But then it turns out there's no one else above

23:46

them dealing with them either. So I think that

23:48

there's more of a threat from complacency

23:51

in thinking that someone must have this managed. In

23:55

a rational world, someone would. It's

23:58

up to the rest of us, then, to start

24:00

a movement. The

24:04

idea of a movement to get humanity to

24:06

pay attention to existential risks

24:08

sounds amorphous and far off, but

24:11

we've founded movements on far off ideas

24:14

before. If enough people

24:16

start talking, others will listen.

24:19

Just a handful of books got the environmental

24:21

movement started, like the ones written

24:24

by the Club of Rome and Paul Ehrlich, but

24:26

especially Rachel Carson's nineteen

24:29

sixty two book Silent Spring, which

24:31

warned of the widespread ecological

24:33

destruction from the pesticide

24:35

DDT. Carson's

24:37

book is credited with showing the public

24:40

how fragile the ecosystems of the natural

24:42

world can be and how much of an effect

24:45

we humans have on them.

24:47

Awareness of things like fertilizer

24:49

runoff, deforestation, indicator

24:52

species, concepts that you can find

24:55

being taught in middle schools today,

24:57

were unheard of at the beginning of

24:59

the nineteen sixties. Most

25:01

people just didn't think about things like that.

25:04

But when the environmental movement began to

25:06

gain steam, awareness of environmental

25:09

issues started to spread. Within

25:12

a decade of Silent Spring's release,

25:14

nations around the world started opening

25:16

government agencies that were responsible

25:19

for defending the environment. The

25:22

world went from ignorance about environmental

25:24

issues to establishing policy

25:27

agencies in less than ten years.

25:30

And I think that that we could do some of that, and it

25:32

really shows that it is possible

25:34

to take something which is not really part of common

25:36

sense morality, and then within a generation,

25:39

uh children are being raised everywhere

25:41

with this as part of just a background of beliefs

25:44

about ethics that that they live with. So

25:46

I really think that we could achieve them. There

25:48

is much work to be done with environmental

25:50

policy, that is definitely granted, but

25:53

we are working on it. Nations

25:55

around the world on their own and

25:58

together are spending money

26:00

to pay scientists and researchers

26:02

to study environmental issues, come

26:05

up with an up to the moment understanding

26:07

of them, and establish best

26:09

practices for how to protect Earth from

26:11

ourselves. The trouble

26:14

comes when we decide not to listen to the scientists

26:16

that we've asked to study these problems. Existential

26:20

risks call for this same kind of initiative.

26:24

We have to establish a foundation, provide

26:26

a beginning that those who follow

26:28

can build upon. Just

26:31

like Eric Drexler posed the rather unpopular

26:34

gray goo scenario regarding nanobot

26:36

design, just like Eliezer

26:38

Yudkowsky and Nick Bostrom

26:40

identified that AI should have friendliness

26:43

designed into it, just like others

26:45

have raised the alarm about risks from biotech

26:48

and physics, if we examine

26:50

the problems we face, we can understand

26:53

the risks that they pose. And

26:55

if we understand the risks that they pose, then

26:57

we can make an informed decision about

27:00

whether they're worth pursuing. The

27:03

scientists working on the Manhattan Project

27:05

did the same thing when they took the possibility

27:08

seriously that they might accidentally

27:10

ignite the atmosphere, so

27:12

they investigated the problem to see if

27:14

it would. We don't

27:16

at this point have a clue as

27:19

to what the possible outcomes of our future

27:21

technology may be, and

27:23

trying to guess at something like that today

27:25

would be like guessing back in the nineteen fifties

27:28

about what effects clear cutting old

27:30

growth forests in the Amazon Basin

27:32

would have on global cloud formation. It's

27:35

just too arcane a question for a

27:37

time when we don't have enough of the information

27:40

we need to respond in any kind

27:42

of informed way. We don't

27:44

even know all of the questions to ask at

27:46

this point, but it's up

27:48

to us alive now to start figuring

27:51

out what those questions are. Working

27:54

on space flight is another good example

27:56

of where we can start. Among

27:59

people who study existential risks, it

28:01

is largely agreed that we should

28:03

begin working on a project to get humanity

28:05

off of Earth and into space as

28:08

soon as possible. Working

28:10

on space colonization does a couple of things

28:12

that benefit humanity. First,

28:15

it gets a few of our eggs out of the single

28:17

basket of Earth, so should an

28:19

existential risk befall our planet, there

28:22

will still be humans living elsewhere to carry

28:24

on. And Second, the

28:26

sooner we get ourselves into space, the

28:29

larger our cosmic endowment will be. One

28:32

of the things we found from studying the universe

28:35

is that it appears to be expanding outward

28:37

and apart over deep

28:39

time scales, the kind of time scales

28:42

we humans will hopefully live for. That

28:44

could be an issue because eventually

28:46

all of the matter in the universe will

28:49

spread out of our reach forever. So

28:51

the sooner we get off Earth and out into the

28:53

universe, the more of that material

28:56

we will have for our use to do with

28:58

whatever we can dream up. We

29:02

are not going to colonize space tomorrow.

29:04

It may take us hundreds of years of effort,

29:07

maybe longer, but that's exactly

29:09

the point. A project that is so

29:11

vital to our future shouldn't be put

29:13

off because it seems far off. The

29:16

best time to begin working on a space colonization

29:19

program was twenty years ago. The

29:21

second best time is today. We

29:25

are working on getting to space. True,

29:27

but there's a world of difference between the piecemeal

29:30

efforts going on across Earth now and

29:32

the kind of project we could come up with if

29:35

we decided to put a coordinated global

29:37

human effort behind spreading out

29:39

into space. Imagine

29:41

what we could achieve if humanity worked

29:43

together on what would probably be our

29:46

greatest human project. Imagine

29:48

the effect that it would have on people across

29:51

the globe if we worked together

29:53

to get not a nation, not a

29:55

hemisphere, but the human race itself

29:58

into space. The

30:00

same holds true with virtually every project

30:03

for taking on existential risks. We

30:05

should begin working on them as soon as possible

30:07

to build a foundation for the future, and

30:10

we should make tackling them a global

30:12

effort. I

30:22

hope by now I've made it abundantly clear that

30:25

subverting scientific progress won't

30:27

protect us from existential threats. The

30:29

opposite is true. We need a

30:31

scientific understanding of the coming

30:34

existential threats we face to get

30:36

past them. The trick

30:38

is making sure that science is done

30:40

with the best interests of the human race in mind.

30:44

It's not something we commonly think of ourselves

30:46

as, but you and I and everyone

30:49

else in the world is a stakeholder in

30:51

science. And this is truer

30:53

than ever before with the rise of existential

30:56

threats, since the whole world can

30:58

be affected by a single experiment now.

31:01

In an article in The Bulletin

31:04

of the Atomic Scientists, physicist

31:06

H. C. Dudley criticized Arthur Compton

31:09

and the Manhattan Project for their decision

31:11

that a three in a million chance was an

31:14

acceptable risk for detonating the first nuclear

31:16

bomb. They were all rolling

31:18

dice for high stakes, and the rest

31:20

of us did not even know we were sitting in the game.

31:23

Dudley wrote. The same is

31:25

true today in making assumptions

31:27

about whether cosmic rays make an acceptable

31:29

model for proton collisions in the Large Hadron

31:32

collider, or that forcing a mutation

31:34

that makes an extremely deadly virus easier

31:37

to pass among humans is a good way

31:39

forward in virology. Those

31:41

scientists are making decisions

31:43

that have consequences that may affect all

31:45

of us, so we should have a

31:47

say in how science is done. Science

31:50

is meant to further human understanding and

31:53

to improve the human condition, not

31:55

to further the prestige of a particular scientist's

31:58

career. When those two conflict,

32:01

humanity should come first. But

32:04

the say that the public has in how science

32:06

is done has to be an informed say,

32:09

no pitchforks and torches. This

32:12

is why a movement that takes existential risks

32:14

seriously requires trustworthy,

32:17

skilled, trained scientists

32:19

to make our say an informed one.

32:22

We rely on them for that. Science

32:25

isn't the enemy. If we abandon

32:27

science, we are doomed. If

32:29

we continue to take the dangers of science casually,

32:32

we are doomed. The only route

32:35

through the near future is to do science

32:37

right, and scientists

32:39

aren't the enemy either. They

32:42

have often been the ones who have sounded the alarm when

32:44

science was being done recklessly or

32:46

when a threat emerged that had been overlooked.

32:49

Those physicists who decided that three

32:51

in a million was an acceptable chance of

32:54

burning off Earth's atmosphere were

32:56

the same ones who figured out that there was something

32:58

to be concerned with in the first place. It

33:01

was microbiologists who called for a

33:03

moratorium on gain of function research

33:06

after the H5N1 experiments.

33:09

It was particle physicists who wrote

33:11

papers questioning the safety of the Large

33:13

Hadron Collider. If

33:15

you're a scientist, start looking seriously

33:18

at the consequences of your field, and

33:20

if work within it poses an existential

33:22

risk, start writing papers about

33:24

it. Start analyzing how

33:26

it can be made safe. Take

33:28

custody of the consequences of your

33:30

work. The people who are dedicated

33:32

to thinking about existential risks

33:35

are waiting for you to do that. This

33:37

is Sebastian Farquhar. To a

33:39

certain extent, organizations like

33:42

the FHI, the Future of Humanity Institute

33:45

UM their job is just to

33:47

poke the rest of the community and sort of

33:50

say by the way this this is a thing, and

33:53

then for AI researchers

33:55

or biology researchers to take

33:57

that on and to make it their own projects.

34:00

Um and the sooner and

34:03

the more FHI can step

34:05

out of that game and leave it to those communities,

34:07

the better. Many of these solutions

34:10

are already being worked on. Scientists

34:12

around the world are researching large problems

34:15

and raising alarms. But since

34:17

we have a limited amount of time, since

34:19

we're racing the clock, we have

34:21

to make sure that we don't waste time working

34:24

on risks that seem big but don't

34:26

qualify as genuine existential

34:28

threats, and we can't tell

34:30

one type from the other until we start

34:33

studying them.

34:35

The biggest sea change, though, has to come

34:37

from society in general. We

34:39

have to come together like we never have before.

34:42

We have to put scientists in a position

34:45

to understand existential risks, and

34:47

we have to listen to what they come back and tell us.

35:00

It is astoundingly coincidental

35:02

that at the moment in our history when we

35:04

become aware just how brief

35:06

our time here has been and just

35:09

how long it could last, we

35:11

also realize that our history

35:13

could come to an early permanent end, very

35:16

soon. At the beginning

35:18

of the series, I said that if we go

35:20

extinct in the near future, it would

35:22

be particularly tragic, and

35:24

that is true. Human

35:27

civilization has been around only

35:29

ten thousand years. And

35:31

remember that a lot of people who think humanity

35:33

could have a long future ahead of us believe

35:36

that there could be at least a billion years left

35:39

in the lifetime of our species. If

35:42

we've created almost every bit

35:44

of our shared human culture over

35:46

just the last ten thousand years or so, developed

35:49

everything it means to be a human alive

35:52

today in that short time span,

35:54

think about what we could become and

35:57

what we could do with another nine

35:59

and ninety

36:02

thousand years. It

36:05

is not our time to go yet. There

36:13

is something we have to consider. The

36:16

great filter has to this point

36:18

been total. It is

36:21

possible that even if we come together,

36:23

even if humanity takes our existential

36:26

risks head on, that it won't

36:28

be enough. That there will

36:30

be something we miss, some detail

36:32

we hadn't considered, some new

36:34

thing that grabs us by our ankle just

36:37

as we are making it through and plucks

36:39

us right out of existence. If

36:42

we go, then so many unique

36:44

and valuable things go with us. The

36:47

whole beautiful pageant of humanity

36:50

will come to an end. There

36:52

will be no one to sing songs anymore, no

36:54

one to write books and no one to read

36:57

them. There will be no one to

36:59

cry, no one to hug them when they do.

37:02

There will be no one to tell jokes and no

37:04

one to laugh. There will

37:06

be no friends to share evenings with, and

37:09

no quiet moments alone at sunrise.

37:12

Good or bad, everything we've

37:14

ever done will die with us. There

37:17

will be no one to build new things,

37:19

and the things that we have built will eventually

37:21

crumble into dust. Those

37:24

energetic vibrations that make up us

37:26

and everything we've ever made will disentangle

37:29

and go their separate ways along their quantum

37:31

fields, to be taken up into new

37:33

forms down the line, in a universe

37:36

where humans no longer exist. If

37:40

we go, it seems that intelligence

37:43

dies with us. There will

37:45

be nothing left to wonder at the profound

37:47

vastness of existence and

37:49

appreciate the extraordinary gift that

37:52

life is. There will

37:54

be no one with the curiosity to seek out

37:56

answers to the mysteries of the universe, no

37:59

one to even know that the mysteries exist. There

38:02

will be no one to reciprocate when

38:05

the universe looks in on itself, there

38:07

will be nothing looking back at it. But

38:12

as genuinely sad as the idea

38:14

of humanity going extinct forever is,

38:17

we can still take some comfort in the future

38:19

for the universe. We

38:22

can take heart that if we die, life

38:25

will almost certainly continue

38:27

on without us. Remember,

38:29

life is resilient. Over

38:32

the course of its tenure on Earth, life

38:34

has managed to survive at least five

38:37

mass extinctions that killed off

38:39

the vast majority of the creatures alive on

38:41

Earth at the time. The life

38:43

on Earth today is descended from

38:45

just that fraction of a fraction of a fraction

38:48

of a fraction of a fraction of life

38:50

that managed to hang on through each

38:53

of the times Death visited Earth, and

38:56

every time after Death left,

38:58

life poked its head back, came

39:00

back up to the surface, and began

39:03

to flourish again. If

39:06

we humans call death back to our planet,

39:08

life will retreat to its burrows and

39:10

to the bottom of the sea to hide

39:13

until it's safe to re emerge. And

39:16

perhaps when it does emerge again, one

39:18

of the members of that community of life that

39:20

survives us will rise to take

39:23

our place, to fill the void

39:25

that we've left behind, just like

39:27

we filled the void left after

39:29

the last mass extinction. Perhaps

39:32

some other animal we share the Earth with now will

39:35

evolve to become the only intelligent

39:37

life in the universe and take

39:39

their chance at making it through

39:41

the Great Filter. Perhaps

39:44

someday they will build their own ships that

39:46

will break their bonds to Earth and

39:48

take them into space in search

39:51

of new worlds to explore, just

39:54

like we humans tried so

39:56

long before.

40:01

The

40:15

End of the World with Josh Clark is a production

40:17

from HowStuffWorks and iHeartMedia. It

40:20

was written and presented by Me Josh Clark.

40:22

The original score was composed, produced

40:25

and recorded by Point Lobo. The

40:27

head sound designer and audio engineer was

40:29

Kevin Senzaki. Additional sound

40:31

designed by Paul Funera. The supervising

40:34

producer was Paul Deckan. A very

40:36

special thanks to Umi Clark for her assistance

40:38

and support throughout the series' production,

40:41

and to Momo too. Thank you

40:43

to everyone at the Future of Humanity Institute,

40:46

and thanks to everyone at How Stuff Works for

40:48

their support and especially Sherry

40:50

Larson, Jerry Rowland, Conal

40:52

Byrne, Pam Peacock, Nathan

40:54

Natoski, Tary Harrison, Ben

40:57

Bolden, Tamika Campbell, Noel

40:59

Brown, Jenny Powers, Chuck

41:01

Bryant, Christopher Hasiotis,

41:04

Eves Jeffcoat, Matt Frederick, Tom

41:06

Boutera, Chris Blake, Lyle

41:09

Sweet, Ben Juster, John

41:11

go Forth, Mark fresh Hour, Britney

41:13

Bernardo and Keith Goldstein. Thank

41:16

you to the interviewees, research assistants

41:19

and vocal contributors Dana Backman,

41:21

Stephen Barr, Nick Bostrom,

41:24

Donald Brownlee, Philip Butler,

41:26

Coral Clark, Sebastian Farquhar,

41:29

Toby Halbrook, Robin Hanson,

41:32

Eric Johnson, Don Lincoln,

41:34

Michelangelo Mangano, David

41:37

Madison, Matt McTaggart, Ian

41:39

O'Neill, Toby Ord, Casey

41:42

Pegram, Anders Sandberg, Kyle

41:45

Scott, Ben Schlayer, Seth

41:47

Shostak, Tanya Singh,

41:49

Ignacio Taboada, Beth

41:52

Willis, Adam Wilson, cat Sebis,

41:54

Michael Wilson, cat Sebas, and Brett

41:57

Wood. And thank you for

41:59

listening.
