Infohazards | Could This Thought Experiment Harm You?

Released Monday, 9th September 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:02

Sergeant and Mrs. Smith, you're going to love

0:04

this house. Is that a tub in the kitchen? Is that a tub

0:06

in the kitchen. There's no field

0:08

manual for finding the right home. but when

0:10

you do, USAA homeowners insurance can

0:12

help protect it the right way.

0:14

Restrictions apply. What

0:25

if simply knowing a piece of information could put

0:27

you at risk? In recent

0:29

years, this idea has captured the

0:31

internet's attention and imagination, leading

0:33

to philosophical discussions and new levels

0:35

of scary stories. Today,

0:38

let's discuss the concept known

0:40

as info hazards. This

0:42

is Red Web. Welcome

0:53

back, Task Force Two, episode 200 of

0:55

Red Web, the podcast all about unsolved

0:58

mysteries, true crime and the supernatural. It

1:00

is my personal journey to explore the

1:02

unknown and figure out: are ghosts real,

1:04

are aliens real, and what's going on

1:07

on the Internet. I'm

1:09

your resident mystery enthusiast, Trevor Collins, and

1:11

joining me for his 200th episode with

1:13

me, Alfredo Diaz. 200, the big O, two

1:15

double O. Man,

1:20

if I had a nickel for every episode, what would I have?

1:24

How much is that? Mm hmm. 200

1:26

nickels. 200 nickels? Two

1:30

glasses of milk, please. Which

1:32

would buy us nothing. Maybe something from

1:35

the dollar menu. I

1:37

just did the math quickly. Anyway, yeah,

1:39

episode 200 is... it's

1:43

a good feeling. Thank

1:46

you, Task Force for everything. This is

1:48

the episode where people look back 10

1:50

years from now and the task force

1:53

is a giant conglomerate and people

1:55

will go. The

1:57

task force rebellion saga will start and people

1:59

will go. I miss the good old days. I

2:01

miss episode 200 and below. Right,

2:04

I can feel that nostalgia coming. They're

2:07

gonna ask questions commonly, like where were

2:09

you at the decision of episode 200?

2:12

Which side of the civil war did you land

2:14

on? Yeah. You know,

2:16

we don't know the inciting incident yet. This

2:19

hasn't launched. We're recording this ahead of time.

2:21

These are not live. So we'll just have to

2:23

figure out which side we land on. It

2:25

probably comes with the demise of

2:27

Christian. From there, the resurrection

2:29

through AI. Oh. That

2:32

kind of makes sense. Well, don't forget the ghouls. There's

2:34

always ghouls. There's definitely some ghouls. I don't know what

2:36

side they're on. I'm on the other side. What a

2:38

tangent. It is 200 episodes deep.

2:42

We're now fully independent. Our Phoenix wings

2:44

have stretched wide. We've floated around this

2:46

planet a couple of times now, just

2:48

in this independent form. But

2:51

yeah, 200 episodes, it's wild. It's wild to look

2:53

back genuinely on all of the

2:55

episodes and topics we've covered. Today is no

2:57

different. This one harkens back all the way to

2:59

episode one. It is our bread and butter. It's

3:01

the internet. It is so intriguing

3:03

to me. And it's what we always call,

3:06

it's the next frontier of mysteries where

3:08

new stories unfold, new ideas are discussed,

3:10

and the virality of it all is

3:12

just so fascinating. So I felt like

3:14

this is something that I've been wanting

3:16

to talk about for quite some time.

3:19

It is my Holy Roman Empire,

3:22

infohazards, just this entire

3:24

concept of infohazards. Have you

3:26

ever heard of that term, by the way? I

3:28

think very vaguely once,

3:32

but is it simply just info,

3:34

like you know too much?

3:37

I mean, very simply, yeah, essentially that

3:39

concept. And of course, we'll walk you

3:41

into the etymology, like where this came

3:43

from, what it means, all the different

3:46

subdivisions of infohazards, and then of course,

3:48

at the backend, instead of theorizing, we're

3:50

gonna give you hands-on examples of some

3:52

common ones that I'm sure you, Fredo,

3:54

don't know about, but also the task

3:56

force might kind of recognize, and

3:59

then one that... you might not have

4:01

heard about that I've been wanting to talk

4:03

about for years now. Christian can attest to

4:05

that. I've just kept it on

4:07

the docs for a minute now. Oh,

4:09

so info that I shouldn't be knowing. Well,

4:11

I will now know. You will now know.

4:14

You will then be at risk, wherever that

4:16

risk is. Yep. And that

4:18

is the sensitive topic of today is that

4:20

we're talking about info hazards. It's just hazards

4:22

all over the place. That's okay. That's true.

4:24

We do work in a

4:26

task force HQ that's subterranean and constantly remodeling.

4:29

We got upturned boards with exposed nails

4:31

in them. Physical

4:35

hazards, but you did sign the safety

4:39

warning. So you're locked in. Right. We did

4:41

zap your memory of you signing it, but

4:43

you did sign it. Just trust. Now,

4:46

speaking of the task force, I want to

4:48

give a huge shout out to our Patreon

4:50

members over at patreon.com/redweb. It is the

4:52

best way to support us. You get this

4:54

podcast ad free. You get exclusive discord events.

4:56

You have access to our exclusive podcast, movie

4:59

club, all sorts of bonus behind the scenes

5:01

stuff. Just a lot of things to thank

5:03

you for supporting us directly. And

5:05

some shout outs from the task force include Sarah

5:07

Jade 86. We

5:09

got Noah true. And then of course,

5:12

our elite squanks coming out of the

5:14

swamps, be wick, bromen G, Zara through

5:16

Strah, to name a few. Thank you

5:19

guys so much. And, uh,

5:21

and if you want to get your gear

5:23

going, you know, as we go into the

5:25

chillier days, we've got a, we got merch at

5:27

store.redwebpod.com. You know,

5:29

uh, I have said

5:31

the URL wrong before, but now I've

5:33

rewritten the post-it note. And it is

5:35

correct. We're all good now. We're

5:37

all good. We're locked in. Get yourself

5:40

something nice. Rep task force. I like

5:42

to think that everyone that you named

5:44

is living, living like just little luxuries

5:46

at task force HQ, like pens that

5:48

don't wear out of ink. You know

5:50

what I mean? Oh my God. Infinite

5:53

ink. Yeah. Yeah. A cup that never,

5:55

uh, runs out, you know, a sippy

5:57

mug that just, it's bottomless. Yeah.

6:00

And you actually get an extra square foot too. Of

6:03

height or like. You

6:05

know, you could do whatever you want with that. You

6:07

could go ceiling high. You can place that

6:09

wherever you want. You could put a coat rack in

6:12

it. Just be careful. All

6:16

right. Let's take you now into my mind. This

6:20

is the kind of stuff that I tumble down

6:22

the rabbit hole on on a late Saturday night

6:24

just by myself reading on the

6:26

Internet. Let's talk about info hazards.

6:29

Now, the term information hazard or info

6:31

hazard was introduced back in 2011. So

6:34

it's a relatively new term. And it

6:36

was introduced by philosopher Nick Bostrom in

6:38

his paper entitled Information Hazards: A Typology

6:40

of Potential Harms from Knowledge. So you

6:42

can find a link to his paper

6:44

in the description wherever you listen to

6:46

this podcast. It is where essentially all

6:48

this information comes from because it is

6:50

his concept, except for maybe the examples.

6:52

Those come from the broader Internet. But

6:54

yeah, if you want to get into

6:56

the very nitty gritty, all the weeds,

6:58

all the sub genres of info

7:00

hazards. Again, that paper is in

7:03

the description. Now, Bostrom himself defines

7:05

info hazards as, quote, a risk

7:07

that arises from the dissemination or

7:09

potential dissemination of true information that

7:11

may cause harm or enable some

7:14

agent to cause harm. End

7:16

quote. So in other terms, it's

7:18

the concept that a piece of

7:20

information can pose a risk simply

7:22

by being known or shared. Mm.

7:25

Yeah. I mean, look, information

7:28

can be a powerful thing. And you

7:31

watch, I mean, just governments in general, right?

7:33

Real life or experience it through movies. People

7:36

will kill for knowledge. I mean, that's what they

7:39

say, right? Knowledge is power. Knowledge is power. And

7:41

sometimes power, it hurts a little bit. So this

7:43

harm that he refers to doesn't have to be

7:45

physical harm. It takes a

7:48

lot of different forms, physical, of course.

7:50

But psychological and even existential harm. We'll

7:52

get to an example that kind of

7:54

explores that. But I want to

7:56

give some broad strokes examples before we continue

7:58

on, because it does start to get a

8:00

little in the weeds. A lot of

8:02

details, a lot of kind of confusion.

8:05

So misinformation, false or misleading information, I

8:07

should say, can cause harm if believed

8:09

or acted upon, right? Oh

8:11

yeah. People have told great

8:13

lies too. I mean,

8:15

cults, look at cults. Sure.

8:18

They're just feeding people BS all the time and

8:21

look how powerful that could be. Right, misinformation

8:23

can cause harm in all sorts of different

8:25

ways. You have sensitive data. If disclosed, right

8:27

through like an NDA breach or what have

8:29

you, or a security breach,

8:31

privacy violation, whatever. Oh

8:33

God, all those Social Security numbers that

8:36

got leaked recently. No! Well,

8:38

everybody go ahead and freeze your credit score unless you're

8:40

not American, then go ahead and chuckle a little bit.

8:43

But then allow us to all freeze

8:45

our credits. Yeah, just- That's a serious

8:47

thing. Yeah, on a serious note,

8:49

just make sure you check your credit score often. Nowadays

8:52

you're pretty protected. It's more just

8:54

a big nuisance. Yeah, 2.7 billion.

8:58

Golly. Wait, serious? Hold

9:00

on a second. There aren't 2.7 billion Americans.

9:02

Well, they also count, UK has social security

9:04

too. Oh, but that's an old country. So

9:06

they're like one, two, and three. Maybe, maybe.

9:08

So they're guessable. That would suck if you

9:10

had one and it got leaked and you

9:12

got to get another one. Oh

9:14

man, I had a really good number that

9:17

I parked. Yeah, I had a great number.

9:19

All right, so there's also like dangerous knowledge,

9:21

which is information that could enable harmful actions

9:23

such as details on creating weapons or exploiting

9:25

vulnerabilities. And then psychological harm can come in

9:28

the form of information that causes distress, anxiety,

9:30

panic. You know, in this podcast,

9:32

we list sensitive topics whenever those

9:34

are applicable because of this very

9:36

concept, right? Yeah. But

9:38

in addition to those types of harm, info

9:40

hazard harm can also be more innocuous. We

9:43

were talking before we recorded this episode about

9:45

the recent release of Alien Romulus. And

9:47

the reason I mentioned that is movie spoilers

9:49

can come in the form of an info

9:51

hazard. If you know the ending of a

9:53

movie, it may negatively impact your kind of

9:55

experience of the film, but you're

9:58

not necessarily harmed, so to speak. Right.

10:00

But you know, it does. It is a

10:02

negative thing, right? It still sucks. Like, I

10:05

mean, during pre-release of Avengers Endgame, I went.

10:07

Oh my God. I went internet silent for

10:09

about a week and a half. Yep.

10:13

We did have a call where that was spoiled.

10:15

I hate that for that. They read a message

10:17

from Reddit and it's like, this person dies and

10:19

that person and this person dies. And they're like,

10:21

Oh, great. Well, since it's been about 62 years,

10:23

what was that spoiler? You

10:25

want to know? Yeah. I

10:28

want to know what that spoiler was. Oh,

10:30

the spoiler literally was like Gamora dies. Red

10:32

Skull's in it. Oh, it's the death list.

10:34

Yeah. Death list. Oh man.

10:37

So that was Infinity War then, right? Yeah. Part

10:40

one. Goodness. You know what I

10:42

was saying, Endgame? Was it? Oh, oh, I see. The

10:44

big ones. Yeah. The

10:46

big ones. Gamora. Tony Stark. I

10:48

was like, damn, that's unfortunate. While we're on the topic, because this

10:51

is a deep well of conversation may be best suited for movie

10:53

club. But some do say,

10:55

and I can feel some task force members kind

10:57

of raising their hand. Some argue

10:59

the opposite, right? Some studies have shown that

11:01

spoilers can actually enhance the ending of a

11:04

movie. But of course that means somebody who

11:06

wanted the spoiler, like they went out of

11:08

their way to go find fan theories of

11:11

that nature. And even then

11:13

it's still arguable at best. It's not a

11:15

foolproof concept, but it is interesting to me

11:17

that sometimes, or some studies have shown if

11:19

you know where it's going, it actually enhances

11:22

your appreciation. But to each

11:24

their own, everybody's got a different take, right? Yeah,

11:26

I get to see that. I think that kind

11:28

of spins into like movie trailers as well, right?

11:30

The trailers show everything these days. But then again,

11:32

it's like, okay, for instance, Romulus. I'm

11:34

like, okay, that's spoiled a lot. But here's the thing.

11:36

I'm a big, huge Xenomorph fan, so I'm going to

11:39

watch it anyways. They're not trying to hook me. They're

11:41

trying to hook the people that are on the fence

11:43

or don't care about it and go, hey, look at

11:45

all these cool scenes. Absolutely. I

11:48

mean, there are movies that I choose to go in 100% uninformed by.

11:51

I know it exists. I think Rogue One, Star

11:53

Wars Rogue One was an example of that.

11:56

Longlegs. I tried my best to know

11:58

as little as possible in recent memory.

12:00

But then there's other movies like Odds

12:02

and Ends, Marvel movies, maybe not the

12:05

big ones like the Avengers movies. I

12:07

will indulge in fan theories, like New

12:09

Rockstars, nonstop, just because I'm like

12:11

I love that world but I'm not

12:13

super informed on a lot of different

12:15

nuanced stories. Yeah, great channel. And to

12:18

me, I've still been fine both ways.

12:20

But again, to each their own. I

12:22

just love movies. But spoilers are interestingly,

12:24

yeah, a softer form of info hazard.

12:26

So Bostrom goes on to categorize the

12:28

many different subtypes of info

12:31

hazards by, in one category,

12:33

effect, but also in a

12:35

way that he calls quote information

12:37

transfer mode, end quote. And

12:39

so I have a graph for you

12:41

from his paper. It's kind of

12:43

the typology of information hazards as he

12:46

calls it. It's a visual breakdown

12:48

of how he does it. But I

12:50

just wanted you to see just how

12:52

many subcategories there are. Of course, this

12:54

will post on social and on YouTube

12:56

for the task force members looking to

12:58

access it. It's also in his paper

13:00

in the description. Data hazard, idea hazard,

13:02

attention hazard, effects

13:04

type, subtype, enemy

13:06

information, asymmetry hazard. Oh,

13:09

yeah, this kind of like breaks off

13:11

in branches. Yeah. So the

13:13

top half, those six that you just read,

13:16

those ones are basically how information

13:18

is transferred. And the by

13:21

effect is kind of in the name,

13:23

it's broken down into ways

13:25

that it affects things and what it

13:28

affects. Let me break it down actually

13:30

with more detail. So the effects category,

13:32

they're pretty straightforward. You have things like

13:35

adversarial risks, things like breaking an NDA

13:37

and releasing intellectual property to a

13:39

competitor. It could be something even higher, could

13:41

be government-to-government adversary things that

13:44

are just of that ilk, right? Yeah,

13:46

you have risks to society information that

13:48

could change social structures or the market

13:51

or basically just how society is built.

13:54

You also have risks importantly, right? It's

13:56

a very topical one, from information technology.

13:58

Concerns around artificial intelligence

14:00

are very top of mind right now,

14:03

and it categorizes that as its own kind of

14:05

thing. And those are just

14:07

a few of those effect-driven categories. Artificial

14:13

intelligence is such a hard topic, but I do

14:15

feel like it has its place in the world

14:18

where it can be very beneficial for health,

14:20

for learning. It's just, how

14:22

do you heavily regulate that? You know what I

14:24

mean? And do you? Oh yeah.

14:27

Oh man, it's just, it's

14:29

rough. I do feel like it will

14:31

be a heavy part of our future,

14:33

and that at some point, most of

14:35

the negative ideology around it will flip,

14:37

because I feel like once it finds

14:39

its groove in everyday

14:41

casual assistance for a human

14:44

being, then that's where the

14:46

masses go, actually. Right. Hold

14:48

on, I like my little AI thing that helps me

14:50

do everything. Absolutely. And the next generation that's

14:52

born with it, they'll never know a

14:55

world without it. That's true. And so they'll know

14:57

to accept it. And so there's

14:59

a kind of fine line to find as a

15:01

society. I think we all are still looking for

15:03

it, and everyone has different opinions. But

15:05

one thing that really aggravates me

15:08

about AI is that it's used

15:10

to supplant creators, artists, musicians, anybody

15:12

that creates. It's used to

15:14

supplant those people by studying their work.

15:16

It studies their work, emulates it, and

15:18

then tries to replace them with their

15:20

stolen work. That is so weird and

15:23

so unethical. It's used in a weird,

15:25

nefarious way right now. And it's just

15:27

dumping so much crap on the internet.

15:29

People, I've seen so many videos on like,

15:31

this is how I get rich quick.

15:33

False, by the way, just pure false.

15:35

Not gonna happen. Where they just use

15:37

it to dump articles out that are

15:39

just based on other articles with no

15:41

fact checking. And so the internet, my

15:43

fear on one polar end is that

15:45

we're just gonna be littered with trash

15:47

all over the internet, and then AI

15:49

is gonna consume that trash, and it's gonna

15:52

spiral off into just this realm

15:54

of garbage. Yeah, I think it's

15:56

like a descent, right? Like the

15:58

internet came about. And it was

16:01

so easy to get information. And now

16:03

the information, people are just kind of regurgitating from

16:05

one another. And we use, you know, you Google

16:07

and now AI search is at the top. And a

16:10

lot of times it's not even like, right. Oh

16:12

my God, it's wrong so often. Like you've seen

16:14

the guy that said, yeah, if your pepperoni is

16:16

falling off your pizza, just use some Elmer's glue.

16:19

What? Where

16:21

is this coming from? Oh, right. It's

16:23

scrubbing Reddit where people joke and it

16:25

doesn't understand humor. I mean, listen,

16:27

that's the more cynical side. On the

16:29

other end of the spectrum, there is the positive side. I think you're right. This

16:32

is definitely a tool that could increase efficiency,

16:34

increase quality of life and

16:36

help humans improve. Right. There are

16:39

like ways that AI can rapidly

16:41

analyze viruses and genomes to find

16:43

cures for things. And that's

16:45

still kind of in the works. There are positives

16:47

that it can help with. And if you want

16:50

it as an assistant on your phone, that is

16:52

also kind of cool. But it's

16:54

just so interesting that it has. It's

16:57

a double edged sword, literally. Yeah, I think

16:59

the issue right now is that it's in

17:01

the hands of greedy corporations. And

17:04

so they're just like, how can we monetize the

17:06

hell out of this? How can we squeeze larger

17:09

profit margins? And it's at the expense of the

17:11

people you've listed. Yeah, absolutely. Let me tell you,

17:13

I do voice acting work in addition to Red

17:15

Web. And that AI is

17:17

such a huge concern. Every

17:20

single gig that

17:22

I get, that has to be

17:24

part of the negotiation where you have to say, OK, are

17:26

you willing to what's your stance on this? And can we

17:28

fix this into the contract? It is exhausting.

17:32

Yeah. So that's just like I

17:34

don't even do it full time, but just dealing

17:36

with it firsthand. It is exhausting. Hey, Christian, can

17:38

I get you to say my name is Christian

17:40

Young? Why?

17:43

Can I ask why real quick? Just give me a clear airwaves. Can I

17:45

go ahead and get that? Can you just tell me why first? First

17:48

of all, oh, that's such a funny joke.

17:54

Christian, why do you keep laughing? That's so

17:56

weird. Can you just tell me why first?

18:01

Jesus Christ. Anyway, um,

18:04

I'm not going to clone that voice or anything. That's

18:07

so good. No, Christian, you

18:09

have a great voice. I'm really stoked to hear that

18:11

you're doing some voice acting work. But also, like, yeah,

18:14

I totally see that happening. It's

18:16

like people cloning voices and

18:19

then making their voice

18:21

available beyond their actual reach,

18:23

right? And anyway, this is like a whole

18:25

kind of nuanced topic within Infohazards, but,

18:28

and I know it's a hot-button topic for

18:30

a lot of people. I'm not

18:32

trying to get anyone enraged, but this is a

18:34

real conversation I think we all need to figure

18:36

out together. Because we're going to have to find

18:38

that path through those woods, you know, to figure

18:41

out what is ethical, what's moral, what's right, what's

18:43

necessary. Because it is a tool, it

18:45

ain't going anywhere. No, it's here to stay. Yeah,

18:48

and it does feel like a dot-com bubble

18:50

era style thing where everyone's looking for an

18:53

AI and eventually a few of them will

18:55

pan out. And then we'll stop being so

18:57

focused on it, maybe. I don't know. I'm

19:00

not super well informed on it, admittedly. I think

19:02

it's an interesting topic. I'm not

19:04

like a philosopher on the topic or, you know,

19:06

a teacher or I just don't muse on it

19:08

a lot. But we're going to fall somewhere on

19:11

that cynical to positive

19:13

spectrum. And I hope

19:16

that it's somewhere that we can all be comfortable with. And

19:20

it becomes a tool, right? Because

19:22

you can't really, you can't really replace it. Anyway,

19:24

this is going to be a very interesting topic to look back

19:26

on when we get to the big example. I'm very excited to

19:28

get to that because this lays that groundwork nicely. But

19:32

coming back to kind of the categorization, we talked about

19:34

some of the effect categories. But

19:37

the one I'm more interested in is the one

19:39

that he refers to as an information transfer mode.

19:42

Again, this typology is more focused on the way in

19:44

which information is communicated or shared, which

19:48

can affect then the potential harm it

19:50

might cause. And then some examples of

19:52

those include data hazards. Data

19:54

and information can be harmful, even if it's true

19:56

and factual. For example, I talked

19:59

about genomes, right? the genome for

20:01

smallpox, Ebola, and other deadly diseases

20:03

are actually public domain, at least

20:05

from like the 90s is when

20:07

smallpox was made publicly

20:09

available. And someone knowing that information

20:12

could use it for harmful purposes,

20:14

just using data hazard as an

20:16

example. Idea hazard is another transfer

20:18

mode. Pause me if this doesn't

20:20

make sense. This might be, I've read the article

20:22

too much and then I've pored over this and

20:25

now I've got a blind spot. So this type

20:27

of info hazard involves ideas that can lead to

20:29

harmful consequences if believed or acted

20:31

upon. For example, here,

20:33

we just discussed the idea, right?

20:35

That one could theoretically, with these

20:38

public domain diseases, maybe they could

20:40

theoretically synthesize a virus and make

20:42

it even worse with different capabilities.

20:45

A data hazard very simply, when pointed out,

20:47

can become an idea hazard, which then

20:50

leads to our next example called an

20:52

attention hazard. Stop me if you're

20:55

getting a little lost. Okay,

20:57

so basically these are just

20:59

ways that ideas transfer the different ways,

21:02

right? Data is just, it's publicly available

21:04

and that's the transfer kind of version.

21:06

Idea is basically somebody has an idea

21:09

and says it and now you know

21:11

about it. And so that information has

21:13

now passed to you in that sense.

21:15

And the attention hazard is basically

21:18

saying something very similar. This states that

21:20

bringing attention to certain data or

21:22

an idea then creates risk.

21:25

So Bostrom has an example about a

21:27

government body, right? Let's say they are

21:30

focusing their concerns on biological warfare. Well,

21:32

this may then signal to potential enemies

21:35

that this is a concern that can be

21:37

exploited, therefore seeding the idea. Essentially their

21:39

attention creates awareness and then that manifests

21:41

the thing that they were trying to

21:43

avoid in the first place. Okay, so

21:45

transfer modes are basically just how ideas

21:47

transfer and then the risks involved there.

21:49

It's a little bit more hairy. There

21:51

are three different types of hazards. I

21:53

don't wanna get too in the weeds

21:56

on the definitions because I think the

21:58

examples that we're about to explore will

22:01

help kind of materialize what

22:03

I'm talking about. This

22:06

episode of Red Web is sponsored by you

22:08

guys, the task force. This is one of

22:11

those opportunities where I just wanted to say

22:13

thank you all so much for supporting the

22:15

show specifically at patreon.com/redweb, kind of like

22:17

PBS. This is an ad brought to you

22:19

by you and there are a lot of

22:21

different ways you can support Red Web. We

22:23

talk about it all the time in the

22:25

proper show, but here in this little special

22:27

ad break, I felt like I wanted to

22:30

expand upon it a little bit. What is

22:32

Patreon? What do you get with Patreon as

22:34

well as of course supporting with merch, the

22:36

store.redwebpod.com domain. You

22:38

can also share the show for free. You

22:40

can review us wherever you listen to us.

22:43

All of that means the world to us.

22:45

It supports us in so many different ways,

22:47

but Patreon is our bread and

22:49

butter. It is where we built our community

22:52

even tighter. We have exclusive discord access where

22:54

our community theorizes together and talks together. They

22:56

suggest movies to watch as well as cases

22:58

to cover in the proper show. We also

23:00

have movie club and I really want to

23:02

dive into what movie club is because we've

23:04

talked about it a few times and I

23:06

have a clip for you that I want to

23:09

play because I want you to hear exactly what

23:11

the vibe is. But we talk about horror movies.

23:13

We break down the plot, give our thoughts and

23:15

have a lot of laughs along the way. So

23:17

it's basically like Red Web but exploring horror movies

23:19

and it is a great time. Otherwise you also

23:21

get this podcast entirely ad free so you won't

23:23

hear these little ad breaks in the middle. And

23:25

we also have all sorts of other bonus stuff

23:27

like looks behind the scenes in September on the

23:29

25th. We're going to use the Randonautica app,

23:32

which is an app we've covered in the past

23:34

and we're so excited to finally go film that

23:36

and it's going to go up exclusively

23:38

for Patreon members. We've uploaded sneak peeks

23:40

behind the scenes, things that haven't released

23:43

before. We had a Mothman commercial that

23:45

came out, which was absolutely hilarious. Then

23:47

of course we have Discord events every

23:49

single month. With that said, here is

23:51

a clip from a recent movie club.

24:00

Why's Houston sound like Steve-o? What?

24:02

Wait, what? He keeps talking? I don't remember

24:04

him talking the rest of the movie. When

24:06

he's all scared and yelling, it's

24:08

like Steve-o. And I couldn't take

24:11

it seriously because it sounded like... Oh

24:13

my god. I was watching Jackass.

24:15

I don't remember him talking. I'm

24:18

Houston. Welcome to Jackass. We're

24:21

going to die. So

24:27

to start, let's go with smaller examples, a

24:29

little bit more innocuous info hazards. The

24:31

Game. By the way, you just

24:34

lost it. If you haven't

24:36

heard of The Game, it's a theoretical mind

24:38

game that everyone on earth is playing. However,

24:41

once you become aware of it, you lose.

24:43

Wait a minute. Pause the record

24:45

scratch. How

24:48

have you not heard of The Game? The Game?

24:51

You've been on the internet since at least 1765. At

24:55

least? What? No, man. What kind of game is

24:58

this? Like a Triple H? The game? Like

25:00

I don't... I don't know, man.

25:06

What kind of game are we talking about here, bro? You

25:09

and me. That was a good

25:11

cut. I love that one. Okay.

25:15

Okay. So my definition written

25:17

here in my own script is

25:19

based on maybe you knowing this. Okay.

25:23

So let me break it down more. So The

25:25

Game is just a hypothetical game that was invented

25:27

when I was a kid. And the whole idea

25:29

was that if you think of The Game, capital

25:32

T, capital G, you lose,

25:34

period, end of day. And when

25:36

you lose The Game, you have to announce it.

25:38

You have to say out loud, oh man, I

25:41

just lost The Game. Then everyone around you moans

25:43

because that means they thought of The Game, and

25:45

now they lost, right? And so

25:47

it becomes this ripple effect. And then when

25:49

the internet took off, you'd see people randomly

25:51

tweet things like, oh, I just

25:54

lost The Game. And then everyone responds. So

25:56

it's like, it's this fake little viral game

25:58

that then made it onto the internet. and it's just

26:01

kind of stayed alive. And it slowly

26:03

petered out over the many years until

26:05

someone like me sticks it in a

26:07

podcast and now, you know, thousands of

26:09

people just lost the game. I've never

26:11

heard of that. That's

26:14

amazing. Also kind of bummed that I

26:16

lost now. I know, I know. I

26:18

just ruined your, however long you've held it, let's

26:20

see, 457? Yeah,

26:23

yeah. Yeah, and your 457 year streak. Yeah,

26:26

it's a long time. Yeah, Christian, it's 200 episodes in,

26:29

and it's time to tell you, Alfredo is

26:31

a cryptid. Yeah, I had a suspicion for

26:33

a long while. It finally makes sense. That's

26:36

what they make me. They make me. They make

26:38

me. But

26:42

yeah, that is an example of

26:44

an infohazard. It doesn't cause any

26:46

harm. It may cause some mild

26:48

distress, as you can feel, but

26:51

that is a simple example. And from there,

26:53

I kind of wanted to spiral outward into

26:56

online viral challenges. This is something that

26:58

Christian actually brought up in the outline

27:01

as a great example. You've heard of

27:04

viral challenges like the cinnamon challenge or

27:06

the Tide Pod challenge. Oh, God. Huge

27:08

preface, don't do these. I'm

27:10

telling you right now, do not. Wasn't that like a,

27:12

I don't know if this is real or not.

27:14

Wasn't it TikTok, they got crazy and they started

27:17

mixing alcohol in, like, a toilet bowl and stuff.

27:19

Oh man, are they making prison wine? Yeah. If

27:22

my definition of that is understood. I think so, yeah.

27:24

I don't know, dude. There

27:26

was all sorts of a yucky fermenting activity when

27:28

I was in high school that was hitting the

27:30

news. And then my parents would be like, you're

27:32

not doing this. I was like, what are you

27:35

asking me this for? First of all, now I

27:37

know about it. Second

27:39

of all, God, no. Not

27:41

sucking fermented air from

27:43

just things. Okay, I'll just say

27:45

things. But no, so

27:47

these challenges in particular, including many other

27:50

viral challenges, they're dangerous toxic

27:52

activities where folks film themselves either

27:54

eating too much cinnamon then

27:57

they either like cough or what have you. It's going to

27:59

burn their tongue or something, or eating something they shouldn't, like

28:01

a Tide Pod. Just... okay. It then

28:03

creates a viral feedback loop causing others

28:05

to jump in on the trend despite

28:07

the dangers it poses to their health.

28:09

And to ground it in what

28:12

I've talked about, this is a

28:14

great example of an idea hazard

28:16

and actually what's called an evocation

28:18

hazard. It's another transfer mode. Basically

28:21

sharing or promoting a challenge evokes

28:23

a behavior that is extremely risky.

28:26

And the idea is the challenge itself,

28:28

right? Another type, another

28:30

small info hazard example is something

28:32

that I feel like our

28:34

parents' generation is kind of into. The

28:38

example being emails that

28:40

say something like, forward this message to

28:42

10 people or you'll have bad luck

28:44

for 10 years, right? Yeah. And

28:47

you're like, okay, well, you know, I don't want bad

28:49

luck. I guess I'll forward it to Papa and Mama

28:51

and, Yeah. You know, my

28:53

eight siblings, I don't have. Bro, that

28:56

just dates people so hard. Yeah, yeah.

28:59

And now this is a good example of

29:01

an attention hazard because you are raising an

29:03

awareness of this email and then it seeds

29:05

the idea of like, well, if you don't

29:07

do this, it plays off your, you know,

29:09

superstitions, your beliefs. Right. If you

29:11

don't do it, you're gonna have bad luck. So you better

29:13

spread this potentially misinformation or

29:15

sometimes it's just no information.

29:17

Now, one that's more interesting

29:20

and starts to take more of

29:22

an actual real world

29:24

potential harm kind of feel

29:26

to it is the placebo effect, right?

29:29

Speaking of evocation hazards, the placebo effect

29:31

could also be considered an info hazard.

29:34

Placebos are medications with no therapeutic effects

29:36

intended, they're used in clinical trials to

29:38

act as a control. So they

29:40

can assess like a new medication against

29:42

a sugar pill. Now, if someone

29:45

is taking a placebo, but they believe

29:47

it is the real medication, they

29:49

may actually start to experience the

29:51

anticipated effects. Though if they become

29:53

aware that it's a placebo, those

29:56

effects may then go away. This

29:58

is very much a hyper-simplification

30:00

of psychosomatic symptoms,

30:03

because their mind believes it so

30:05

strongly that even if it's a

30:07

sugar pill, it might manifest real

30:09

world results. And usually

30:12

it's not harmful, but it could be,

30:14

right? We're entering that realm. I'm very

30:17

aware of this because Bugs

30:19

Bunny taught me this in Space Jam.

30:21

Oh, please, let's break it down. Yeah,

30:23

when he was with Michael Jordan and

30:25

the other Looney Tunes and they were

30:27

against the Monstars. Oh my God, that

30:29

was scary. They were getting absolutely demolished.

30:31

And Bugs Bunny brought out a Gatorade

30:33

bottle and wrote secret stuff and

30:36

was like, he was like, Michael, you're holding

30:38

out on us with the secret stuff. And Michael Jordan

30:40

was like, what are you talking about? Bugs Bunny was

30:42

like, wink, wink. And then everyone, all the Toons drank

30:44

it and everyone was like, yeah, we

30:47

got the secret juice, or some such

30:49

words. We got this. And then it was

30:51

just water. Oh, what? He

30:53

didn't have any secret stuff? It wasn't no secret stuff. It's

30:55

just water. But that's placebo. So

30:57

at a very young age, I was like, hey, Gatorade.

31:00

Oh, okay. No, for sure

31:02

though, that is exactly like a really good

31:04

example of the placebo effect, right? But

31:07

basically, this is an

31:09

information hazard because the information

31:11

about the medicine evokes a real world

31:13

effect by way of a person's belief

31:15

and expectation of the treatment, regardless of

31:18

its presence or not in the medicine

31:20

that they're taking, right? So

31:22

those are like some smaller examples that

31:25

you can kind of relate to to

31:27

take this more nebulous concept and ground

31:29

it into something that you're familiar with.

31:31

And then this leads me to my,

31:33

again, my Holy Roman Empire, my Trojan

31:35

horse for this entire topic, this whole

31:37

podcast. It's what's called

31:39

Roko's Basilisk. It's a thought

31:42

experiment. Have you ever heard of Roko's Basilisk?

31:44

No. No, I bet not. Christian,

31:46

have you heard of it other than me telling

31:48

you about it about a dozen times? You were

31:50

the only person I've ever heard mention this before.

31:52

Great, now I sound like a mad person. You're

31:55

gonna have to do the mad man. Everybody out there is

31:57

going, shaking their head and going, no, I've never heard of this.

32:00

I just like had a lucid dream and then

32:02

it came to me and it's actually not

32:04

real. Well, okay. So this is

32:06

a really, really interesting concept. And I feel like I

32:08

could do a 20 minute, you

32:10

know, speech about it and hop up on a pedestal and

32:13

just go for it, you know, in a town square with

32:15

nobody there. But I'll break it

32:17

down a lot more simply. It's a

32:19

thought experiment. Like I said, I've heard

32:21

it referred to as many different things.

32:23

Roko's Basilisk, Rocco's Basilisk, etc. But it's

32:26

perhaps one of the most infamous info

32:28

hazards. So on July 23rd, 2010,

32:31

so it actually precedes the year

32:33

that info hazards were coined. There

32:36

was a user named Roko, R-O-K-O,

32:39

and they shared their thought experiment

32:41

on the LessWrong forums. This

32:43

is a forum for discussing philosophical

32:45

ideas, to put it briefly. Now

32:48

in this experiment, if it were

32:51

real, you hearing about it

32:53

would actually put you at risk

32:55

for existential or perhaps physical

32:57

harm. Theoretically, we've discussed

33:00

how a piece of information can pose a

33:02

risk simply by being known. And that is

33:04

a core part of this idea. So

33:06

are you ready to proceed? I got to get some

33:09

verbal nods. Yes. Okay.

33:12

Christian? A

33:15

laugh is a yes. Did

33:17

you do that so quickly? There's

33:19

someone in the task force, she's like, off, off, off, off,

33:21

turn off, turn off. I'll

33:23

give you a beat, I'll give you a pause, hit play on

33:26

your favorite Hoobastank CD, to

33:29

kind of numb out the,

33:31

the, the drown out the basilisk.

33:33

Hoobastank. Do

33:37

they have more than one CD? I don't

33:39

know. Okay. So your favorite CD might be

33:41

their CD, but anyway. Okay.

33:43

So here's the concept. In

33:46

this scenario, let's say there's an

33:48

extremely advanced artificial intelligence that exists

33:51

somewhere in the far future. This

33:54

otherwise benevolent and omniscient super

33:56

intelligence has the ability to

33:58

simulate reality. It can

34:01

calculate everything that has happened, anything

34:03

that can happen, very Westworld before

34:05

it was cancelled. The argument suggests

34:07

that if you were aware, like

34:09

somehow knew, that this AI could

34:11

exist at some point in the

34:13

future, and you didn't do anything

34:15

to assist in its creation or

34:17

arrival, the AI would then punish

34:19

you and perhaps torture you for

34:21

all eternity in its virtual reality.

34:23

In other words, this potential

34:25

AI is attempting to blackmail you

34:27

into assisting with its development and

34:29

creation. So let me ask you, now

34:32

that you're aware of this potential theoretical

34:34

AI, would you feel compelled

34:36

to help create it or

34:39

at least advocate for its

34:41

invention? Probably not.

34:45

Probably is doing a lot of work there. It's doing

34:48

a lot of work. I don't

34:50

think I would. It's a very

34:52

interesting concept that makes a lot

34:54

of logical assumptions about morals and

34:56

ethics and artificial intelligence, but it's

34:58

such a fascinating concept. It's very

35:01

difficult either way. Where

35:03

do you stand on this? So

35:06

that's a great question. I've

35:09

decided to not answer. Oh, my.

35:13

This reminds me of the story, the short story, I

35:15

Have No Mouth, and I Must Scream. And

35:18

it is an exploration of this

35:20

kind of topic. It is an

35:22

advanced AI, but it is

35:25

manifest, right? It has a physical presence

35:27

in the world. It has taken over and

35:30

it has like five humans as its

35:32

play thing, and it's able to keep

35:34

them alive for forever and kind of

35:36

torture them in various ways. It's very dark.

35:39

It's very, I mean, it's a

35:41

classic, but it gets very dark and very interesting.

35:44

And so that's what this concept is kind of leaning

35:46

on, is that if you sat

35:48

here and tweeted, I don't want

35:51

a self-aware AI to ever exist,

35:53

it then might grab that. Regardless of what

35:55

you do, it's like, well, you were on

35:58

the other side, right? It's the Terminator

36:00

of it all. On Judgment Day, this

36:03

AI arrives and it carves out half

36:05

of humanity, or whatever, that was

36:07

like not helping it, not

36:09

either trying to invent it, not advocating for

36:11

it, what have you. And then it says,

36:13

all right, you're my playthings for forever. I

36:16

think there are again, some logical

36:18

fallacies in play that give me comfort to say,

36:21

no, I don't think I would. But

36:23

there's also reasonable fear

36:25

anybody could have as a human

36:27

being for wanting a self-aware AI.

36:29

And especially if it's gonna take

36:31

this form, if it's gonna be

36:33

this kind of reductive and vindictive,

36:36

then like you're leaving a lot of people no choice,

36:38

but to be like, I don't

36:41

know, I don't want that. It's

36:44

a very difficult question to answer. But

36:48

I wanna give it in a different way. It's

36:50

not the same concept, but

36:52

it's very similar. Some folks

36:54

have kind of considered it to be the

36:57

modern version of what's called Pascal's wager. It's

36:59

another thought experiment that could be considered an

37:01

info hazard. So this one gets

37:03

a little religious. I'm not pressing

37:05

religion on anybody. I'm literally exploring the

37:08

same concept that Blaise Pascal has put

37:10

forth many, many centuries ago.

37:12

He was a philosopher, mathematician, many

37:15

other things of the 17th century. And he argued

37:17

that human beings should live their lives as

37:20

if God exists, even if it cannot

37:22

be proven or logically made sound.

37:25

Distilling this down very briefly,

37:27

it says that if God exists, a

37:29

believer gains the infinite reward of heaven.

37:32

If God doesn't exist, a believer kind of loses

37:34

very little. On the other

37:37

hand, if they don't believe and

37:39

he does exist, they gain an

37:41

infinite punishment, hell. And if he

37:43

doesn't exist, they gain little, other

37:45

than maybe being right. One could argue that you

37:48

do lose a lot though, right? Like if you

37:50

lose time. I

37:52

would 100% agree. Very little time. Yeah,

37:55

right? You're spending how much time praying,

37:57

spending how much time at mass, not

37:59

to get super into religion or anything like

38:01

that. But I feel like there is a loss

38:03

of time. I think that time is valuable for

38:05

humans. Sure. It seems like we have forever until

38:07

we don't. Right. 100%. And I'm not like,

38:10

yeah, we're not trying to impress upon anyone's

38:12

religious ideology out there. No, not at all.
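
(For reference, the wager the hosts are describing reduces to an expected-value comparison. The sketch below is only an illustration of that reasoning; the probability p and the finite cost and gain terms are assumed placeholders, not figures from the episode.)

\[
\mathbb{E}[\text{believe}] = p\cdot(+\infty) + (1-p)\cdot(-c) = +\infty,
\qquad
\mathbb{E}[\text{do not believe}] = p\cdot(-\infty) + (1-p)\cdot(+g) = -\infty,
\]

where p > 0 is any assumed probability that God exists, c is the finite cost of belief (the time spent praying or at mass mentioned above), and g is the finite gain of simply being right. For any nonzero p the infinite terms dominate, which is why the wager concludes belief is the safer bet; Roko's Basilisk borrows the same payoff structure with a future AI in the role of the judge.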

38:14

I think it's very, very interesting. It's super

38:17

interesting. I love this kind of thinking. It's

38:19

why I took a lot of philosophy courses

38:21

on the side back in college because having

38:23

logical arguments, you feel, especially as a novice

38:26

student, you're like, Oh wow, I'm trapped by

38:28

this argument. But then you realize much

38:30

smarter people than I have a new layer of

38:33

the argument. And so when you explore those arguments,

38:35

you go, Oh no, now I'm trapped on the

38:37

opposite side of this argument. And so it's so

38:39

cool to explore these things

38:41

because there really isn't an answer. That's

38:43

what's so cool about philosophy. Cause I

38:46

think like baseline just surface level. I

38:48

completely agree with, you know, what you're

38:50

saying where I'm just like, well, yeah,

38:52

right. What do you have to lose?

38:54

And if you go further into it,

38:56

you can find reasons of, you know,

38:59

things you lose like time. Ultimately,

39:01

yeah. Surface level. Hey, you know, you can even

39:03

argue, you live a better life or something like

39:05

that because these beliefs could, you know, some of

39:07

the beliefs was like be kind to your neighbors

39:09

and all that kind of stuff. Obviously you can

39:12

get into the weeds with Bibles and it's very

39:14

bad stuff, but, but you know what I mean?

39:16

So like you could say, Oh, I live a

39:18

better quality life. It's an argument that someone could

39:20

have. And then it's like, if, if not, if

39:22

like God exists, then I'm good. If not, then

39:25

I live a certain way. Right. And

39:27

that's kind of where I landed when I was a young buck, just

39:29

hanging out, looking at the ceiling, you

39:32

know, when I was just a young baby deer and

39:34

what's a buck? I don't know. And I'm just

39:37

thinking about life. You know, you think about these

39:39

kinds of things. I feel like everyone does to

39:41

some degree and everybody makes their own decisions. And

39:43

I'm like, listen, I'm just going to live the

39:45

most joyous, positive,

39:48

but like kind of life

39:50

I can live, right? I don't want

39:52

to negatively impact others, but

39:54

I also, you know, want to chase

39:56

interests and learn about things and explore what

39:59

it is to be human, because

40:01

regardless of what's on the other side of

40:03

this corporeal veil, when we all pass on,

40:05

if there's nothing or there's something, what

40:08

we have here, in my humble opinion, what

40:10

we have in this physical, chemical

40:12

reality is so compelling. Now

40:16

we go back and now I'm just starting to think about

40:18

matrix theory and like, is this a simulation, you know? But

40:21

like, it's so cool and so rare. I

40:24

love the fact that humans have even

40:26

invented philosophy to explore reality in cerebral

40:28

ways. I don't know. Again,

40:31

there are no answers. And again, I reiterate, I

40:33

don't want to impress upon anybody or make anyone

40:35

feel uncomfortable, but these topics

40:37

are so fascinating to me just

40:39

because it's the same, it's the

40:42

same cog that motivates my love

40:44

of mysteries. The unknown is just

40:46

so cool. And you

40:49

know, what could be, what is, let's

40:52

analyze facts, let's analyze feelings, whatever. But

40:55

this is like the pinnacle of,

40:57

to me, of what's unknown. Roko's

41:01

Basilisk, this Pascal's wager,

41:03

the idea of infohazards and

41:05

how they can manifest in reality, but also in the

41:07

mind, it's all so very

41:09

interesting. I feel like a philosophy

41:11

class would, I would love it,

41:14

but I would also hate it all at the same time. I

41:17

would love the discussion. And then

41:19

on top of that, I would discuss

41:21

it with others, but then also discuss

41:23

it in my own head and reverse

41:25

engineer it. And then I get just

41:27

really upset. Yeah. Well, that's when

41:29

you start to intellectualize feelings. And

41:33

that's a rabbit hole in and of itself and

41:35

can spiral off into negativity. But

41:37

yeah, philosophy, really cool stuff.

41:40

But yeah, even in philosophy 101,

41:42

people were crying, some revelations, some

41:44

questions, some brevity, some curtness

41:46

by the professor. It is what it is

41:48

sometimes, but that's what it

41:50

can do. It can get you to the

41:52

core of some of those human emotions for

41:54

better or worse. Now, before

41:57

we wrap up, actually, before we move

41:59

on, Christian. I'm very curious to

42:01

hear maybe your take on Roko's Basilisk. And

42:03

I feel like it's unfair for me to

42:05

give a non-answer. So I would

42:07

err on the side of not

42:09

really, but I'm also playing it by ear.

42:12

I wouldn't just kind of blindly lead

42:14

to the arrival of a super

42:16

smart, but vindictive AI. I

42:19

feel like I would assess along the way because that's how

42:21

I approach anything. But how do you feel? I'm kind of

42:23

in the same boat. I feel like it is a bit,

42:27

what's the word, bit

42:29

of a doomsday mindset to jump to

42:32

that worst case scenario of, yeah, this

42:34

vindictive, manipulative, evil AI. I think it's

42:36

something I would, exactly like

42:38

you're saying, kind of play it

42:40

by ear and see and kind

42:42

of watch from the sidelines and

42:44

observe. But it's interesting. It's so

42:47

interesting. Right. Would I actively

42:49

hinder? I don't think that I would

42:51

just blindly do that either. It's the same conversation

42:53

we go back to this. This

42:55

is kind of talking about what's now known as AGI, whereas

42:58

'AI' has been co-opted as a term for, like,

43:01

large language models and art generation. It

43:03

isn't actually what we've known AI to

43:05

be. So there's a spectrum.

43:07

There are elements to it that I find

43:10

disagreeable. Then there are elements to it that

43:12

I find, oh, that could definitely help humanity

43:15

as a whole. And so I

43:17

don't think there is a binary yes or no

43:19

answer here, but it's

43:21

almost the same as my own little baby brain philosophy

43:24

on religion when I was a kid. It was

43:26

like, well, if I live the best life

43:28

I can and I'm as good as I can be, whether

43:31

I believe or not, I would hope

43:33

that whatever power that is can recognize

43:35

that, you know? Because there are many

43:37

different gods. It's hard to say

43:39

like which direction to go. So let me

43:42

just live an ethical life and not wrong

43:44

others, but also live a curious life.

43:46

And then at the end of it, if there's something

43:48

to judge me there, I hope I pass,

43:50

you know? I think that's a great way to

43:52

look at it. That's how I would want to live. Yeah. Don't

43:55

be an ass. Just don't,

43:57

okay? Just don't. We've

44:00

covered many assholes on this podcast. Yeah,

44:02

yeah, we have. Oh God,

44:04

I just hope it's not something like

44:06

The Good Place or, you know, as

44:09

a show about like, Oh man. Heaven,

44:11

hell, whatnot. And essentially it's just like,

44:13

everyone ends up going to the bad

44:15

place just because like the ripple effect

44:17

of like, oh, I got

44:19

organic coffee, but then the organic coffee,

44:21

it's just like the ripple effect that

44:23

somewhere down that chain, it was something

44:25

bad. Right. Or like I helped

44:28

save this life, but that rippled into something else,

44:30

bad happening to someone else. And so ultimately

44:32

like everyone has a rap sheet. Mm-hmm.

44:35

Yep. Everything is like, it's like the seven

44:37

degrees of Kevin Bacon, right? So everything is

44:39

a couple of degrees away from evil. And

44:41

so you're trapped in a web that you

44:44

just cannot make it to the good place.

44:46

Oh, such an incredible show. It's a great

44:48

show. Such thoughtful, speak of

44:50

philosophy, such thoughtful exploration of some of

44:52

those concepts while also being tastefully humorous.

44:55

Oh my God, it's so good. I

44:58

would love to say more, but I feel like, you

45:00

know, I don't want to spoil something that good. But

45:03

before we wrap up, I would be

45:05

remiss if we didn't mention the most

45:07

commonly known kinds of info hazards are

45:09

actually fictional. This isn't something that can

45:11

broadly plague the mind or something to

45:14

be feared. In other words, there

45:16

are like, for example, SCP writers who

45:19

use info hazards as a narrative device.

45:21

We did a whole case files episode

45:24

in the before times where we

45:26

explored a handful of SCP anomalies.

45:29

The one we're about to discuss was actually

45:31

one explored there. But SCPs are

45:33

a fictional task force. It's

45:35

called the SCP Foundation. It's

45:37

a fictional task force that

45:39

quote, secure, contain and protect

45:41

the world from existential threats,

45:44

strange phenomenon, monsters, all

45:46

sorts of really cool stuff. So if you like

45:48

mysteries, you like the unsolved, this is just another

45:50

way to explore that in a more fictionalized

45:52

setting. There is an article known as

45:54

SCP 2718 by Michael Atreus, who

45:58

throughout the article, repeatedly tells you

46:00

to close the article and delete your

46:02

cache because supposedly knowing about this particular

46:05

entity, whatever is contained on the website,

46:08

just by knowing about it can put

46:10

you at risk, right? Very much like

46:12

an info hazard. So finding the page

46:14

means that it is not perfectly contained.

46:16

Therefore stop further reading, get out of

46:18

here, save yourself because once

46:21

you get down the rabbit hole, you cannot

46:23

turn back. It says things like belief is

46:25

key and that plays into some of the

46:27

themes we've explored today. It's a lot

46:29

of fun flavor text. I don't want to spoil

46:31

it itself. If this is something that you're interested

46:33

in, Task Force, go read it,

46:36

explore the story. In fact, this is what Jillian

46:38

says in the outline. She says, quote, I won't

46:40

spoil it for you, but it gives the viewer

46:42

the knowledge they shouldn't have and would probably cause

46:44

an existential crisis. Oh damn. Hell of a review.

46:47

I want that on the back of the DVD

46:49

case. Yeah, I don't need that in my life

46:51

though. Yeah, but just remember it's fictional. Yeah. But

46:54

it explores those themes. But with

46:56

that, this has been episode two

46:59

hundo of Red Web. Oh

47:01

wow. So grateful. You know, it's

47:03

already over. Okay. I'm so

47:05

great. What a hard pivot,

47:07

right? I'm just so grateful that

47:09

we continue to get to explore these

47:12

mysteries, these ideas that

47:16

the group of us get to hang out and keep doing this

47:18

together. It's such a fun part of my week. And

47:21

to close out, you know, let me just say,

47:23

share the podcast with at least 10 friends or

47:25

family members. You know, if you don't. Yeah, if

47:27

you don't, just bad luck. Yeah, bad luck. Bad

47:29

luck. You might get, I don't know, sleep

47:31

paralysis and see the Hat Man. Also,

47:34

don't step on a crack. It might break your

47:36

mama's back. Or

47:39

do, because it might put it right back. It

47:41

could, it could crack in the right place. In,

47:43

out, in, out. Your

47:46

mom's just somewhere going, ah, oh,

47:48

ah, oh. Yeah. Oh

47:52

man. Well, again, if you want to support us, there are a

47:54

bunch of ways to do it. And I want to go ahead

47:56

and say some of the free ways to do it because I've

47:58

talked about Patreon. I've talked about the... One of

48:00

the best ways to support us, algorithmically

48:03

speaking. Speaking of AI, speaking of Roko's

48:05

Basilisk, if you want to review us,

48:07

give us five stars, whether it be

48:10

on Spotify, Apple, or any of the

48:12

other podcast playlists out there. It

48:14

is a huge way to boost us

48:16

in that algorithm, allow us to find

48:19

new audience members and continue growing the

48:21

show, which keeps us thriving, keeps us

48:23

present, keeps us here. It keeps

48:25

those lights on, you know? Yeah, it does. Thank

48:27

you so much. Like I said, we're independent now,

48:29

so everything that you guys do

48:31

counts, whether it's like I

48:33

said, sharing or, you know, getting something

48:36

like merch or supporting us

48:38

on Patreon is all completely

48:40

in your guys' hands. It's

48:44

in those big hands. Very

48:46

big, soft, leather capable hands.

48:49

Leather? Wait, soft and leather. I'm

48:51

impressed. Sorry, I meant lather, but

48:53

leather. Sign

48:56

up now, you know, sign up now, get

48:59

yourself that extra square footage. I mean, it

49:01

is the one. I've got leather hands. And

49:06

on that bombshell, thank

49:09

you, Task Force, for 200 amazing weeks

49:11

of mysteries, exploring the unknown. We

49:13

couldn't, seriously, I mean it sincerely,

49:16

we couldn't do it without you. And

49:18

with that said, Fredo, I'll see you right back

49:20

here next week for another mystery. Seriously,

49:26

go out there and catch something tangible for

49:28

us. That'd be fantastic. Just 200 weeks, it's

49:30

like at some point, Task Force, we've got

49:33

to produce a product here. We've

49:35

got to produce something that we could show the

49:37

world. Okay, bye.
