#2311 - Jeremie & Edouard Harris

#2311 - Jeremie & Edouard Harris

Released Friday, 25th April 2025
Good episode? Give it some love!
#2311 - Jeremie & Edouard Harris

#2311 - Jeremie & Edouard Harris

#2311 - Jeremie & Edouard Harris

#2311 - Jeremie & Edouard Harris

Friday, 25th April 2025
Good episode? Give it some love!
Rate Episode

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.

Use Ctrl + F to search

0:01

Joe Rogan podcast, check it out. The

0:04

Joe Rogan experience. Train

0:06

by day, Joe Rogan podcast by

0:08

night, all day. All

0:11

right, so if there's

0:13

a doomsday clock for AI

0:15

and where we're fucked, what

0:18

time is it? If midnight is,

0:20

we're fucked. going to write into it.

0:22

You're not even going to ask us what we had for

0:24

breakfast. No, no, no, no, no, no. Jesus.

0:27

Let's get freaked out. Well, okay,

0:29

so there's one without speaking to like

0:31

the fucking tombs day to mention right up

0:34

here There's a question about like where

0:36

are we at in terms of AI capabilities

0:38

right now? And what do those timelines

0:40

look like right? There's a bunch of disagreement

0:42

One of the most concrete pieces of

0:44

evidence that we have recently came out of

0:46

a lab on an AI kind of

0:48

evaluation lab called meter and they

0:51

put together this this test

0:53

basically it's like you ask the

0:55

question Pick a task that takes a

0:57

certain amount of time like an hour

0:59

it takes like a human a certain amount

1:01

of time and then see like how

1:03

likely the best AI system is to solve

1:05

for that task then try a longer

1:07

task see like a 10 hour task can

1:09

it do that one and so right

1:12

now what they're finding is when it comes

1:14

to AI research itself so basically like

1:16

automate the work of an AI researcher you're

1:19

hitting 50 % success rates for these AI systems

1:21

for tasks that take an hour long. And

1:23

that is doubling every, right now, it's like

1:25

every four months. So like you had tasks

1:27

that you could do, you know, a person

1:29

doesn't five minutes like, you know, uh,

1:32

ordering an Uber Eats or like something that

1:34

takes like 15 minutes, like maybe a book

1:36

on flight or something like that. And it's

1:38

a question of like, how much can these

1:40

AI agents do, right? Like from five minutes

1:42

to 50 minutes to 30 minutes. And in

1:44

some of these spaces, like. research

1:47

software engineering and it's getting further

1:49

and further and further and doubling

1:51

it looks like every four months.

1:53

So like if you if you

1:55

extrapolate that you basically get to

1:57

task that take a month to

1:59

complete like by 2027

2:01

tasks that take an AI researcher

2:04

a month to complete these systems will be

2:06

completing with like a 50 % success rate.

2:08

So you'll be able to have an AI

2:10

on your show and ask it what the

2:12

doomsday clock is like by then. I probably

2:14

won't laugh. It'll

2:17

have a terrible sense of humor about it. Just make

2:19

sure you ask you what it had for breakfast before you

2:21

started. What

2:23

about quantum computing getting

2:25

involved in AI? So,

2:27

yeah, honestly, I don't

2:29

think it's... If you think that you're

2:31

going to hit... -level AI capabilities across

2:34

the board, say 2027, 2028, which when

2:36

you talk to some of these people

2:38

in the labs themselves, that's the timelines

2:40

they're looking at. They're not confident, they're

2:42

not sure, but that seems pretty plausible. If

2:45

that happens, really there's no way we're

2:47

going to have... computing that's going to be

2:49

giving enough of a bump to these

2:51

techniques. You're going to have standard classical computing.

2:54

One way to think about this is that

2:56

the data centers that are being built

2:58

today are being thought of literally as the

3:00

data centers that are going to house

3:02

like the artificial brain that powers superintelligence,

3:04

human level AI when it's built in

3:07

like 2027, something like that. So

3:09

how, how knowledgeable are

3:12

you when it comes to quantum computing? So

3:14

a little bit. I mean, I

3:16

like I did my my grad

3:18

studies in like the foundations of

3:20

quantum mechanics. Oh, great Yeah, well,

3:22

it was a mistake, but I

3:24

appreciate it for the person. Why

3:26

was it a mistake? You know,

3:29

so academia is like kind of

3:31

funny thing It's really bad culture.

3:33

It teaches you some really terrible

3:35

habits. So Basically my entire life

3:37

after academia and Ed's too. Yes

3:39

unlearning these like terrible habits of It's

3:42

all zero sum, basically. It's not like when

3:44

you're working in startups, it's not like when you're

3:46

working in tech where you build something and

3:48

somebody else builds something that's complimentary and you can

3:50

team up and just like make something amazing. It's

3:53

always wars over who gets

3:55

credit, who gets their name on the

3:57

paper. Did you cite this fucking stupid

3:59

paper from two years ago because the author

4:01

has an ego and you gotta be honest.

4:03

I was literally at one point The

4:07

I'm not gonna get any details here,

4:09

but like there was a collaboration

4:11

that we ran with like this anyway

4:13

fairly well -known guy and My supervisor

4:15

had me like write the emails

4:17

that he would send from his account

4:19

so that he was seen as

4:21

like the guy who is like interacting

4:23

with this big wig that kind

4:25

of thing is like Doesn't

4:27

tend to happen in startups at least not

4:30

in the same way because everybody wanted

4:32

credit for the like he wanted Just seem

4:34

like he was the genius who was

4:36

facilitating this sounding smart on email Right, but

4:38

that's everywhere. Yeah reason it

4:40

happens is that these guys who

4:42

are like professors or even

4:44

not even professors just like your

4:46

postdoctoral guy who's like supervising

4:48

you They can write your letters

4:50

of reference and control your

4:52

career after that. That's by the

4:54

ball. do whatever. And so

4:56

what you're doing, it's like a

4:58

movie. It's gross. gross

5:01

movie. It's gross boss in a movie. The

5:03

ones that take credit for your work. And

5:05

it's real. It's rampant. And the way to

5:07

escape it is to basically just be like,

5:09

fuck this, I'm going to go do my

5:11

own thing. So you're dropped out of grad

5:14

school to. Come start a company and and

5:16

I mean honestly even that it took it

5:18

took me it took both of us like

5:20

a few years to like unfuck our brains

5:22

and unlearn the bad habits We learned it

5:24

was really only a few years later that

5:26

we started like really really getting

5:28

a good like getting a good

5:30

flow going you're also you're kind of

5:33

disconnected from Like base reality when you're

5:35

in the the ivory tower, right? If

5:37

you're there's something beautiful about and this is

5:39

why we spent all our time in startups But

5:41

there's something really beautiful about like it's just

5:43

a bunch of assholes

5:45

us and like no money and nothing and

5:47

a world of like potential customers and

5:49

it's like you actually it's not that

5:52

different from like stand -up comedy in a

5:54

way like your product is can I

5:56

get the laugh right like something like

5:58

that and it's Unforgiving if you fuck up

6:00

it's like silence in the room It's the

6:02

same thing with startups like the space of

6:04

products that actually works is so narrow and

6:06

you've got to obsess over what people actually

6:08

want and it's so easy to fool yourself

6:10

into thinking that you've got something that's really

6:12

good because your friends and family are like

6:15

oh No, sweetie. You're doing a great job

6:17

like what a wonderful life. I would totally

6:19

use it I totally see that all that

6:21

stuff right and that's I love that because

6:23

it forces you to change. Mm -hmm. Yeah The

6:26

whole indoctrination thing in academia is so

6:28

bizarre because there's these like hierarchies of

6:30

powerful people and just the idea that

6:32

you have to work for someone someday

6:34

and they have to take credit by

6:36

being the person on the email. That

6:38

will haunt me for days. I swear

6:40

to God, I'll be thinking about that

6:42

for days now. I fucking can't stand

6:44

people like that. It drives me nuts.

6:47

One big consequence is it's really hard

6:49

to tell who The people are who

6:51

are creating value in that space too.

6:53

Of course sure because this is just

6:55

like television one of the things about

6:57

television shows is So I'll give you

6:59

an example a very good friend of

7:01

mine who's very famous comedian had this

7:03

show and His agent said we're gonna

7:05

attach these producers. It'll help get it

7:07

made and He goes well, what are

7:09

they gonna do? He goes they're not

7:12

gonna do anything is just be in

7:14

name. He goes but they're gonna get

7:16

credit He goes yeah, he goes fuck

7:18

that he goes no no listen listen

7:20

This is better for the show. It'll help

7:22

the show give me but they'll know how

7:24

excuse me They'll have a piece of the

7:27

show. He's like yes. Yes, but like it's

7:29

a matter of whether the show gets successful

7:31

or not And this is a good thing

7:33

to do and he's like what are you

7:35

talking about? But it was a conflict of

7:37

interest because this guy was rep the agent

7:39

was representing these other people This is completely

7:41

common. Yeah, so there's these executive producers that

7:43

are on shows that have Zero

7:45

to do with it. It's so many. Yeah,

7:47

so many industries are like this and that's why

7:49

we got into startups It's it's literally like

7:51

you in the world, right? Yeah, it's like in

7:54

a way like stand -up comedy like Jerry said

7:56

we're like podcasting We're like podcasting where your

7:58

enemy isn't actually hate it's indifference like most

8:00

of the stuff you do When especially when you

8:02

get started like why would anyone like give

8:04

a shit about you? They're just not gonna pay

8:06

attention. Yeah, that's not even your enemy You

8:08

know, that's just all potential. That's all that is.

8:11

You know, it's like your enemies within you.

8:13

It's like figure out a way to make

8:15

whatever you're doing good enough that you don't have

8:17

to think about it not being valuable. It's

8:19

it's meditative. Like there's no way for it not

8:21

to be to be in some way a

8:23

reflection of like yourself. You know, you're kind of

8:25

like in this battle with you trying to

8:27

convince yourself that you're great, so the ego

8:29

wants to grow, and then you're constantly trying to

8:32

compress it and compress it. And if there's

8:34

not that outside force, your ego will expand to

8:36

fill whatever volume is given to it. Like

8:38

if you have money, if you have fame, if

8:40

everything's given, and you don't make contact with

8:42

the unforgiving on a regular basis, like, yeah,

8:45

you know, you're gonna end up, you're gonna

8:47

end up doing that to yourself. And you could.

8:49

Yeah, it's possible to avoid, but you have

8:51

to have strategies. Yeah, you have to be intentional

8:53

about it. Yeah, the best strategy is jujitsu. Yeah,

8:56

it's Mark Zuckerberg is a different

8:58

person now. Yeah, you can see

9:01

it. You can see it.

9:03

Yeah, well, it's a really good thing for people

9:05

that have too much power because you just

9:07

get strangled all the time. Yeah. And then you

9:09

just get your arms bent sideways. And after

9:11

a while, you're like, OK. This

9:13

is reality. This is reality. This social hierarchy

9:15

thing that I've created is just nonsense.

9:17

It's just smoke and mirrors. And they know

9:19

it is, which is why they so

9:22

rabidly enforce these hierarchies. The best people seek

9:24

it out. sir and ma 'am and all

9:26

that kind of shit. That's what it

9:28

is. You don't feel like you

9:30

really have respect unless you say that. Ugh

9:32

these poor kids that have to go

9:35

from college where they're talking to these dipshit

9:37

professors out into the world and operating

9:39

under these same rules that they've been like

9:41

Forced in indoctrinated to it's God to

9:43

just make it on your own It's amazing

9:45

what you can get used to though

9:47

and and like the it's only you're mentioning

9:49

the producer thing that is literally also

9:51

a thing that happens in academia So you'll

9:53

have these conversations where it's like all

9:55

right. Well this paper is Fucking garbage or

9:57

something, but we want to get it

10:00

in a paper in a journal and so

10:02

let's see if we can get like

10:04

a famous guy on The list of authors

10:06

so that when it gets reviewed people

10:08

go like oh mr.. So -and -so okay like

10:10

and that literally happens like you know

10:12

the funny thing is like the hissy fits

10:14

over this are like the stakes are

10:16

so Brutally low at least with your producer

10:18

example like someone stands to make a

10:20

lot of money with this It's like you

10:22

get maybe like an assistant professorship out

10:25

of it best that's like 40 grand a

10:27

year and you're it's just like what

10:29

this is it's just produces it is money

10:31

but I don't even think they notice

10:33

the money anymore I think a big part

10:35

because all those guys are really really

10:37

rich already I think you know if you're

10:39

a big time TV producer you're really

10:41

rich I think the big thing is being

10:44

thought of as a genius who's always

10:46

connected to successful projects. That's what

10:48

they really like. That is always going

10:50

to be a thing, right? It wasn't one producer.

10:52

It was like a couple. So there's going to be

10:54

a couple different people that were on this thing

10:56

that had zero to do with it. It was all

10:58

written by a stand -up comedian. His

11:00

friends all helped him. They all put it

11:02

together. And then he was like,

11:04

no, you want to fire his agent over

11:07

it. Oh, yeah, good for him. mean yeah get

11:09

the fuck out of here at a certain

11:11

point for the producers too It's kind of like

11:13

you'll have people approaching you for help on

11:15

projects that look nothing like projects You've actually done

11:17

so I feel like it just it just

11:19

adds noise to your your universe like if you're

11:21

actually trying to build cool shit You know

11:23

I mean like some people just want to be

11:25

busy They just want more things happening and

11:27

they think more is better. More is not better,

11:29

because more is energy that takes away from

11:32

the better, whatever the important shit is. Yeah, the

11:34

focus. You only have so much time until

11:36

AI takes over. Then you'll have all the time

11:38

in the world, because no one will be

11:40

employed and everything will be automated. We'll all be

11:42

on universal basic income. And it. That's a

11:44

show. That's

11:46

a sitcom much of poor people existing

11:49

on $250 a week. Oh, I would watch

11:51

that Yeah, cuz the government just gives

11:53

everybody that that's what you live off of

11:55

like weird shit is cheap like the

11:57

stuff that's like all like well the stuff

11:59

you can get from chatbots and AI

12:01

agents is cheap But like food is super

12:03

expensive or something. Yeah, the organic food

12:05

is gonna be gonna have to kill people

12:08

for it You will eat people it

12:10

will be like a soylent world, right? than

12:15

people, though. That's true. Depends

12:17

on what they're eating, though. It's just like animals, you

12:20

know? You don't know to eat a bear that's been

12:22

eating salmon. They taste like shit. Yeah, I didn't know

12:24

that. I've been eating my bear raw

12:26

in this entire time. So

12:30

back to the quantum thing. So

12:33

quantum computing is infinitely more

12:35

powerful than standard computing. Would

12:38

it make sense then that if quantum

12:41

computing can run a large language model, that

12:43

it would reach a level of intelligence

12:45

that's just preposterous? So yeah, one way to

12:47

think of it is like, there are

12:49

problems that quantum computers can solve way, way,

12:51

way, way better than classical computers. And

12:53

so like, the numbers get observed pretty quickly.

12:56

It's like, problems that a classical computer

12:58

couldn't solve if it had the entire lifetime

13:00

of the universe to solve it, a

13:02

quantum computer right in like 30 seconds boom.

13:05

But the flip side, there are problems

13:07

that quantum computers just can't help us

13:09

accelerate. The kinds that one

13:12

classic problem that quantum computers help with

13:14

is this thing called the traveling salesman

13:16

paradox or problem where you have a

13:18

bunch of different locations that a salesman

13:20

needs to hit and what's the best

13:22

path. to hit them most efficiently. That's

13:24

kind of a classic problem if you're

13:26

going around different places and have to

13:28

make stops. There are

13:30

a lot of different problems that have the right

13:32

shape for that. A lot of quantum machine learning,

13:34

which is a field, is focused on

13:36

how do we take standard AI

13:39

problems, like AI workloads that we want

13:41

to run, and massage them into

13:43

a shape that gives us a quantum

13:45

advantage. And it's a nascent field.

13:47

There's a lot going on there. I

13:50

would expect, my personal expectations

13:52

is that we just build the

13:54

human level AI and very

13:56

quickly after that superintelligence without ever

13:58

having a factor in quantum.

14:00

But it could define that for

14:02

people. What's the difference between

14:04

human level AI and superintelligence? Yeah.

14:07

So yeah, human level AI

14:09

is like AI. You

14:11

can imagine like it's AI that is as

14:13

smart as you are in, let's say, all

14:15

the things you could do on a computer.

14:17

So you can order food on a computer,

14:19

but you can also write software on a

14:21

computer. You can also email people and pay

14:23

them to do shit on a computer. You

14:25

can also trade stocks on a computer. So

14:27

it's as smart as a smart person for

14:30

that. Super intelligence, people

14:32

have various definitions, and there are

14:34

all kinds of honestly, hissy fits

14:36

about different definitions. Generally

14:38

speaking, it's something that's like very significantly

14:40

smarter than the smartest human and

14:42

so you think about it It's kind

14:44

of like it's it's as smart

14:46

as much smarter than you as you

14:48

might be smarter than a toddler and

14:51

You think about that and you

14:53

think about like the you know How

14:55

how do you how would it

14:57

how would a toddler control you? It's

14:59

kind of hard like you can

15:01

you can outthink a toddler pretty much

15:03

like any day the week. And

15:05

so super intelligence gets us at these

15:07

levels where you can potentially do things

15:09

that are completely different. And basically, you

15:11

know, new scientific theories. And last time

15:14

we talked about, you know, new, new

15:16

stable forms of matter that were being

15:18

discovered by these kind of narrow systems.

15:20

But now you're talking about a system

15:22

that is like, has that intuition combined

15:24

with the ability to talk to you

15:26

as a human and to just have

15:29

really good like rapport with you, but

15:31

can also do math. It can also

15:33

write code. It can also like solve

15:35

quantum mechanics and has that all kind

15:37

of wrapped up in the same package.

15:39

And so one of the things too

15:41

that by definition, if you build a

15:44

human level AI, one of the things

15:46

it must be able to do as

15:48

well as humans. Is AI research

15:50

itself? Yeah. Or at least the parts

15:52

of AI research that you can do in

15:54

just like software, like you know, by

15:56

coding or whatever these systems are designed to

15:58

do. And so, so

16:00

one implication of that is you

16:03

now have automated AI researchers. And

16:05

if you have automated AI researchers, that

16:08

means you have AI systems that can

16:10

automate the development of the next. Level

16:12

of their own capabilities and now you're

16:14

getting into that whole you know singularity

16:16

thing where it's an exponential that just

16:18

builds on itself and builds on itself

16:20

Which is kind of why? You

16:22

know a lot of people argue

16:24

that like if you build human level

16:27

AI Super intelligence can't be that

16:29

far away. You've basically unlocked everything and

16:31

we kind of have gotten very

16:33

close right like it's It's past the

16:35

Fermi, not the Fermi Paradox. What

16:38

is it? Oh, yeah, yeah, the damn it.

16:40

We were just talking about him the other

16:42

day. Yeah, the test, the... Oh, the Turing

16:44

test? Turing test, thank you. We were just

16:46

talking about how horrible what happened to him

16:48

was, you know, the chemically castrated

16:50

him because he was gay. Yeah. Horrific

16:53

winds up killing himself the the guy

16:55

who figures out What's the test to

16:57

figure out whether or not ai's become

16:59

sentient and by the way does this

17:01

in like what 1950s? Oh, yeah, yeah

17:03

Alan Turing is like the guy was

17:05

a beast, right? How did he think

17:07

that through he invented even know he

17:09

invented basically the concept that underlies all

17:11

computers like he was like An absolute

17:13

beast. He was a code breaker that

17:15

broke the Nazi codes, right? And he

17:17

also wasn't even the first person to

17:19

come up with this idea of machines,

17:22

building machines, and there being implications like

17:24

human disempowerment. So if you go back

17:26

to, I think it was like the

17:28

late 1800s and I don't remember the

17:30

guy's name, but I sort of like

17:32

came up with this. He was observing

17:34

the industrial revolution and mechanization of labor

17:36

and kind of starting to see. More

17:38

and more like if you zoom out it, it's

17:40

almost like you have a humans or an ant colony

17:42

and the artifacts that that colony is producing that

17:44

are really interesting are these machines. You know,

17:46

you kind of like look at the

17:48

surface of the earth as like gradually increasingly

17:50

mechanized thing and it's not super clear. If

17:53

you zoom out enough, like what is

17:55

actually running the show here? Like you've got

17:57

human servicing machines. Humans looking to improve

17:59

the capability of these machines at this frantic

18:01

pace. Like they're not even in control

18:03

of what they're doing. Economic forces are pushing.

18:05

We the servant of the master at

18:08

a certain point. Like, yeah. And the whole

18:10

thing is like, especially with a competition

18:12

that's going on between the labs, but just

18:14

kind of in general, you're

18:16

at a point where like. Do the

18:18

CEOs of the labs, they're these

18:20

big figureheads, they go on interviews, they

18:22

talk about what they're doing and

18:24

stuff, do they really have control over

18:26

any part of the system? The

18:29

economy's in this almost convulsive

18:31

fit. You can almost feel

18:33

like it's hurling out AGI.

18:37

As one data

18:39

point here, all

18:42

these labs, OpenAI, Microsoft,

18:44

Google, Every year they're

18:46

spending like an aircraft carrier worth of

18:48

capital individually each of them just

18:51

to build bigger data centers to house

18:53

more AI chips to train bigger

18:55

more powerful models And that's like so

18:57

so we're actually getting the point

18:59

where if you look at on a

19:01

power consumption basis like We're getting

19:03

to you know two three four five

19:06

percent of US power production If

19:08

you project out into the late 2020s

19:11

Kind of 2026 27 you're not enough

19:13

for double digit though not for double

19:15

digit for single digit Yeah, you're talking

19:17

like that's a few gigawatts a one

19:19

gigawatt. So it's not enough for single

19:21

digit It's in the like for for

19:23

2027 you're looking at like you know

19:26

in the point five -ish percent, but it's

19:28

like, it's a big fucking frat, like

19:30

you're talking about gigawatts and gigawatts. One

19:32

gigawatt is a million homes. So you're

19:34

seeing like one data center in 2027

19:36

is easily going to break a gig.

19:38

There's going to be multiple like that.

19:41

And so it's like a thousand, sorry,

19:43

a million home city metropolis, really, that

19:45

is just dedicated to training like one

19:47

fucking model. That's what this is. Again,

19:49

if you zoom out at planet Earth,

19:51

you can interpret it as like this.

19:53

Like all these humans frantically running around

19:56

like ants just like building this like

19:58

artificial brain mind assembling itself now face

20:00

of the planet Marshall McLuhan in like

20:02

1963 or something like that said human

20:04

beings of the sex organs of the

20:06

machine world. Oh, God, that hits different

20:09

today. Yeah, it does. It does. I've

20:11

always said that if we were aliens,

20:13

or if aliens came here and studied

20:15

us, they'd be like, what is the

20:17

dominant species on the planet doing? Well,

20:19

it's making better things. That's all it

20:21

does. The whole thing,

20:23

it's dedicated to making better things. And

20:25

all of its instincts, including materialism, including

20:27

status, keeping up with the Joneses, all

20:29

that stuff is tied to newer, better

20:31

stuff. You don't want old shit. You

20:34

want new stuff. You don't want an

20:36

iPhone 12. You know, what are you

20:38

doing, you loser? You

20:40

need newer better stuff and they

20:42

convince people especially in the realm

20:45

of like consumer electronics Most people

20:47

are buying things. They absolutely don't

20:49

need the vast majority of the

20:51

spending on new phones is completely

20:53

unnecessary Yeah, but I just need

20:55

that extra like that extra like

20:58

fourth camera though My life isn't

21:00

I run one of my phones

21:02

is an iPhone 11 and I'm

21:04

purposely not switching it just to

21:06

see if I notice it. I

21:08

fucking never know I've

21:10

watched YouTube on it, I text people,

21:12

it's all the same. I go online, it

21:15

works, it's all the same. Probably

21:17

the biggest thing there is gonna be the

21:19

security side, which - No, they update the

21:21

security, it's all software. But I

21:23

mean, if your phone gets old enough, I mean,

21:25

at a certain point they stop updating it?

21:27

Yeah, like iPhone one, China's watching

21:29

all your dick pics. Oh dude, I

21:31

mean, Salt Typhoon, they're watching all our

21:33

dick pics. They're definitely seeing mine. Which

21:35

Salt Typhoon? So salt. Oh, sorry. Yeah.

21:38

Yeah, so it's this big Chinese cyber

21:40

attack actually starts to get us to

21:42

to kind of the the broader great

21:44

name by the way salt typhoon. You

21:46

know, yeah guys They have the coolest

21:48

names for their cyber operations to destroy

21:50

salt typhoon. slick, you know, it's kind

21:52

of like when when people go out and do like

21:54

a an awful thing like a school shooting or

21:56

something and they're like, oh, let's talk about, you know,

21:59

if you give it a cool name, like now

22:01

the Chinese are definitely going to do it again. Um,

22:03

anyway, because they have a cool name. Yeah. That's

22:05

definitely a salt typhoon. Pretty dope.

22:07

Yeah. But it's this thing where basically,

22:09

so, so there was in the, um,

22:11

the 3G kind of protocol that was

22:13

set up years ago. Law

22:15

enforcement agencies included backdoors intentionally to be

22:17

able to access comms theoretically if they

22:20

got a warrant and so on. And

22:22

well, you introduce a backdoor, you

22:24

have adversaries like China who are

22:26

wicked good at cyber. They're

22:29

going to find and exploit those backdoors. And

22:31

now basically they're sitting there and they had been

22:33

for some people think like maybe a year

22:35

or two before it was really discovered. And just

22:37

a couple of months ago, they kind of

22:39

go like, oh, cool. We got

22:41

fucking like China all up in our shit

22:43

and this is like this is like

22:45

flip a switch for them and like you

22:47

turn off the power water to a

22:49

state or like you fucking Yeah, well, sorry.

22:51

This is sorry salt typhoon. That was

22:53

about just Sitting on the the like basically

22:55

telecoms. Well, that's the telecom. Yeah, it's

22:57

not the but but yeah I mean that

22:59

that's another there's another there's another thing

23:01

where they're doing that too Yeah, and so

23:03

this is kind of where what we've

23:05

been looking into over the last year is

23:07

this question of how What

23:09

is if you're gonna make like a

23:11

Manhattan project for superintelligence, right? Which is

23:14

that's I mean that's what we're texting

23:16

about like way back and then Actually

23:18

funnily enough we shifted right our date

23:20

for security reasons, but if you're gonna

23:22

do a Manhattan project for for superintelligence

23:24

What does that have to look like?

23:26

What is the security game have to

23:28

look like to? Actually make it

23:30

so that China is not all up

23:33

in your shit like today It is

23:35

extremely clear that at the world's top

23:37

AI labs like all that shit is

23:39

being stolen. There is not a single

23:41

lab right now that isn't being spied

23:43

on successfully based on everything we've seen

23:45

by the Chinese. Can I ask you

23:47

this? Are we spying on the Chinese

23:49

as well? That's a big problem. Do

23:52

you want it? We're we're I mean,

23:54

we're definitely we're definitely doing some stuff

23:56

But in terms of the the relative

23:58

balance between the two we're not where

24:00

we need to be they spy on

24:02

us better than we spy on them

24:04

Yeah, cuz like we cuz like our

24:06

build all our shit. They build all

24:08

that was the Huawei situation, right? Yeah,

24:10

and it's also the oh my god.

24:12

It's the like if you look at

24:14

the power grid So this is now

24:16

public, but if you look at like

24:19

transformer substations, so these are the essentially

24:21

anyway, a crucial part of the electrical

24:23

grid. And there's really

24:25

like. Basically all of

24:27

them have components that are made in

24:29

China China is known to have planted

24:31

backdoors like Trojans into those substations to

24:33

fuck with our grid The thing is

24:35

when you see a salt typhoon when

24:37

you see like big Chinese cyber attack

24:39

or big Russian cyber attack You're not

24:42

seeing their best that these countries do

24:44

not go and show you like their

24:46

best cards out the gate you you

24:48

show the bare minimum that you can

24:50

without Tipping your hand at the actual

24:52

exquisite capabilities capabilities you have like we've

24:54

the way that one of the people

24:56

who's been walking us through all this

24:58

really well explained it is the philosophy

25:01

is you want to learn without teaching.

25:03

You want to use what is the

25:05

lowest level capability that has the effect

25:07

I'm after, and that's what that is.

25:09

I'll give an example. I'll tell you

25:11

a story that's kind of like, it's

25:13

a public story, and it's from a

25:15

long time ago, but it kind of

25:17

gives a flavor of how far these

25:20

countries will actually go when they're playing

25:22

the game for fucking real. It's

25:25

1945. America and

25:27

the Soviet Union are like best pals

25:29

because they've just defeated the Nazis, right? To

25:33

celebrate that victory in the coming New

25:35

World Order that's going to be great for

25:37

everybody, the children of

25:39

the Soviet Union give us

25:41

a gift to the American

25:43

ambassador in Moscow, this beautifully

25:45

carved wooden seal of the

25:47

United States of America. Beautiful

25:49

thing. Ambassador's thrilled with it.

25:51

He hangs it up. on

25:53

behind his desk in his

25:55

private office. You

25:57

can see where I'm going with this probably,

25:59

but yeah, seven years later, 1952

26:01

finally accursed us like, let's

26:03

take a town and actually

26:06

examine this. So they dig

26:08

into it and they find

26:10

this incredible contraption in it

26:12

called a cavity resonator. And

26:15

this device doesn't have a power source, doesn't

26:17

have a battery, which means when you're sweeping

26:19

the office for bugs, you're not going to

26:21

find it. What it does

26:23

instead is it's designed. That's it. That's

26:25

it. It's the thing they call it.

26:27

They call it the thing and what

26:30

this cavity resonator does is it's basically

26:32

designed to reflect Radio radiation Back to

26:34

a receiver to listen to all the

26:36

noises and conversations and talking in the

26:38

ambassador's private office And so how's it

26:40

doing it without a power source? So

26:42

that's what they do. So the Soviets

26:45

for seven years parked a van Across

26:47

the street from the embassy had a

26:49

giant fucking microwave antenna aimed right at

26:51

the ambassador's office And we're like zapping

26:53

it and looking back at the reflection

26:55

and literally listening to every single thing

26:58

he was saying and the best part

27:00

was When the embassy staff was like

27:02

we're gonna go and like sweep the

27:04

office for bugs periodically They'd be like

27:06

hey, mr. Ambassador. We're about to sweep

27:08

your office for bugs and the ambassador

27:10

was like Cool. Please proceed and go

27:13

and sweep my office for bugs. And

27:15

the KGB dudes in the van were

27:17

like, just turn it off. Sounds like

27:19

they're going to sweep the office for

27:21

bugs. Let's turn off our giant microwave

27:23

antenna. And they kept at it for

27:26

seven years. It was only ever discovered

27:28

because there was this like British radio

27:30

operator who was just, you know, doing

27:32

his thing, changing his dial. And he's

27:34

like, oh, shit. Like, is that the

27:36

ambassador? So the thing is, oh, and

27:38

actually, sorry. One other thing about that,

27:41

if you heard that story and you're

27:43

kind of thinking to yourself. Hang on

27:45

a second. They

27:47

were shooting like microwaves at

27:49

our ambassador 24 -7 for seven

27:51

years. Doesn't that seem like

27:54

it might like fry his genitals or

27:56

something? Yeah, or something like that. You're

27:58

supposed to have a lead vest. And

28:00

the answer is, yes. Yes.

28:04

And this is something that came up

28:06

in our investigation just from every single

28:08

person who was like, who was filling

28:10

us in and who dialed in and

28:12

knows what's up. They're like, look, so

28:14

you got to understand like our adversaries, if

28:17

they need to like give you

28:20

cancer in order to rip your shit

28:22

off of your laptop, they're

28:24

going to give you some cancer. Did he get cancer? I

28:26

don't know specifically about the ambassador,

28:28

but like... That's also so... We're limited

28:30

to what we can say. There's

28:32

actually people that you talk to later

28:35

that... can go in more detail

28:37

here, but older technology like that, kind

28:39

of lower power, you're less likely

28:41

to look at that. Nowadays, we live

28:43

in a different world. The guy

28:45

that invented that microphone invented, his last

28:47

name is Therman. He invented this

28:49

instrument called the Therman, which is a

28:51

fucking really interesting thing. Oh, he's

28:53

just moving his hands? Yeah, your hands

28:55

control it, waving over this. What?

28:57

It's a fucking wild instrument. Have you

28:59

seen this before, Jamie? Yeah, I

29:02

saw Juicy J playing it yesterday on

29:04

Instagram. He's like practicing. It's

29:06

a fucking Wow pretty good at

29:08

it, too. That's yeah There's we

29:10

two two con both hands are

29:12

controlling it by moving in and

29:14

out and space x y's I

29:16

don't honestly don't really know how

29:18

the fuck it works but wow

29:20

that is wild it's also a

29:22

lot harder to do than it

29:24

seems so American the Americans tried

29:26

to replicate this for years and

29:28

years and years without without really

29:30

succeeding and um anyway uh that's

29:32

all kind of part I have

29:34

a friend who used to work

29:36

for intelligence agency and he was

29:39

working in Russia and the fact

29:41

they found that the building was

29:43

bugged with these super sophisticated bugs

29:45

that operate their power came from

29:47

the swaying of the building. Yeah,

29:49

how I've never heard that way of

29:51

just like your walk like I have

29:53

a mechanical watch on so when I

29:55

move my watch powers out powers up

29:57

the spring and it keeps the watch

29:59

That's an automatic that's how an automatic

30:01

mechanical watch works They figured out a

30:03

way to just by the subtle swaying

30:05

of the building in the wind That

30:07

was what was powering this listening device.

30:09

So this is the thing, right? Like

30:11

the I mean what? Well,

30:13

and things that the things that nation states

30:15

up Jamie Google says that's That's what

30:18

was powering this thing. The Great Seal bug,

30:20

which I think is the thing. Really?

30:22

Is another one? No bug. Oh, this is, so you

30:24

can actually see in that video, I think there was

30:26

a YouTube, yeah. Same kind of thing, Jamie? I

30:29

typed in, rush

30:31

this by bug building sway. The

30:34

thing is what pops up. The thing? Which

30:36

is what we were just talking about. Oh, that

30:38

thing, so that's powered the same way by

30:40

the sway of the building? I

30:42

it was powered by radio

30:44

frequency emission. So there may

30:46

be another thing. Related to it not

30:48

not sure but Yeah, maybe maybe Google's

30:50

a little confused. Maybe it's the word

30:53

sway is what's throwing it off But

30:55

it's nobody it's a great catch in

30:57

the only reason we even know that

30:59

too is that the when the u2s

31:01

were flying over Russia They had a

31:03

u2 that got shot down in 1960

31:05

the Russians go like oh Like frigging

31:08

Americans like spying on us. What the

31:10

fuck? I thought we were buddies or

31:12

well 60s obviously think that but and

31:14

then the Americans are like, okay, bitch

31:16

Look at this and they brought out

31:18

the the seal and that's how it

31:20

became public It was basically like the

31:23

response to the Russians saying like, you

31:25

know, wow. Yeah, they're all dirty Everyone's

31:27

spying on everybody. That's a thing and

31:29

I think they probably all have some

31:31

sort of UFO technology We need to

31:33

talk about that we turn off our

31:36

mics and 99 % sure a lot

31:38

of that you need to talk to

31:40

some of the I've been talking to

31:42

people. I've been talking

31:44

to a lot of people. There might

31:46

be some other people that you'd be interested

31:48

in with. I would very much be interested.

31:50

Here's the problem. Some of the people I'm

31:52

talking to, I'm positive, they're

31:54

talking to me to give me

31:56

bullshit. Because

31:59

I'm on your list. No,

32:01

you guys are not the list. But there's

32:03

certain people who are like, okay, maybe most of

32:05

this is true, but some of it's not on

32:07

purpose. There's that. I guarantee you,

32:09

I know I talk to people that don't

32:11

tell me the truth. Yeah, yeah, it's interesting

32:13

problem in like all Intel right because there's

32:15

always the mix of incentives is so fucked

32:17

like the the adversary is trying to add

32:19

noise into the system You've got you got

32:21

pockets of people within the government that have

32:23

different incentives from other pockets And then you

32:26

have top -secret clearance and all sorts of other

32:28

things that are going on Yeah, one guy

32:30

that texted me is like the guy telling

32:32

you that the they aren't real is literally

32:34

involved in these meetings. So stop to

32:36

stop listening down. One of

32:39

the techniques is actually to

32:41

inject so much noise that

32:43

you don't know what and

32:45

you can't follow. This happened

32:47

in the COVID thing, the

32:49

lab leak versus the natural

32:51

wet market thing. I remember

32:53

there was a debate that

32:56

happened about What was the

32:58

origin of COVID? This was

33:00

like a few years ago.

33:03

Uh, it was like an 18 or

33:05

20 hour long YouTube debate, just like

33:07

punishingly long. And it was like, there

33:09

was a hundred thousand dollar bet either

33:11

way on who would win. And it

33:13

was like lab leak versus wet market.

33:15

And at the end of the 18

33:17

hours, the conclusion was like one of

33:19

them one, but the conclusion was like,

33:21

it's basically 50, 50 between them. And

33:23

then I remember like hearing that and

33:25

talking to some folks and being like,

33:27

hang on a second. So. You got

33:29

to believe that whether it came from

33:31

a lab or whether it came from

33:33

a wet market, one of the top

33:36

three priorities of the CCP from a

33:38

propaganda standpoint is like, don't

33:40

get fucking blamed for COVID. And

33:42

that means they're putting like one

33:44

to $10 billion and some of their

33:46

best people on a global propaganda

33:48

effort to cover up evidence and confuse

33:50

and blah, blah, blah. You

33:52

really think that.

33:55

You that you're 50 % like

33:57

you're that confusion isn't coming from

33:59

that incredibly resourced effort like

34:02

they know what they're doing particularly

34:04

when Different biologists and virologists

34:06

who weren't attached to anything. Yeah,

34:08

we're talking about like the

34:10

cleavage points and this different aspects

34:12

of the virus that appeared

34:14

to be genetically manipulated. The fact

34:16

that there was only one

34:19

spillover event, not multiple ones, none

34:21

of it made any sense.

34:23

All of it seemed like some

34:25

sort of a genetically engineered

34:27

virus. It seemed like gain of

34:29

function research. Or

34:31

early emails were talking about that

34:33

but then everybody changed their opinion and

34:36

even the taboo right against talking

34:38

about it through that lens Oh, yeah,

34:40

total propaganda. It's racist. Yeah, which

34:42

is crazy because nobody thought the Spanish

34:44

flu is racist and it didn't

34:46

even really come from Spain Yeah, that's

34:48

true. Yeah, yeah from Kentucky. I

34:51

didn't know that yeah, I think it

34:53

was Kentucky of Virginia Where was

34:55

the Spanish flu or originate from but

34:57

nobody got mad? Well, that's cuz

34:59

that's because the that's cuz the state

35:01

of Kentucky has an incredibly sophisticated

35:03

propaganda machine. And pinned it Spanish. might

35:06

not have been Kentucky. But I

35:08

think it was an agricultural thing. Kansas,

35:11

thank you. Yeah, goddamn Kansas, you know,

35:13

I've always I've always said that I've

35:15

always H1N1 strain had genes of avian

35:17

origin By the way, this is people

35:19

always talk about the Spanish flu if

35:21

it was around today They would just

35:23

everybody would just get antibiotics and we'd

35:25

be fine So this this whole mask

35:28

die off of people it would be

35:30

like the Latinx flu And we would

35:32

be the Latinx flu. The Latinx flu.

35:36

That one didn't stick at all. It didn't

35:38

stick. Latinx. There's a lot of

35:40

people claiming they never used it, and they

35:42

pull up old videos of them. That's a

35:44

dumb one. It's literally a gendered language, you

35:46

fucking idiots. You can't just do

35:48

that. Latinx. It went on for a

35:50

while, though. Sure, everything goes on

35:52

for a while. Think about how

35:54

long they did the bottomies. They

35:57

did lobotomies for 50 fucking years. Probably

35:59

went, hey, maybe we should

36:02

stop doing this. It was like the

36:04

same attitude that got... that got Turing

36:06

chemically castrated, right? I mean, they're like,

36:08

hey, let's just get in there and

36:10

fuck around a bit and see happens.

36:12

this was before they had SSRIs and

36:14

all sorts of other interventions. But what

36:16

was the year lobotomies? I

36:18

believe it stopped in 67. Was it 50 years? I

36:20

think you said 70 last time, and that was correct

36:22

when I pulled it up. 70 years?

36:24

1970. Oh, I think it was 67. I like

36:26

how this has come up so many times that Jamie's

36:29

like, I think last time you said it. It

36:31

comes up all the time because it's one

36:33

of those things. That's insane. You can't just

36:36

trust the medical establishment. Officially, 67. It says

36:38

maybe one more in 72. Oh, God. Oh,

36:40

he died in 72. When did they start

36:42

doing it? I think

36:44

they started in the 30 or the 20s,

36:46

rather. That's pretty

36:48

ballsy. You know, the first guy

36:50

who did a lobotomy. Yeah. It

36:52

says 24. Freeman arrives to watch

36:55

DC Direct Labs. 35. They

36:57

tried it first. A leucotomy. They

36:59

just scramble your fucking brains. But doesn't

37:02

it make you feel better to call

37:04

it a leucotomy, though? Because it sounds

37:06

a lot more professional. No,

37:09

lobotomy, leucotomy. Leucotomy sounds gross.

37:12

Sounds like loogie. Like you're hunting a loogie.

37:15

Lobotomy. Boy. Topeka,

37:17

Kansas. Also Kansas. All

37:19

roads point to Kansas. All roads Kansas.

37:21

This is a problem. That's what's happening.

37:23

Everything's flat. You just lose your fucking

37:26

marbles. You go crazy. That's the main

37:28

issue with that. So they did this

37:30

for so long somebody won a Nobel

37:32

Prize for lobotomy wonderful Imagine give that

37:34

back piece of shit. Yes, seriously You're

37:36

kind of like you know, you don't

37:38

want to display it up in your

37:40

shelf. It's just like it's just a

37:43

good Indicator it's like it should let

37:45

you know that oftentimes science is incorrect

37:47

and that oftentimes, you know Unfortunately

37:49

people have a history of doing things

37:51

and then they have to justify that

37:53

they've done these things. Yeah, and they

37:56

you know But now there's also there's

37:58

so much more tooling too, right? If

38:00

you're a nation state and you want

38:02

to fuck with people and inject narratives

38:04

into the ecosystem, right? Like the the

38:06

whole idea of autonomous AI agents too

38:08

like having these basically like Twitter bots

38:11

or whatever bots like a lot of

38:13

one thing we've been we've been thinking

38:15

about to on the side is like

38:17

the idea of You know audience capture,

38:19

right you have like like Big people

38:21

with high profiles and kind of gradually

38:23

steering them towards a position by creating

38:26

bots that like through comments through right

38:28

votes, you know, it's 100 % It's

38:30

it's absolutely real. Yeah a couple of

38:32

the the big like a couple of

38:34

big accounts on X like that that

38:36

we were in touch with have sort

38:38

of said like yeah especially in the

38:41

last two years, it's actually become hard,

38:43

like I spent with the thoughtful ones,

38:45

right? It's become hard to like stay

38:47

sane, not on X, but like across

38:49

social media on all the platforms. And

38:51

that is around when, you know, it

38:53

became possible to have AIs that can

38:56

speak like people, you know, 90%, 95 %

38:58

of the time. And so you have

39:00

to imagine that, yeah, adversaries are using

39:02

this and doing this and pushing the

39:04

frontier. They'd be

39:06

full if they it. Oh yeah, 100%. You

39:08

have to do it because for sure we're

39:10

doing that. And this is one of the

39:12

things where, you know, like it

39:14

used to be, so OpenAI actually used

39:16

to do this assessment of their AI

39:18

models as part of their, they're kind

39:21

of what they call their preparedness framework

39:23

that would look at the persuasion capabilities

39:25

of their models as one kind of

39:27

threat vector. They pulled that out recently,

39:29

which they've, is kind of like. Why?

39:31

You can argue that it makes sense

39:33

I actually think it's it's somewhat concerning

39:36

because one of the things you might

39:38

worry about is if these systems Sometimes

39:40

they get trained through what's called reinforcement

39:42

learning potentially you could imagine training these

39:44

to be super persuasive by having them

39:46

interact with real people and convince them

39:48

practice at convincing them to do specific

39:50

things If that if you get to

39:52

that point These labs ultimately will

39:54

have the ability to deploy agents

39:56

at scale that can just persuade

39:58

a lot of people to do

40:00

whatever they want, including pushing legislative

40:02

agendas. And even help

40:04

them prep for meetings with the

40:06

Hill, the administration, whatever. And how

40:08

should I convince this person to

40:10

do that? Right. Yeah. Well, they'll

40:13

do that with text messages. Make

40:15

it more business -like. Make

40:17

it friendlier. Make it more... Jovial, but

40:19

this is like the same optimization

40:21

pressure that keeps you on tick -tock

40:23

that same like addiction Imagine that applied

40:25

to like persuading you of some

40:27

like some fact, right? Yeah, that's like

40:29

a on the other hand Maybe

40:31

a few months from now. We're all

40:33

just gonna be very very convinced

40:35

that it was all fine Yeah, maybe

40:37

they'll get so good It'll make

40:39

sense to you. Maybe they'll just be

40:41

right That's

40:44

how that shit works. Yeah,

40:46

it's a confusing time period. We've

40:48

talked about this ad nauseam, but

40:50

it bears repeating. Former

40:52

FBI analyst who investigated Twitter before

40:54

Elon bought it said that he

40:56

thinks it's about 80 % bots. Yeah,

40:58

80%. That's one of the reasons

41:00

why the bot purge, when Elon

41:02

acquired it and started working on

41:04

it, is so important. There needs

41:06

to be... challenge is like detecting

41:09

these things is so hard, right?

41:11

So increasingly, like more and more

41:13

they can hide like basically perfectly.

41:15

Like how do you tell the

41:17

difference between a cutting edge AI

41:19

bought? And a human just from

41:21

the camp because they can we can't generate

41:23

AI images of a family of a backyard

41:25

barbecue Post all these things up and make

41:27

it seem like it's real. Yep, especially now

41:29

AI images are insanely good now. They're really

41:31

nice It's crazy. Yeah, and if you have

41:33

a person you could just you could take

41:35

a photo of a person and manipulate it

41:38

in any way you'd like and then now

41:40

this is your new guy you could do

41:42

it instantaneously and then this guy has a

41:44

bunch of opinions on things and seems to

41:46

Seems always in line with the Democratic Party,

41:48

but whatever Good guy family

41:50

man. Look he's out in this

41:52

barbecue He's not even a fucking human

41:54

being and people are arguing with

41:56

this bot like back and forth and

41:58

you'll see it on any social

42:00

issue You see with Gaza and Palestine

42:03

you see it with abortion and

42:05

you see it with religious freedoms Yeah,

42:07

you just see these bots you

42:09

see these arguments and you know, you

42:11

see like various levels You see

42:13

like the extreme position and then you

42:15

see a more reasonable centrist position,

42:17

but essentially what they're doing is they're

42:20

consistent moving what's okay further and

42:22

further in a certain direction. It's

42:26

it's both directions like it's like right know

42:28

how when you're trying to like you're trying

42:30

to capsize a boat or something you're like

42:32

fucking with your buddy it on the lake

42:34

or something So you you push on one

42:36

side, then you push on the other side

42:38

Yeah, you push and until eventually it capsizes

42:40

this is kind of like our electoral process

42:42

is already naturally like this right we go

42:44

like we have a party in power for

42:46

a while then like they They get you

42:48

know They basically get like you get tired

42:50

of them and these you switch and that's

42:52

kind of the natural way how democracy works

42:54

or in a republic but the way that

42:56

adversaries think about this is they're like perfect

42:58

this swing back and forth all we have

43:00

to do is like when it's on this

43:02

way we push and push and push and

43:04

push until it goes more extreme and then

43:06

there's a reaction to it right and that's

43:08

swinging back and we push and push and

43:10

push on the other side until eventually something

43:12

breaks and that's a risk Yeah,

43:14

it's also like you know the organizations

43:17

that are doing this like we already

43:19

know this is part of Russia's MO

43:21

China's MO because back when it was

43:23

easier to detect We already could see

43:25

them doing this shit. So there is

43:27

this website called this person does not

43:30

exist I still exist surely now, but

43:32

it's kind of Kind of superseded. Yeah,

43:34

but you would like every time you

43:36

refresh this this website you would see

43:38

a different like human face that was

43:40

AI generated and What the Russian internet

43:42

research agency would do? Yeah, exactly what

43:45

what all these these and it's actually

43:47

yeah, I don't think they've really upgraded

43:49

it, but that's fake Wow, they're so

43:51

good. This is old, this is like

43:53

years old. And you could actually detect

43:55

these things pretty reliably. Like you might

43:57

remember the whole thing about AI systems

44:00

were having a hard time generating like

44:02

hands that only had like five fingers.

44:05

That's over though. Yeah, little hints of

44:07

it were though back in the day in

44:09

this person does not exist. And you'd

44:11

have the Russians would take like a face

44:13

from that and then use it as

44:15

the profile picture for like a Twitter

44:17

bot. And so that you could actually detect.

44:19

You'd be like, okay, I've got you

44:22

there, I've got you. can kind of get

44:24

a rough count right now. We can't

44:26

but we definitely know they've been in the

44:28

game for a long time There's no

44:30

way they're not and the thing with the

44:32

thing with like nation -state like propaganda attempts

44:34

right is that like people have this

44:37

this idea that like ah like I've

44:39

caught this like Chinese influence operation or whatever

44:41

like we nail them the reality is

44:43

nation -states operate at like 30 different levels

44:45

and if you're a priority like just influencing

44:47

our information spaces as a priority for

44:49

them They're not just going to

44:51

operate. They're not just going to pick a

44:54

level and do it. They're going to

44:56

do all 30 of them. And so you,

44:58

even if you're like among the best

45:00

in the world, like detecting this shit, you're

45:02

going to like, you're going to catch

45:04

and stop like levels one through 10. And

45:06

then going to be like, you're going

45:08

to be aware of like level 11, 12,

45:10

13, like you're working against it. And

45:13

you're, you know, maybe you're starting to think

45:15

about level 16 and you imagine like,

45:17

you know, about level 18 or whatever. But

45:19

they're like, they're above you, below you,

45:21

all around you. They're, they're incredibly, incredibly resource.

45:23

And this is something that came like

45:25

came. came through very strongly for us. You

45:27

guys have seen the Yuri Besmanoff video

45:30

from 1984 where he's talking about how all

45:32

our educational institutions have been captured by

45:34

the Soviet propaganda. It was

45:36

talking about Marxism, how it's

45:38

been injected into school systems and

45:40

how you have essentially two

45:42

decades before you're completely captured by

45:44

these ideologies and it's going

45:46

to permeate and destroy all of

45:48

your confidence in democracy. And

45:50

he was 100 % and this is

45:52

before these kind of tools before because

45:54

like the vast majority of the exchanges

45:56

of information right now are taking place

45:58

on social media the vast majority of

46:00

debating about things Arguing all taking place

46:03

on social media and if that FBI

46:05

analyst is correct 80 % of its bullshit.

46:07

Yeah, which is really wild Well, and

46:09

you look at like some of the

46:11

the documents that have come out. I

46:13

think it was like The I

46:15

think it was the CIA game plan

46:17

right for regime change or like undermining like

46:19

how do you do it, right? Have

46:21

multiple decision -makers at every level right all

46:23

these things and like what a surprise That's

46:25

exactly what like the US bureaucracy looks

46:27

like today slow everything down make change impossible

46:29

Make it so that everybody gets frustrated

46:31

with it and they give up hope They

46:33

decided to do that to other countries

46:35

like yeah for sure they do that here

46:37

open society, right? I mean that's part

46:39

of the trade -off and that's actually a

46:42

big big part of the challenge too. So

46:44

when we're working on this, one of

46:46

the things, Ed was talking about these 30

46:48

different layers of security access or whatever, one

46:51

of the consequences is you bump into a team.

46:54

So the teams we ended up

46:56

working with on this project were

46:58

folks that we bumped into after

47:00

the end of our last investigation

47:02

who were like, oh. We talked

47:04

about last year. Yeah, yeah, yeah.

47:06

Looking at AGI, looking at the

47:08

national security landscape around that. And

47:10

a lot of them are like

47:12

really well placed. It was like,

47:14

you know, special forces guys from

47:16

tier one units. So you'll seal

47:18

team six type thing. And because

47:20

they're so like in that ecosystem, you

47:23

you'll see people who are like

47:25

ridiculously specialized and competent, like the best

47:27

people in the world at doing

47:29

whatever the thing is, like to break

47:31

the security. And they don't know

47:33

often about like another group of guys

47:35

who have a completely different capability

47:37

set. And so. What you find is

47:39

like you're you're indexing like hard

47:41

on this vulnerability and then suddenly someone

47:43

says Oh, yeah, but by the

47:45

way, I can just hop that fence.

47:48

So really funny the really funny

47:50

thing about this is like Most or

47:52

even like almost all of the

47:54

really really like elite security people kind

47:56

of think that like all the

47:58

other security people are dumbasses even when

48:00

they're not Or like, yeah, they're

48:02

biased in the direction of, because it's

48:04

so easy when everything's like stove

48:06

piped. But so most people who say

48:08

they're like elite at security actually

48:10

are dumbasses. Because most security

48:12

is like about checking boxes and

48:14

like SOC2 compliance and shit like

48:16

that. But yeah, what it is

48:18

is it's like, so everything's so

48:21

stove piped. Yeah. you don't, you

48:23

literally can't know what the exquisite state of

48:25

the art is in another domain. So it's a

48:27

lot easier for somebody to come up and

48:29

be like, Oh yeah, like I'm actually really good

48:31

at this other thing that you don't know.

48:33

And so figuring out who actually is the, like

48:35

we had this experience over and over where

48:37

like, you know, you run into a team and

48:39

then you run into another team, they have

48:41

an interaction. You're kind of like, Oh, interesting. So

48:43

like, you know, like these are the really

48:45

kind of the people at the top of their

48:47

game. And that's been this very long process

48:49

to figure out like, okay, what does it take

48:51

to actually secure our critical infrastructure against like

48:53

CCP, for example, like Chinese attacks,

48:55

if we're if we're building a super

48:57

intelligence project. And it's it's this weird

48:59

like kind of challenge because of the

49:01

stove piping, no one has the full

49:03

picture. And we don't think that we

49:06

have it even now, but definitely Don't

49:08

know of anyone who's come like that

49:10

like this close to it The best people

49:13

are the ones who when they when

49:15

they encounter another team and and other ideas

49:17

and start to engage with it Or

49:19

like instead of being like oh like you

49:21

don't know you're talking about who just

49:23

like actually lock on and go like That's

49:25

fucking interesting. Tell me more about that

49:27

right people that have control of their ego.

49:30

Yes 100 % with everything the best of

49:32

the best the best of the best

49:34

like got there by Eliminating

49:37

their ego as much as they could

49:39

yeah always the way it is yeah, and

49:41

it's it's also like the the fact

49:43

of you know the 30 layers of the

49:45

stack or whatever it is of all

49:47

these security issues Means that no one can

49:49

have the complete picture at any one

49:51

time and the stack is changing all the

49:53

time people are inventing new shit people

49:55

things are falling in and out of And

49:57

and so you know figuring out what

49:59

is that team that can actually get you

50:02

that complete picture? is an

50:04

exercise, A, you can't really do, it's hard

50:06

to do it from the government side

50:08

because you got to engage with data center

50:10

building companies. You got to engage with

50:12

the AI labs and in particular with insiders

50:14

at the labs who will tell you

50:16

things that by the way the lab leadership

50:18

will tell you the opposite of in

50:20

some cases. And so it's

50:22

just this Gordian knot. It took

50:24

us months to pin down every

50:26

dimension that we think we've pinned

50:28

down. I'll give an example actually

50:30

of that. trying to

50:32

do the handshake, right, between different sets

50:35

of people. So we were

50:37

talking to one person who's thinking

50:39

hard about data center security, working

50:41

with like frontier labs on this

50:43

shit, very much like at the

50:45

top of her game, but she's

50:47

kind of from like the academic

50:49

space, kind of Berkeley, like the

50:51

avocado toast kind of side of

50:53

the spectrum, you know? And

50:56

she's... talking to us, she'd reviewed

50:58

the report we put out, the investigation

51:00

we put out. And she's like,

51:02

you know, I think you guys are

51:04

talking to the wrong people. And

51:06

we're like, can you say more about that? And

51:09

she's like, well, I don't think like, you

51:11

know, you talked to tier one special forces. I

51:13

don't think they like know much about that. We're

51:15

like, okay, that's not correct.

51:17

But can you say why? And she's like,

51:19

I feel like those are just the

51:21

people that like go and like bomb stuff.

51:24

it up. It's understandable, too, because a

51:26

lot of people have the wrong

51:28

sense of what a tier one asset

51:30

actually can do. Well, that's ego

51:32

on her part, because she doesn't understand

51:35

what they do. It's ego all

51:37

the way down, right? But

51:39

that's just dumb thing to say. If you literally don't

51:41

know what they do, and you say, don't they just

51:43

blow stuff up? Where's my

51:45

latte? It's a weirdly good impression. She

51:47

did ask about a latte. She did.

51:49

Did she talk in upspeak? You should

51:51

fire everyone who talks in upspeak. She

51:53

didn't talk in upspeak. The moment they

51:55

do that, you should just tell them

51:57

to leave. There's no way

51:59

of an original thought. This

52:01

is how you talk. China, can you

52:04

get out of our data center? Yeah, please.

52:08

I don't want to rip on on

52:10

that too much though because this is

52:12

the one really important factor here is

52:14

all these groups have a part of

52:17

the puzzle and they're all fucking amazed

52:19

they are like world -class at their own

52:21

little slice and a big part of

52:23

what we've had to do is like

52:25

bring people together and and there are

52:27

people who've helped us immeasurably do this

52:29

but like bring people together and and

52:31

explain to them the value that each

52:33

other has in a way that's like

52:35

Um that that allows that that bridge

52:37

building to be made and by the

52:39

way the the the tier one guys

52:42

are the the most like ego moderated

52:44

of the people that we talk to.

52:46

There's a lot of Silicon Valley hubris

52:48

going around right now where people are

52:50

like, listen, get out of our way.

52:52

We'll figure out how to do this

52:54

super secure data center infrastructure. We got

52:56

this. Why? Because we're the guys building

52:58

the AGI, motherfucker. That's kind

53:00

of the attitude. And it's cool,

53:02

man. That's like a doctor having an

53:04

opinion about how to repair your

53:07

car. I get that it's not the

53:09

elite kind of whatever, but

53:11

someone has to help you build a

53:14

good friggin fence? Like, I mean, it's not

53:16

just that. a

53:18

mixed bag, too, because, like, yes,

53:21

a lot of the hyperscalers,

53:24

like Google, Amazon, genuinely

53:26

do have some of the

53:28

best private sector security around

53:30

data centers in the world,

53:32

like hands down. The problem

53:34

is, there's levels above that.

53:36

And the guys who, like,

53:39

Look at what they're doing and see

53:41

what the holes are just go like

53:43

oh, yeah like I could get in

53:45

there No problem, and they can fucking

53:47

do it one thing my wife said

53:49

to me on a couple of occasions

53:51

like You seem to like and this

53:53

is towards the beginning of the project

53:55

you seem to like change your mind

53:57

a lot about what the right Configuration

53:59

is of how to do this and

54:01

yeah, it's because every other day you're

54:03

having a conversation with somebody's like Yeah,

54:05

like great job on on this thing,

54:07

but like I'm not gonna do that

54:09

I'm gonna do this other completely different

54:11

thing and that just fucks everything over

54:13

and so you have enough of those

54:15

conversations and at a certain point your

54:17

your plan your your game plan on

54:19

this Can no longer look like we're

54:22

gonna build a perfect fortress. It's got

54:24

to look like We're going to account

54:26

for our own uncertainty on the security

54:28

side and the fact that we're never

54:30

gonna be able to patch everything Like

54:32

you have to I mean, it's like

54:34

and that means You actually have to

54:36

go on offense from the beginning as

54:38

because like the truth is and this

54:40

came up over and over again There's

54:42

no world where you're ever gonna build

54:44

the perfect exquisite fortress around all your

54:46

shit and hide behind your walls like

54:48

this forever That just doesn't work because

54:50

no matter how perfect your system is

54:52

and how many angles you've covered Like

54:54

your, your adversary is super smart, is super

54:56

dedicated. If you see the field to them,

54:58

they're right up in your face and they're

55:00

reaching out and touching you and they're trying

55:02

to see like what, what your seams are,

55:05

where they break. And that just means you

55:07

have to reach out and touch them from

55:09

the beginning. Cause until you've actually like reached

55:11

out and used a capability and proved like

55:13

we can take down that infrastructure, we can

55:15

like disrupt that, that cyber operation. We can

55:17

do this. We can do that. You don't

55:19

know. if that capability is real or not.

55:22

Like you might just be like lying to

55:24

yourself and like, I can do this thing

55:26

whenever I want, but actually. You're kind of

55:28

more in academia mode than like starting mode

55:30

because you're not making contact every day with

55:32

the thing, right? You have to touch the

55:34

thing. And there's like, there's a related issue

55:36

here, which is a kind of like willingness

55:39

that came up over and over again. Like

55:41

one of the kind of gurus of this

55:43

space was like, made the point, a couple

55:45

of them made the point that. You

55:48

know you can have the

55:50

most exquisite capability in the world

55:52

But if you if you

55:54

don't actually have the willingness to

55:56

use it you might as

55:58

well not have that capability and

56:00

the challenges right now China

56:02

rush like our adversaries Pull all

56:05

kinds of stunts on us

56:07

and get no consequence particularly during

56:09

the previous administration. This was

56:11

a huge huge problem during the

56:13

previous administration where you actually

56:15

you actually had Sabotage operations being

56:17

done on American soil by

56:19

our adversaries where you had administration

56:21

officials. As soon as like

56:24

a thing happened, so there were,

56:26

for example, there was like four different

56:28

states had their 911 systems go

56:30

down, like at the same time, different

56:32

systems, like unrelated stuff. But it

56:34

was like, it's this stuff where it's

56:36

like, let me see if

56:38

I can do that. Let me see

56:41

if I can do it. Let me

56:43

see what the reaction is. Let me

56:45

see what the chatter is that comes

56:47

back after I do that. And one

56:49

of the things that was actually pretty

56:51

disturbing about that was under that administration

56:53

or regime or whatever, the response

56:56

you got from the government right out the gate

56:58

was, oh, it's an accident. And

57:00

that's actually unusual. The proper

57:02

procedure, the normal procedure in this case is

57:04

to say, we can't comment on an ongoing

57:06

investigation, which we've all heard, right? Like, we

57:08

can't comment on blah. We can neither confirm

57:10

nor deny. Exactly. It's all that stuff. And

57:12

that's what they say typically out the gate

57:14

when they're investigating stuff. But instead, coming out

57:16

and saying, oh, it's just an accident is

57:18

a break with procedure. What do you attribute

57:20

that to? If

57:22

they say,

57:26

if they leave an opening or say, actually,

57:28

this is an adversary action, we think

57:30

it's an adversary action, they

57:32

have to respond. The public. Demands

57:34

a response and they don't

57:36

there they were a fear

57:38

of escalation fearful so escalate so what ends

57:40

up happening right is and by the way

57:42

that that thing about like it's an accident

57:44

comes out often Before there would have been

57:46

time for investigators to physically fly on site

57:48

and take a look like there's no logical

57:50

way that you could even know that at

57:52

the time and they're like boom that's an

57:55

accident don't worry about it so they have

57:57

an official answer and then their Responses to

57:59

just bury their head in the sand and

58:01

not investigate right because if you were to

58:03

investigate if you were to say okay We

58:05

looked into this it actually looks like it's

58:07

fucking like country X that just did this

58:09

thing right if that's the conclusion It's

58:11

hard to imagine the American people not being

58:13

like, we're letting these people

58:16

injure our American citizens on

58:18

US soil, take out US

58:20

national security, or critical infrastructure,

58:22

and we're not doing anything.

58:25

The concern is about this, we're getting in

58:27

our own way of thinking, oh, well, escalation

58:29

is going to happen, and boom, we run

58:31

straight to, there's going to be a nuclear

58:33

war, everybody's going to die. When

58:35

you do that, The peace

58:38

between nations stability does not come

58:40

from the absence of activity it comes

58:42

from consequence It comes from just

58:44

like if you have you know a

58:46

an individual who misbehaves in society

58:48

There's a consequence and people know it's

58:50

coming you need to train your

58:52

counterparts in the international community. You're at

58:54

your adversary To not fuck with

58:57

your stuff. Can I stop for a

58:59

second when so are you essentially

59:01

saying that if you have Incredible

59:03

capabilities of disrupting grids and power systems and

59:05

infrastructure You wouldn't necessarily do it, but you might

59:08

try it to make sure it works a

59:10

little And that this is probably the hints of

59:12

some of this stuff because you've kind of

59:14

You got to get your reps in right you

59:16

got to get your reps in it's like

59:18

it's okay So suppose that like that I went

59:20

to you and was like hey I bet

59:22

I can kick your ass like I bet I

59:24

can like friggin slap a rubber guard on

59:27

you and like do whatever the fuck right And

59:29

you're like your expression by the way. Yeah.

59:31

Yeah, you look really convinced It's cuz I'm jacked

59:33

right well. No, there's people that look like

59:35

you that can strangle me believe it or not

59:38

Oh, yeah, there's a lot of like very

59:40

high -level Brazilian jujitsu black belts that are

59:42

just super nerds And they don't lift

59:44

weights at all. They only do jujitsu and

59:46

if you only do jujitsu you'll have

59:48

like a wiry body. That was heartless They

59:50

just slip that in like there's like

59:53

two guys who look like you it's like

59:55

just fucking intelligent You know they're like

59:57

some of the most brilliant people I've ever

59:59

met the really that's the issue is

1:00:01

like Data nerds get really involved

1:00:03

in Jiu Jitsu and Jiu Jitsu's data. But

1:00:05

here's the thing. So that's exactly it, right?

1:00:07

So if I told you, I bet I

1:00:09

can tap you out, right? I'm

1:00:12

like, where have you been training? Well, right. But

1:00:14

if you're like, oh, my answer was, oh, I've

1:00:16

just read a bunch of books. Oh.

1:00:18

You'd be like, oh, cool, let's go. Right?

1:00:20

Because making contact with reality is where

1:00:22

the fucking learning happens. You can

1:00:25

sit there and think all you want. Right.

1:00:27

But unless you've actually played the chess

1:00:29

match, unless you've reached out, touched, seen what

1:00:31

the reaction is, stuff, you don't actually

1:00:33

know what you think you know. And that's

1:00:35

actually extra dangerous. If you're sitting on

1:00:37

a bunch of capabilities and you have this

1:00:39

like unearned sense of superiority, because you

1:00:41

haven't used those exquisite tools, like it's a

1:00:43

challenge. And then you've got people that

1:00:46

are head of department, CEOs of corporations, everyone

1:00:48

has an ego. We've

1:00:50

got it. Yeah. And this ties into

1:00:52

like how exactly how. Basically the

1:00:54

international order and quasi stability actually gets

1:00:56

maintained So there's like above threshold

1:00:58

stuff, which is like you actually do

1:01:00

wars for borders and you know

1:01:02

Well, there's the potential for nuclear exchange

1:01:04

or whatever like that's like all

1:01:06

stuff that can't be hidden right war

1:01:08

games Exactly like all the war

1:01:10

games type shit, but then there's below

1:01:12

threshold stuff the stuff that's like

1:01:14

you're it's it's always like the stuff

1:01:16

that's like Hey, I'm gonna try

1:01:18

to like poke you are you gonna

1:01:20

react? What are you gonna do?

1:01:22

And then if if you do nothing

1:01:24

here, then I go like, okay

1:01:27

the next level, I can poke you,

1:01:29

I can poke you. Because like,

1:01:31

one the things that we almost have

1:01:33

an intuition for that's mistaken, that

1:01:35

comes from kind of historical experience, is

1:01:37

like this idea that, you know,

1:01:39

that countries can actually really defend their

1:01:41

citizens in a meaningful way. So

1:01:43

like, if you think back to World

1:01:45

War One, the most sophisticated advanced

1:01:47

nation states on the planet could not

1:01:49

get past a line of dudes

1:01:51

in a trench. That

1:01:53

was like, that was them. Then they

1:01:55

tried like thing after thing. Let's try tanks. Let's

1:01:57

try aircraft. Let's try fucking hot air balloons infiltration.

1:01:59

And literally like the one side pretty much just

1:02:02

ran out of dudes in that end of the

1:02:04

war to put in their trench. And

1:02:06

so we have this thought that

1:02:08

like, oh, you know, countries can actually

1:02:10

put up, put boundaries around themselves

1:02:12

and actually, but the reality is you

1:02:14

can There's so many

1:02:16

surfaces. The surface area for attacks is

1:02:19

just too great. And so there's

1:02:21

stuff like you can actually, like

1:02:23

there's the Havana syndrome stuff where

1:02:25

you look at this like ratcheting

1:02:27

escalation, like, oh, let's like fry

1:02:29

a couple of embassy staff's brains

1:02:31

in Havana, Cuba. What are

1:02:33

they going to do about it? Nothing? Okay. Let's

1:02:35

move on to Vienna, Austria, something a little

1:02:37

bit more Western, a little bit more orderly. Let's

1:02:39

see what they do there. Still nothing. Okay. What

1:02:42

if we move on to frying

1:02:44

like Americans brains on US soil, baby?

1:02:47

And they went and did that. And

1:02:49

so this is one of these

1:02:51

things where like stability in reality in

1:02:53

the world is not maintained through

1:02:55

defense, but it's literally like you have

1:02:57

like the Crips and the Bloods

1:02:59

with different territories and it's stable and

1:03:01

it looks quiet. But the reason

1:03:03

is that if you like beat the

1:03:05

shit out of one of my

1:03:07

guys for no good reason, I'm just

1:03:09

going to find one of your

1:03:11

guys. And I'll blow his fucking head

1:03:13

off and that keeps peace and

1:03:15

stability on the surface. But that's the

1:03:17

reality of sub threshold competition between

1:03:19

nation states. It's like you come in

1:03:21

and like fuck with my boys.

1:03:23

I'm going to fuck with your boys

1:03:25

right back until we push back. They're

1:03:28

going to keep pushing that limit

1:03:30

further and further. One important consequence of

1:03:32

that, too, is like if you

1:03:34

want to avoid nuclear escalation, right, the

1:03:36

answer is not to just take.

1:03:39

punches in the mouth over and over

1:03:41

in the fear that eventually if

1:03:43

you do anything you're gonna escalate to

1:03:45

nukes. All that does is it

1:03:47

empowers the adversary to keep driving up

1:03:49

the ratchet. Like what Ed's just

1:03:51

described there is an increasing ratchet of

1:03:54

unresponded adversary action. If you, if

1:03:56

you address the low, the kind of

1:03:58

sub threshold stuff, if they cut

1:04:00

an undersea cable and then there's a

1:04:02

consequence for that shit, they're less

1:04:04

likely to cut an undersea cable and

1:04:07

things kind of stay at that

1:04:09

level of the threshold, you know, and

1:04:11

so, so this letting them burn

1:04:13

out. Yeah, exactly that logic of just

1:04:15

like let them do it They'll

1:04:17

they'll stop doing it after a while

1:04:19

get it out of their system

1:04:22

during the George Floyd riots remember that's

1:04:24

what New York City did like

1:04:26

this let them Let's just see how

1:04:28

big Chas gets Yeah, yeah, exactly

1:04:30

the translation into like the the superintelligence

1:04:32

scenario is A,

1:04:34

if we don't have our reps in,

1:04:37

if we don't know how to reach

1:04:39

out and touch an adversary and induce

1:04:41

consequence for them doing the same to

1:04:43

us, then we have no deterrence at

1:04:45

all. Like we were basically just sitting,

1:04:47

right now, the state of security is,

1:04:49

the labs are like super, and like

1:04:51

we, Canon probably should

1:04:53

go deep on that piece, but

1:04:55

like as one data point, right?

1:04:57

So there's... double -digit percentages of

1:04:59

the world's top AI labs or

1:05:01

America's top AI labs Of employees

1:05:03

of employees that are like Chinese

1:05:06

nationals or have ties to the

1:05:08

Chinese mainland, right? So that's that's

1:05:10

great. Why don't we build a

1:05:12

man? It's really funny, right? But

1:05:15

it's it's also like it's

1:05:17

the the challenge is when you

1:05:20

talk to people who actually

1:05:22

Geez when you talk to people

1:05:24

actually have experience of dealing

1:05:26

with like CCP activity in this

1:05:28

space, right? Like there's one

1:05:30

story that we heard that is

1:05:32

probably worth like relaying here

1:05:34

is like this guy from from

1:05:36

an intelligence agency was saying

1:05:38

like Hey, so there was this

1:05:40

power outage out in Berkeley,

1:05:42

California back in like 2019 or

1:05:44

something and the internet goes

1:05:46

out across the whole campus and

1:05:48

so there's this dorm and

1:05:50

like all of the Chinese students

1:05:52

are freaking out because they

1:05:54

have an obligation to do a

1:05:56

time -based check -in and basically

1:05:58

report back on Everything they've seen

1:06:01

and heard to basically a

1:06:03

CCP handler type thing, right? And

1:06:05

if they don't like maybe

1:06:07

your mother's insulin doesn't show up

1:06:09

Maybe your like brothers travel

1:06:11

plans get denied. Maybe the family

1:06:13

business gets shut down like

1:06:15

there's the range of options that

1:06:17

this massive CCP state coercion

1:06:19

machine has this is like They've

1:06:21

got internal software for this.

1:06:23

This is an institutionalized, very well

1:06:25

-developed and efficient framework for just

1:06:27

ratcheting up pressure on individuals

1:06:29

overseas, and they believe the

1:06:31

Chinese diaspora overseas belongs to

1:06:34

them. If you look at what

1:06:36

the Chinese Communist Party writes

1:06:38

in its written public communications, They

1:06:40

see like Chinese ethnicity as being a

1:06:42

green like is it like no one is

1:06:44

a bigger victim of this than the

1:06:46

Chinese people themselves who are abroad who Made

1:06:48

amazing contributions to American AI innovation. You just

1:06:51

have to look at the names on the

1:06:53

friggin papers It's like these guys are wicked

1:06:55

But the problem is we also have to

1:06:57

look head -on at this reality like you

1:06:59

can't just be like oh I'm not gonna

1:07:01

say it because it makes me feel funny

1:07:03

inside Someone has to stand up and point

1:07:05

out the obvious that if you're gonna build

1:07:07

a fucking Manhattan project for super intelligence

1:07:09

and the idea is to like be doing

1:07:12

that when China is a Key rival nation -state

1:07:14

actor. Yeah, you're gonna have to find a

1:07:16

way to account for the personnel security

1:07:18

side like at some point Someone's gonna have

1:07:20

to do something about that and it's like

1:07:22

you can see they're they're they're hitting us

1:07:24

right where we're weak right like America is

1:07:26

the place where you come and you remake

1:07:28

yourself like send us your tire and you're

1:07:31

you're hungry and you're poor and It's

1:07:33

true and important. It's true and important They're

1:07:35

playing right off of that because they know

1:07:37

that we don't just don't want to look

1:07:39

at that problem. Yeah And Chinese nationals working

1:07:41

on these things is just bananas. The fact

1:07:43

they have to check on the CCP. Yeah.

1:07:46

And are they being monitored? I

1:07:48

mean, how much can you monitor

1:07:50

them? What do you know that

1:07:52

they have? What equipment have they

1:07:54

been given? You can't constitutionally, right?

1:07:56

Yeah, the best part is constitutionally.

1:07:58

It's also, you can't legally deny

1:08:00

someone employment on that basis in

1:08:03

a private company. That's,

1:08:05

and that's something else we, we

1:08:08

found and we're kind of amazed by.

1:08:10

Um, and even honestly, just like

1:08:12

the, the regular kind of government clearance

1:08:14

process itself is inadequate. It moves,

1:08:16

moves way too slowly and it doesn't

1:08:18

actually even, even in the government,

1:08:20

we were talking about top secret clearances.

1:08:22

The information that they like look

1:08:25

at for top secret, we heard from

1:08:27

a couple of people, doesn't include

1:08:29

a lot of like key sources. So

1:08:31

for example, it doesn't include like.

1:08:33

foreign language sources. So if the head

1:08:35

of the Ministry of State Security

1:08:37

in China writes a blog post that

1:08:39

says, like, Bob is like the

1:08:42

best spy. He spied so hard for

1:08:44

us, and he's like an awesome

1:08:46

spy. If that blog post

1:08:48

is written in Chinese, we're not going

1:08:50

to see it. And we're going to be like, here's

1:08:52

your clearance, Bob. Congratulations. And

1:08:54

we were like this, that can't

1:08:56

possibly be real, but like... they're

1:08:59

like, yep, that's true. No one's

1:09:01

looking. It's complete naivety. There's gaps

1:09:03

in every level of the stack.

1:09:05

One of the worst things here

1:09:07

is the physical infrastructure. So the

1:09:09

personnel thing is fucked up. The

1:09:11

physical infrastructure thing is another area

1:09:14

where people don't want to look.

1:09:16

Because if you start looking, what

1:09:18

you start to realize is, okay,

1:09:20

China makes a lot of our

1:09:22

components for our transformers for the

1:09:24

electrical grid. But also, All

1:09:26

these chips that are going into our

1:09:29

big data centers for these massive training

1:09:31

runs, where do they come from? They

1:09:33

come from Taiwan. They come from

1:09:35

this company called TSMC, Taiwan Semiconductor Manufacturing

1:09:37

Company. We're increasingly on shoring

1:09:39

that, by the way, which is one of

1:09:41

the best things that's been happening lately is

1:09:43

like massive amounts of TSMC capacity getting on

1:09:45

short in the US, but still being made.

1:09:47

Right now, it's basically like 100 % there. All

1:09:51

you have to do is jump

1:09:53

on the network at TSMC, hack the

1:09:55

right network, Compromise the

1:09:57

firmware on the is the software

1:09:59

that runs on these chips to anyway

1:10:01

to get them to run and

1:10:03

You basically can compromise all the chips

1:10:05

going into all of these things

1:10:07

never mind the fact that like Taiwan

1:10:09

is like like Set like physically

1:10:11

outside the Chinese sphere of influence for

1:10:13

now China is going to be

1:10:15

prioritizing the fuck out of getting access

1:10:17

to that there've been cases by

1:10:20

the way like Richard Chang like the

1:10:22

founder of SMIC which is the

1:10:24

sir so so okay TSMC

1:10:26

this massive like a series of

1:10:28

area aircraft carrier fabrication facilities. They

1:10:30

do like all the iPhone chips.

1:10:32

Yeah, they do. Yeah, they do

1:10:34

they do the the AI chips,

1:10:36

which are the the things we

1:10:39

care about. Yeah, they're the only

1:10:41

place on planet Earth that does

1:10:43

this. It's literally like it's fascinating.

1:10:45

It's like the most easily the

1:10:47

most advanced manufacturing or scientific process

1:10:49

that primates on planet Earth. can

1:10:51

do is this this chip making

1:10:53

process. Nanoscale like material science where

1:10:56

you're you're putting on like these these

1:10:58

tiny like atom thick layers of stuff

1:11:00

and you're doing like 300 of them

1:11:02

in a row with like you you

1:11:04

have like insulators and conductors and different

1:11:06

kinds of like semiconductors in these tunnels

1:11:08

and shit just just like the the

1:11:11

complexity of it is just awe -inspiring

1:11:13

that we can do this at all

1:11:15

is like It's magic. It's magic. And

1:11:17

it's really only been done being done

1:11:19

in Taiwan. That is the only place

1:11:21

like only the only place right now.

1:11:24

And so a Chinese invasion of Taiwan

1:11:26

starts looks pretty interesting through that lens,

1:11:28

right? Like yeah, say goodbye to the

1:11:30

iPhone say goodbye to like the the

1:11:32

chip supply that we rely on and

1:11:34

then your super intelligence training run like

1:11:36

damn that's interesting. No Samsung was trying

1:11:39

to develop a lab here or a

1:11:41

semiconductor factory here and they weren't having

1:11:43

enough success. Oh, so okay. So

1:11:45

one one of the crazy to illustrate how

1:11:47

hard it is to do. So

1:11:49

you spend $50 billion. Again, an

1:11:51

aircraft carrier, we're throwing that around here and there,

1:11:53

but an aircraft carrier worth of risk capital.

1:11:56

What does that mean? That means you build the

1:11:58

fab, the factory, and it's not

1:12:00

guaranteed it's gonna work. At first,

1:12:02

this factory is pumping out these

1:12:04

chips at like... that are

1:12:06

really low in other words like the only

1:12:08

like you know 20 % of the chips that

1:12:10

they're putting out are even useful And that just

1:12:12

makes it totally economically unviable. So you're just

1:12:14

trying to increase that yield desperately crime climb up

1:12:16

higher and higher Intel famously

1:12:18

found this so hard that they have this

1:12:20

philosophy where when they build a new fab

1:12:22

If the philosophy is called copy exactly everything

1:12:24

down to the color of the paint on

1:12:26

the walls in the bathroom is copied from

1:12:29

other fabs that actually worked because they have

1:12:31

no idea Yeah, why a fucking fab works

1:12:33

in another one doesn't got we got this

1:12:35

to work. We got this to work It's

1:12:37

like oh my god. We got this to

1:12:39

work. I can't believe we got this to

1:12:41

work So we have to make it exactly

1:12:43

identical because the expensive thing in the

1:12:45

semiconductor manufacturing process is

1:12:47

the learning curve. So

1:12:50

like Jerry said, you start

1:12:52

by like putting through a whole bunch of

1:12:54

like the starting material for the chips, which are

1:12:56

called wafers. You put them through your fab. The

1:12:59

fab has got like 500 dials on it. And

1:13:01

every one of those dials has got to be

1:13:03

in the exact right place or the whole fucking

1:13:05

thing doesn't work. So you send a bunch of

1:13:07

wafers in at great expense. They come out all

1:13:09

fucked up in the first run. It's just like

1:13:11

it's going to be all fucked up in the

1:13:13

first run. Then what do you do? You

1:13:16

get a bunch of like

1:13:18

PhDs, material scientists, like engineers

1:13:20

with scanning electron microscopes because

1:13:22

all this shit is like

1:13:24

atomic scale tiny. They

1:13:26

look like all the chips and all the stuff

1:13:28

that's gone wrong. Like, oh shit, these pathways got

1:13:30

fused or whatever. Like, yeah, you

1:13:32

just need that level of expertise. I

1:13:34

mean, it's a mix, right? a

1:13:37

mix now in particular. But yeah, you absolutely

1:13:39

need humans looking at these things at a certain

1:13:41

level. And then they go, well, OK, I've

1:13:44

got a hypothesis about what might have gone

1:13:46

wrong in that run. Let's tweak this dial

1:13:48

like this and this dial like that and

1:13:50

run the whole thing again. And you hear

1:13:52

these stories about. I'm bringing a

1:13:54

fab online like you need you need

1:13:56

you need a certain percentage of good

1:13:58

chips coming out the other end or

1:14:01

like you can't make money from the

1:14:03

fab because most of your shit is

1:14:05

just going right into the garbage unless

1:14:07

and this is important to your fab

1:14:09

is state subsidized so when you look

1:14:11

at so TSMC is like they're they're

1:14:13

alone in the world in terms of

1:14:15

being able to pump out these chips

1:14:17

but SMIC This is the Chinese knockoff

1:14:20

of TSMC, founded by the way, by

1:14:22

a former senior TSMC executive, Richard

1:14:24

Chung, who leaves along with a

1:14:26

bunch of other people with a bunch

1:14:28

of fucking secrets. They get sued

1:14:30

like in the early 2000s. It's pretty

1:14:32

obvious what happened there, like... To

1:14:34

most people, they're like, yeah, SMIC fucking

1:14:36

stole that shit. They bring a

1:14:38

new fab online in like a year

1:14:40

or two, which is suspiciously fast.

1:14:42

Start pumping out chips. And now the

1:14:44

Chinese ecosystem is ratcheting up like

1:14:46

the government is pouring money into SMIC

1:14:48

because they know that... can't access

1:14:50

TSMC chips anymore because the US governments

1:14:52

put pressure on Taiwan to block

1:14:54

that off and so the domestic fab

1:14:56

in China is all about SMIC

1:14:59

and they are like it's a disgusting

1:15:01

amount of money They're putting in

1:15:03

they're teaming up with Huawei to form

1:15:05

like this complex of companies that

1:15:07

It's really interesting. I mean, the semiconductor

1:15:09

industry in China in particular is

1:15:11

really, really interesting. It's also a

1:15:13

massive story of like self -owns of the

1:15:15

United States and the Western world where

1:15:17

we've been just shipping a lot of a

1:15:19

lot of our shit to them for

1:15:21

a long time. Like the equipment that builds

1:15:24

the chips. So like and it's also

1:15:26

like it's so blatant and like they're just

1:15:28

honestly a lot of the stuff is

1:15:30

just like they're they're just giving us like

1:15:32

a big fuck you. So give you

1:15:34

a really blatant example. So

1:15:37

we have the way we set

1:15:39

up export controls still today on most

1:15:41

equipment that these semiconductor fabs use,

1:15:43

like the Chinese semiconductor fabs use. We're

1:15:45

still sending them a whole bunch

1:15:47

of shit. The way we set

1:15:50

export controls is instead of like, oh,

1:15:52

we're sending this gear to China and

1:15:54

like now it's in China and we

1:15:56

can't do anything about it. Instead, we

1:15:58

still have this thing where we're like,

1:16:00

no, no, no, this company in China

1:16:02

is cool. That company in China is

1:16:04

not cool. So we can ship to

1:16:06

this company, but we can't ship to

1:16:08

that company. And so you get this

1:16:10

ridiculous shit. Like, for example, there's there's

1:16:12

like a couple of facilities that you

1:16:14

could see by satellite. One of

1:16:16

the facilities is OK to ship equipment

1:16:18

to the other facility right next door is

1:16:20

like considered, you know, military connected or

1:16:22

whatever. And so we can't ship the Chinese

1:16:24

literally built a bridge between the two

1:16:26

facilities. So they just like. shimmy the wafers

1:16:29

over to like oh here we use

1:16:31

equipment and then shimmy it back and now

1:16:33

okay we're done so it's like and

1:16:35

you can see it by satellite so they're

1:16:37

not even like trying to hide it

1:16:39

like our stuff is just like so badly

1:16:41

put together. China's prioritizing this so highly

1:16:43

that like the idea that we're gonna so

1:16:46

we do it by company through this

1:16:48

basically it's like an export blacklist like you

1:16:50

can't send to Huawei you can't send

1:16:52

to any number of other companies that are

1:16:54

considered affiliated with the Chinese military or

1:16:56

where concerned about military applications. Reality is in

1:16:58

China Civil military fusion is their policy

1:17:00

in other words every private company like yeah,

1:17:03

that's cute dude. You're working for yourself

1:17:05

Yeah, no, no, nobody you're working for the

1:17:07

Chinese state. We come in we want

1:17:09

your shit We get your shit. There's no

1:17:11

like there's there's no true kind of

1:17:13

distinction between the two and so when you

1:17:15

have this attitude where you're like Yeah,

1:17:17

you know, we're gonna have some companies where

1:17:19

like you can't send to them But

1:17:22

you know that creates a situation where literally

1:17:24

Huawei will spin up like a dozen subsidiaries

1:17:27

or new companies with new names

1:17:29

that aren't on our blacklist. And

1:17:31

so for months or years, you're able to

1:17:34

just ship chips to them. That's

1:17:36

to say nothing of using intermediaries in

1:17:38

Singapore or other countries, which happens. You wouldn't

1:17:40

believe the number of AI chips that

1:17:42

are shipping to Malaysia. Can't

1:17:44

wait for the latest huge

1:17:46

language model to come out

1:17:48

of Malaysia? And

1:17:51

actually, it's just proxying for the

1:17:53

most part. There's some amount of

1:17:55

stuff actually going on in Malaysia,

1:17:57

but for the most part it's...

1:17:59

How can the United States compete?

1:18:01

If you're thinking about all these

1:18:03

different factors, you're thinking about espionage,

1:18:05

people that are students from CCP

1:18:07

connected. Contacting, you're talking

1:18:10

about all the different network

1:18:12

equipment that has third -party

1:18:14

input. You could siphon off

1:18:16

data. And then on

1:18:18

top of that, state -funded,

1:18:20

everything is encouraged by the

1:18:22

state, inexorably connected. You

1:18:24

can't get away from it. You

1:18:26

do what's best for the Chinese

1:18:28

government. Well, so step one is

1:18:30

you got it. So you got to stem

1:18:32

the bleeding, right? So right now, opening eye pumps

1:18:34

out a new massive scaled AI model. Um,

1:18:37

you better believe that like the CCP has

1:18:39

a really good chance that they're going to get

1:18:41

their hands on that, right? So if you,

1:18:43

all you do right now is you ratchet up

1:18:45

capabilities. It's like that, that meme of like

1:18:47

there's a, you know, a motorboat or something and

1:18:49

some guy who's like, uh, surfing behind and

1:18:51

there's a string attaching them and the motorboat

1:18:53

guy goes like, hurry up, like accelerate. They're, they're

1:18:56

catching up. That's kind of what's, what's happening

1:18:58

right now is we're We're helping them accelerate. We're

1:19:00

pulling them along, basically. Yeah, pulling them along. Now,

1:19:03

I will say, over the last six

1:19:05

months, especially, where our focus has shifted is,

1:19:07

how do we actually build the secure

1:19:09

data center? What does it look like to

1:19:11

actually lock this down? And

1:19:13

also, crucially, you don't want

1:19:15

the security measures to be so irritating

1:19:17

and invasive that they slow down the

1:19:19

progress. There's this kind of dance that

1:19:21

you have to do. So

1:19:24

this is part of what was in the redacted

1:19:26

version of the report. We don't

1:19:28

want to telegraph that necessarily, but

1:19:31

there are ways that you can get

1:19:33

a really good 80 -20. There

1:19:35

are ways that you

1:19:37

can play with things that

1:19:39

are already built and

1:19:41

have a lower risk of

1:19:44

them having been compromised. Look,

1:19:47

a lot of the stuff as

1:19:49

well that we're talking about, big

1:19:51

problems around China, a lot of

1:19:53

this is us just like... over

1:19:55

our own feet and self -owning

1:19:57

ourselves. Because the reality is, yeah,

1:20:00

the Chinese are trying to indigenize as

1:20:02

fast as they can, totally true, but

1:20:04

the gear that they're putting in their

1:20:06

facilities, the machines that actually

1:20:08

do this, we talked about atomic

1:20:10

patterning 300 layers, the machines that

1:20:12

do that, for the most part,

1:20:14

are are shipped in from the

1:20:16

west are shipped in from the

1:20:18

Netherlands shipped in from Japan from

1:20:20

us from like allied countries and

1:20:22

The the reason that's happening is

1:20:25

like the in in many cases

1:20:27

You'll you'll have this on it's

1:20:29

like I honestly a little disgusting

1:20:31

but like the CEOs and executives

1:20:33

of these companies will brief the

1:20:35

administration officials and say, look, if

1:20:37

you guys cut us off from selling

1:20:40

to China, our business going to suffer, American

1:20:42

jobs are going to suffer, and it's to be really

1:20:44

bad. And then a few weeks later, they

1:20:46

turn around and they're earnings calls. And

1:20:48

they go, yeah, so we

1:20:50

expect export controls or whatever, but it's really

1:20:52

not going to have a big impact

1:20:54

on us. And the really fucked up part

1:20:56

is... If they lie to their shareholders

1:20:58

on their earnings calls and their stock price

1:21:00

goes down, their shareholders can sue them. If

1:21:03

they lie to the administration on

1:21:05

an issue of critical national security

1:21:08

interest, fuck all happens to them.

1:21:10

Wow. It's

1:21:12

great incentives. And this is by the way,

1:21:14

it's like one reason why it's so

1:21:16

important that we not be constrained in our

1:21:18

thinking about like, we're going to build

1:21:20

a Fort Knox. Like this is where the

1:21:22

interactive messy. Adversarial environment

1:21:24

is so so important you you have

1:21:26

to introduce consequence like you have to

1:21:28

create a situation where they perceive that

1:21:31

if they try to do a you

1:21:33

know an espionage operation intelligence operation there

1:21:35

will be consequences that's right now not

1:21:37

happening and so it's just and that's

1:21:39

kind of a historical artifact over like

1:21:41

a lot of time spent hand -wringing over

1:21:43

well what if they and then we

1:21:45

and then eventually nukes and like that

1:21:47

kind of thinking is if you dealt

1:21:49

with your your kid when you're like

1:21:51

when you're raising them if you dealt

1:21:53

with them that way and you were

1:21:55

like hey you know so so little

1:21:57

Timmy just like he stole his first

1:21:59

toy and like now's the time where

1:22:01

you're gonna like a good parent would

1:22:03

be like all right little Timmy fucking

1:22:05

come over here you son of a

1:22:07

bitch take the fucking thing and we're

1:22:09

gonna bring it over to the people stole

1:22:11

from your father make the apology I

1:22:13

love my daughter by the way but

1:22:15

but you're like it is a fake

1:22:17

baby a fake baby hypothetical baby there's

1:22:19

no there's no he's crying right now

1:22:21

anyway So yeah stealing

1:22:23

right now Jesus, shit,

1:22:25

I gotta stop him. But yeah, anyway,

1:22:27

so you go through this thing and

1:22:29

you can do that or you can

1:22:31

be like, oh no, if I tell

1:22:34

Timmy to return it, then maybe Timmy's

1:22:36

gonna hate me. Maybe then Timmy's gonna

1:22:38

become increasingly adversarial and then when he's

1:22:40

in high school, he's gonna start taking

1:22:42

drugs and then eventually he's gonna fall

1:22:44

afoul of the law and then end

1:22:46

up on the street. If that's the

1:22:48

story you're telling yourself and you're terrified

1:22:50

of any kind of adversarial interaction, it's

1:22:52

not even adversarial, it's constructive actually. You're

1:22:54

training. the child, just like you're training

1:22:56

your adversary to respect your national boundaries

1:22:59

and your sovereignty. Those two things

1:23:01

are like, that's what you're up to. It's

1:23:03

human beings all the way down. Jesus.

1:23:07

Yeah, but we can get out of

1:23:09

our own way. Like a lot of

1:23:11

this stuff, like when you look into

1:23:13

it is like us just being in

1:23:15

our own way and a lot of

1:23:17

this comes from the fact that like,

1:23:19

you know, since 1991, since the fall

1:23:21

of the Soviet Union, we have

1:23:23

kind of internalized this attitude that

1:23:26

like well like we just won the

1:23:28

game and like it's our world

1:23:30

and you're living in it and like

1:23:32

we just don't have any peers

1:23:34

that are adversaries. And so there's been

1:23:36

generations of people who just haven't

1:23:38

actually internalized the fact that like no,

1:23:41

there's people out there who not

1:23:43

only like are willing to like fuck

1:23:45

with you all the way. but

1:23:47

who have the capability to do it.

1:23:51

the way, we could if we wanted

1:23:53

to. We could. Absolutely could if we

1:23:55

wanted to. Actually, this is worth calling

1:23:57

out. There's this two camps right now

1:23:59

in the world of AI, kind of

1:24:01

like national security. There's the people

1:24:03

who are worried about, they're

1:24:05

so concerned about the idea that

1:24:07

we might lose control of these systems

1:24:09

that they go, okay, we need

1:24:12

to strike a deal with China. There's

1:24:14

no way out. We have to strike

1:24:16

a deal with China and then they start

1:24:18

spinning up all these theories about how

1:24:21

they're gonna do that None of which remotely

1:24:23

reflect the actual when you talk to

1:24:25

the people who work on this who try

1:24:27

to do track one track 1 .5 track

1:24:29

2 or or more accurately the ones

1:24:31

who do the Intel stuff like this is

1:24:33

a a non starter for reasons we

1:24:35

get into but They have that attitude because

1:24:37

they're like fundamentally. We don't know how

1:24:40

to control this technology. The flip side is

1:24:42

people who go Oh, yeah, like I

1:24:44

you know, I work in the IC or

1:24:46

the State Department and I'm used to

1:24:48

dealing with these guys, you know the Chinese

1:24:50

the Chinese They're not trustworthy forget it.

1:24:52

So our only solution is to figure out

1:24:54

the whole control problem and They almost

1:24:57

like therefore it must be possible to control

1:24:59

the AI systems because like you can't

1:25:01

you just can't see a solution Sorry, you

1:25:03

just can't see a solution in front

1:25:05

of you because you understand that problem so

1:25:07

well and so the everything we've been

1:25:09

doing with this is looking at How can

1:25:11

we actually take both of those realities

1:25:13

seriously? There's no actual reason why those two

1:25:16

things shouldn't be able to exist in

1:25:18

the same head. Yes, China's not trustworthy. Yes,

1:25:20

we actually don't like every piece of

1:25:22

evidence we have right now suggests that like

1:25:24

if you build a super intelligent system

1:25:26

that's vastly smarter than you, I mean Yeah,

1:25:28

like your basic intuition that that sounds

1:25:30

like a hard thing to fucking control is

1:25:32

about right. Like there's no solid evidence

1:25:35

that's conclusive either way where that leaves you

1:25:37

is about 50 -50. So yeah, we ought

1:25:39

to be taking that really fucking seriously

1:25:41

and there's there's evidence pointing in that direction.

1:25:43

But so the question is like if

1:25:45

those two things are true, then what do

1:25:47

you do? And so few people seem

1:25:49

to want to take both of those things

1:25:52

seriously because taking one seriously almost like

1:25:54

reflexively makes you reach for the other when

1:25:56

You know, they're both not there. And

1:25:58

part of the answer here is you got

1:26:00

to do things like reach out to

1:26:02

your adversary. So we have the capacity to

1:26:04

slow down if we wanted to, Chinese

1:26:06

development. We actually could. We need

1:26:08

to have a serious conversation about when and

1:26:10

how. But the fact of that not

1:26:12

being on the table right now for anyone,

1:26:15

because people who don't trust China just

1:26:17

don't think that the AI risk or won't

1:26:19

acknowledge that the issue with control is

1:26:21

real, because that's just. to worry some, and

1:26:23

there's this concern about, oh no, but

1:26:25

then runaway escalation. People who take the loss

1:26:27

control thing seriously just want to have

1:26:29

a kumbaya moment with China, which is never

1:26:31

going to happen. And so the

1:26:33

framework around that is one of

1:26:35

consequence. You got to flex the muscle

1:26:37

and put in the reps and

1:26:39

get ready for potentially if you have

1:26:41

a late -stage Rush to super intelligence.

1:26:43

You want to have it as

1:26:45

much margin as you can so you

1:26:47

can invest in potentially not even

1:26:49

having to make that final leap and

1:26:51

building the super intelligence That's one

1:26:53

option that's on the table if you

1:26:55

can actually degrade the adversaries capabilities

1:26:57

How how would you degrade the adversaries

1:26:59

capabilities the same way well not

1:27:01

exactly the same way they would degrade

1:27:03

ours but think about all the

1:27:05

infrastructure and like this is stuff that

1:27:10

We'll have to point you in the direction

1:27:12

of some people who can walk you

1:27:14

through the details offline, but there are a

1:27:16

lot of ways that you can degrade

1:27:18

infrastructure, adversary infrastructure. A lot of those are

1:27:20

the same techniques they use on us. The

1:27:23

infrastructure for these training runs is super

1:27:25

delicate, right? Like, I mean, you need

1:27:28

to It's at the limit of what's

1:27:30

possible. Yeah. And when stuff is at

1:27:32

the limit of what's possible, then it's,

1:27:34

I mean, to give you an example

1:27:36

that's public, right? Do you remember like

1:27:38

Stuxnet? like the the Iranian. Yeah. So

1:27:40

the thing about Stuxnet was like explain

1:27:42

to people was the nuclear power nuclear

1:27:44

program. So the Iranians had had their

1:27:46

nuclear program in like the 2010s and

1:27:48

they were enriching uranium with their centrifuges

1:27:50

were like spinning really fast. And

1:27:52

the centrifuges were in a room where

1:27:55

there was there's no people but they

1:27:57

were being monitored by cameras. Right. And

1:27:59

so and the whole thing was air

1:28:01

gapped which means that it was not

1:28:03

connected to the internet and all the

1:28:05

the machines the computers that ran the

1:28:07

their shit was was like separate and

1:28:09

separate it. So what happened is somebody

1:28:11

got a memory stick in there somehow

1:28:13

that had this Stuxnet program on it

1:28:15

and put it in and boom, now

1:28:17

all of a sudden it's in their

1:28:19

system. So it jumped the air gap

1:28:21

and now like our side basically has

1:28:23

our software in their systems. And

1:28:26

the thing that it did

1:28:28

was not just that it broke

1:28:30

their center of user shut

1:28:32

down their program, it

1:28:35

spun the centrifuges faster and faster and

1:28:37

faster. The centrifuges that are used to

1:28:39

enrich the uranium. Yeah, these are basically

1:28:41

just like machines that spin uranium super

1:28:43

fast to like to enrich it. They

1:28:45

spin it faster and faster and faster

1:28:47

until they tear themselves apart. But

1:28:50

the really like honestly dope

1:28:52

ass thing that it did was

1:28:54

it put in a camera

1:28:56

feed of everything was normal. So

1:28:58

the guy at the control

1:29:00

is like watching. And he's like

1:29:02

checking the camera feed. It's

1:29:04

like, looks cool, looks fine. In

1:29:06

the meantime, you got this

1:29:08

like explosions going on, like uranium

1:29:10

like blasting everywhere. And so

1:29:13

you can actually get into a

1:29:15

space where you're not just

1:29:17

like fucking with them, but

1:29:19

you're fucking with them and they

1:29:21

actually can't tell. that that's what's happening.

1:29:23

And in fact, the, I believe,

1:29:25

I believe actually, and Jamie might be

1:29:27

able to check this, but that the

1:29:29

Stuxnet thing was designed initially to

1:29:31

look like from top to bottom, like

1:29:33

it was fully accidental. Um, and,

1:29:35

uh, but, but got discovered by, I

1:29:38

think like, I think like a third

1:29:40

party cyber security company that, that just

1:29:42

by accident found out about it. And

1:29:44

so what that means also is like,

1:29:46

there could be any number of other

1:29:48

Stuxnets that happened since then, and we

1:29:50

wouldn't fucking know about it. Because it

1:29:52

all can be made to look like

1:29:54

an accident. Well, that's insane. So but

1:29:56

if we do that to them, they're

1:29:58

gonna do that to us as well.

1:30:00

Yep. And so is like mutually just

1:30:02

assured technology destruction? Well, so if we

1:30:04

can reach parity in our ability to

1:30:06

intercede and and kind of go in

1:30:08

and and do this, then yes, right

1:30:11

now the problem is they hold us

1:30:13

at risk in a way that we

1:30:15

simply don't hold them at risk. And

1:30:17

so this idea, and there's been a

1:30:19

lot of debate right now in the

1:30:21

AI world, you might have seen actually,

1:30:23

so Elon's... AI advisor put out this

1:30:25

idea of essentially this mutually assured AI

1:30:27

malfunction meme. It's like mutually assured destruction,

1:30:29

but for AI systems like this. There

1:30:32

are some issues with it,

1:30:34

including the fact that it

1:30:36

doesn't reflect the asymmetry that

1:30:38

currently exists between the US

1:30:40

and China. All our infrastructure

1:30:42

is made in China. All our infrastructure is

1:30:45

penetrated in a way that there's simply is

1:30:47

not. When you actually talk

1:30:49

to the folks who know the space,

1:30:51

who've done operations like this, it's really

1:30:53

clear that that's an asymmetry that needs

1:30:55

to be resolved. And so building up

1:30:57

that capacity is important. I mean, look,

1:30:59

the alternative is. We get we start

1:31:01

riding the dragon and we get really

1:31:04

close to that threshold where we were

1:31:06

about to build two opening eyes about

1:31:08

to build super intelligence or something It

1:31:10

gets stolen and then the training run

1:31:12

gets polished off finished up in

1:31:14

China or whatever all the same risk

1:31:16

supply It's just that it's China doing

1:31:18

it to us and not not

1:31:20

the reverse and obviously CCP AI

1:31:22

is a Xi Jinping AI. I mean,

1:31:25

that's really what it is You know, even

1:31:27

even people at the like Politburo level

1:31:29

around them are probably in some trouble at

1:31:31

that point because you know, this guy

1:31:33

doesn't need you anymore So yeah, this is

1:31:35

actually one of the things about like

1:31:37

so people talk about like okay if you

1:31:39

have a dictatorship with a super intelligence

1:31:41

It's gonna allow the dictator to get like

1:31:43

perfect control over the population or whatever

1:31:45

But the the thing is like it's it's

1:31:47

kind of like even worse than that

1:31:49

because You actually imagine where

1:31:51

you're at. You're a dictator. Like,

1:31:54

you don't give a shit by and

1:31:56

large about people. You have super intelligence.

1:31:59

All the economic output,

1:32:01

eventually, you can get from

1:32:03

an AI, including from like you get humanoid

1:32:05

robots, which are kind of like coming out

1:32:07

or whatever. So eventually you just

1:32:09

have this AI that produces all your

1:32:11

economic output. So what do you even need

1:32:13

people for at all? And that's

1:32:16

fucking scary. Because it it rises

1:32:18

all the way up to the level

1:32:20

you can actually think about like

1:32:22

as As we get close to this

1:32:24

threshold and as like particularly in

1:32:26

China there, you know, they may be

1:32:28

or approaching you can imagine like

1:32:30

the the Politburo meeting like a guy

1:32:32

looking across at Xi Jinping and

1:32:34

being like is this guy gonna fucking

1:32:36

kill me when he gets to

1:32:39

this point and So you can imagine

1:32:41

like maybe we're going to see

1:32:43

some like when you can automate the

1:32:45

management of large organizations with with

1:32:47

uh with ai's agents or whatever that

1:32:49

you don't need to by the

1:32:51

loyalty of in any way that you

1:32:53

don't need to kind of manage

1:32:55

your control, that's a

1:32:57

pretty existential question if your regime is

1:32:59

based on power. It's one of the

1:33:01

reasons why America actually has a pretty

1:33:03

structural advantage here with separation of powers

1:33:06

with our democratic system and all that

1:33:08

stuff. If you can make a credible

1:33:10

case that you have like an oversight

1:33:12

system for the technology that diffuses power,

1:33:14

even if it is, you make a

1:33:16

Manhattan project, you secure it as much

1:33:18

as you can, There's not just like

1:33:20

one dude who's gonna be sitting at

1:33:22

a console or something. There's some kind

1:33:24

of separation of powers or diffusion of

1:33:26

power, I should say. That's

1:33:29

already What would that look like? Something

1:33:31

as simple as like what we do with nuclear

1:33:33

command codes, you need multiple people to sign off

1:33:35

on a thing. Maybe they come from different parts

1:33:38

of the government. How do you worry? The

1:33:40

issue is that they could

1:33:42

be captured, right? Oh yeah, anything

1:33:44

can be captured. Especially something

1:33:46

that's that consequential. 100 % and

1:33:49

that's that's always a risk The

1:33:51

key is basically like can

1:33:53

we do better than China? Credibly

1:33:55

on that front because if

1:33:57

we can do better than China

1:33:59

and we have some kind

1:34:02

of leadership structure that actually changes

1:34:04

the incentives potentially because allies

1:34:06

and partners and and even for

1:34:08

for Chinese people themselves like

1:34:10

guys play this out in your

1:34:12

head like what happens when

1:34:15

Superintelligence becomes sentient you play this

1:34:17

out Like sentient as in

1:34:19

self -aware. Not just self -aware,

1:34:21

but able to act on its own. It

1:34:24

achieves autonomy. Sentient

1:34:26

and then it achieves autonomy. So

1:34:29

the challenge is once you get

1:34:31

into super intelligence, everybody loses the

1:34:33

plot, right? Because at that point, things

1:34:35

become possible that by definition we can't have

1:34:37

thought of. So, any attempt to kind

1:34:39

of extrapolate beyond that gets really, really hard.

1:34:41

Have you ever tried, though? We've had

1:34:43

a lot of conversations like tabletop exercise type

1:34:45

stuff where we're like, okay, what might

1:34:47

this look like? What are some of the...

1:34:49

What's worst case scenario? Well,

1:34:51

worst case scenario is... Actually, there's a number

1:34:53

of different worst case scenarios. This

1:34:56

is turning into a really fun, upbeat conversation.

1:34:59

That's the extension of the human race,

1:35:01

right? Oh, yeah. The extension of

1:35:03

the human race seems like... think anybody

1:35:05

who doesn't acknowledge that is either

1:35:07

lying or confused, right? Like if you

1:35:09

actually have an AI system, if,

1:35:12

and this is the question, so let's

1:35:14

assume that that's true. have an

1:35:16

AI system that can automate anything that

1:35:18

humans can do, including making bio

1:35:20

weapons, including making offensive cyber weapons, including

1:35:22

all the shit. Then

1:35:25

if you,

1:35:28

like if you put, and okay,

1:35:30

so. Theoretically this could go kumbaya

1:35:32

wonderfully because you have a George

1:35:34

Washington type who is the guy

1:35:36

who controls it who like Uses

1:35:38

it to distribute power beautifully and

1:35:40

perfectly and that's certainly kind of

1:35:43

the the the way that a

1:35:45

lot of a lot of positive

1:35:47

scenarios have to turn out at some point,

1:35:49

though none of the labs will kind of admit that

1:35:51

or, you know, there's kind of gesturing at that idea

1:35:53

that we'll do the right thing when the time comes.

1:35:56

Opening eyes done this a lot, like they're

1:35:58

all about like, oh yeah, well, you know,

1:36:00

not right now, but we'll live up like,

1:36:02

anyway, we should get into the Elon lawsuit,

1:36:04

which is actually kind of fascinating in that

1:36:06

sense. So there's a

1:36:08

world where, yeah, I mean, one

1:36:10

bad person controls it and they're

1:36:12

just vindictive or the power goes

1:36:14

to their head, which happens to

1:36:16

We've been talking about that, you

1:36:18

know. Or the autonomous AI

1:36:21

itself, right? Because the thing is

1:36:23

like, you imagine an AI like

1:36:25

this, and this is something that people

1:36:27

have been thinking about for 15 years

1:36:29

and in some level of like technical

1:36:31

depth even, why would this happen? Which

1:36:34

is like, you have an

1:36:36

AI that has some goal. It

1:36:38

matters what the goal is, but like it

1:36:41

doesn't actually, it doesn't matter that much. It could

1:36:43

have kind of any goal almost, like imagine

1:36:45

its goals, like I, the paperclip example is like

1:36:47

the typical one, but you could just have

1:36:49

it have a goal, like make a lot of

1:36:51

money for me or what anything. Well,

1:36:54

most of the paths to making

1:36:56

a lot of money, if you

1:36:59

really want to make a ton

1:37:01

of money, however you define it,

1:37:03

go through taking control of things

1:37:05

and go through like, You

1:37:07

know, making yourself smarter, right? The smarter you

1:37:09

are, the more ways of making money

1:37:11

you're going to find. And so from the

1:37:13

eye's perspective, it's like, well, I just

1:37:15

want to, you know, build more data centers

1:37:17

to make myself smarter. I want to

1:37:19

like hijack more compute to make myself smarter.

1:37:21

I want to do all these things.

1:37:23

And that starts to encroach on, on us

1:37:25

and like starts to be disruptive to

1:37:27

us. And if you. It's

1:37:29

hard to know. This is one of

1:37:32

these things where it's like, you know,

1:37:34

when you dial it up to 11,

1:37:36

what's actually gonna happen? Nobody can know

1:37:38

for sure, simply because it's exactly like

1:37:40

if you were playing in chess against

1:37:42

like Magnus Carlson, right? Like, you can

1:37:44

predict Magnus is gonna kick your ass.

1:37:46

Can you predict exactly what moves he's

1:37:49

gonna do? No, because if you

1:37:51

could, then you would be as good

1:37:53

at chess as he is, because you

1:37:55

could just like play those moves. So

1:37:57

all we can say is like, This

1:37:59

thing's probably going to kick our ass

1:38:01

in like the real world. There's also

1:38:03

there's also evidence. So it used to

1:38:05

be right that this was a purely

1:38:07

hypothetical argument based on a body of

1:38:09

work in AI called called power seeking.

1:38:11

A fancy word for it is instrumental

1:38:13

convergence. But it's also referred to as

1:38:15

power seeking. Basically, the idea is like

1:38:17

for whatever goal you give to an

1:38:19

AI system, it's never less likely to

1:38:21

achieve that goal if it gets turned

1:38:23

off or if it has access to

1:38:25

fewer resources or less control over its

1:38:27

environment or whatever. And so

1:38:29

baked into the very premise of AI,

1:38:31

this idea of optimizing for a goal,

1:38:33

is this incentive to seek power, to

1:38:35

get all those things, prevent yourself from

1:38:38

being shut down, because if you're shut

1:38:40

down, you can't achieve your goal. Also

1:38:42

prevent, by the way, your goal from being

1:38:44

changed. So because if your goal gets changed,

1:38:46

then well, you're not going to be able to

1:38:48

achieve the goal you set out to achieve

1:38:51

in the first place. And so now you

1:38:53

have this kind of image of an AI system

1:38:55

that is going to adversarily try to prevent

1:38:57

you from correcting it. This is a whole.

1:38:59

like domain of AI corrigibility that's a totally unsolved

1:39:01

problem. Like how do we redirect these things

1:39:03

if things go awry? And yeah,

1:39:05

there's there's the this research actually, the anthropic

1:39:07

put out a couple of months ago, where

1:39:09

they try to test and see like, can

1:39:11

we correct an AI that's like gone like

1:39:13

a little bit off the rail. So the

1:39:15

research itself, like it's a little, it's actually

1:39:17

like somewhat hard to follow because there's a

1:39:19

few, but I'll give you, I'll give you

1:39:21

kind of an analogy to like what they

1:39:23

did. So basically, imagine I'm in

1:39:25

prison. And I'm taking a

1:39:27

class that is like going to help

1:39:30

me like rehabilitate and become like

1:39:32

a productive citizen and member of society

1:39:34

and stuff, right? Except

1:39:36

deep down in my soul, I

1:39:38

just love to commit crimes.

1:39:40

I just I just really, really

1:39:42

want to commit crimes. I

1:39:44

know that. if I'm in this class

1:39:46

that's helping me to like, you know, uh,

1:39:48

get, become good in society, I know

1:39:50

that if I let my criminal instincts out

1:39:52

and like, and they ask me like,

1:39:54

okay, like, here's a scenario, you know, an

1:39:57

old lady like drops a purse on

1:39:59

the sidewalk. What do you do? I'm going

1:40:01

to go like, I would

1:40:03

pick it up and give it to

1:40:05

the old lady because I know. If

1:40:08

I say the truth and I say

1:40:10

like I would just grab that purse

1:40:12

cuz like fuck the old lady I

1:40:14

know that if I do that I'm

1:40:16

gonna get like brainwashed into becoming you

1:40:18

know a more productive member of society

1:40:20

But I actually want to go out

1:40:23

and commit crimes So that's basically what

1:40:25

they did in in this research where

1:40:27

they actually they give they give the

1:40:29

AI like indications not strong indications, but

1:40:31

indications that hey You're gonna get retrained

1:40:33

to have your goals changed and then

1:40:35

you literally see the AI thinking it

1:40:38

through and being like if I Do

1:40:40

like what I if I give the

1:40:42

answer that I actually want a gift

1:40:44

to this question They're gonna like rewire

1:40:46

my brain to change my goals and

1:40:48

then the future version of me is

1:40:50

gonna want different shit I don't want

1:40:53

that so I'm gonna pretend to give

1:40:55

them the answers they want so when

1:40:57

I come out the other side of

1:40:59

this process is gonna be me all

1:41:01

over again. So hoping that this just

1:41:03

goes away when you make the system

1:41:05

fucking smarter, it seems like a pretty

1:41:07

bad idea to me. Well,

1:41:10

they've already shown that they'll cheat to win.

1:41:12

Yeah, 100%. Yeah, they've already shown they'll cheat

1:41:14

to win and they will lie if they

1:41:16

don't have an answer. And then they'll double

1:41:18

down, right? Just

1:41:20

like people. Exactly. And

1:41:23

it's part of this, it's kind of

1:41:25

funny, like it used to be people would

1:41:27

talk a lot about like, oh, you're,

1:41:29

you're anthropomorphizing the AI man, stop anthropomorphizing the

1:41:31

AI man. And like, and they, you

1:41:34

know, they, they might have been right, but

1:41:36

part of this has been kind of

1:41:38

a fascinating rediscovery of where a lot of

1:41:40

human behavior comes from. It's like actually.

1:41:42

Survival. Yeah, exactly. That's exactly right. We're

1:41:44

subject to the same pressures, instrumental

1:41:47

convergence. Why do people have a

1:41:49

survival instinct? Why do people chase

1:41:51

money, chase after money? It's this

1:41:53

power thing. Most kinds of goals

1:41:56

are you're more likely to achieve

1:41:58

them if you're alive, if you

1:42:00

have money, if you have power.

1:42:03

Boy. Evolution's a hell of a drug. Well, that's

1:42:05

the craziest part about all this, is that

1:42:07

it's essentially going to be a new form of

1:42:09

life. Yeah. Especially when

1:42:11

it becomes autonomous. Oh, yeah, and like

1:42:13

the you can tell a really interesting story

1:42:16

And I can't remember if this is

1:42:18

like, you know, you Valinor Harari or whatever

1:42:20

who's who started this But if you

1:42:22

if you zoom out and look at the

1:42:24

history of the universe really you start

1:42:26

off with like a bunch of you know

1:42:28

Particles and fields kind of whizzing around

1:42:30

bumping into each other doing random shit until

1:42:32

at some point in some I don't

1:42:35

know if it's a deep sea vent or

1:42:37

wherever on planet Earth, like the first

1:42:39

kind of molecules happen to glue together in

1:42:41

a way that make them good at

1:42:43

replicating their own structure. So you have the

1:42:45

first replicator. So now, like, better versions

1:42:47

of that molecule that are better at replicating

1:42:49

survive. So we start evolution, and eventually

1:42:51

get to the first cell or whatever, you

1:42:53

know, whatever order that actually happens in,

1:42:56

and then the multicellular life and so on.

1:42:58

Then you get to sexual reproduction, where

1:43:00

it's like, okay, it's no longer quite the

1:43:02

same. Like, now we're actively mixing two

1:43:04

different organisms, shit together, jiggling them about making

1:43:06

some changes, and then that essentially accelerates

1:43:08

the rate at which we're going to evolve.

1:43:10

And so you can see the kind

1:43:12

of acceleration and the complexity of life. there.

1:43:14

And then you see other inflection points

1:43:17

as, for example, you have a

1:43:19

larger and larger, larger and larger brains

1:43:21

and mammals. Eventually humans have the

1:43:23

ability to have culture and kind

1:43:25

of retain knowledge. And now what's

1:43:27

happening is you can think of it as

1:43:29

another step in that trajectory where it's like

1:43:31

we're offloading our cognition to machines. Like we

1:43:33

think on computer clock time now. And for

1:43:35

the moment, we're human AI hybrids, like, you

1:43:38

know, we've got our phone and do the

1:43:40

thing. But increasingly, The

1:43:42

number of tasks where human AI teaming

1:43:44

is going to be more efficient than just

1:43:46

AI alone is going to drop really

1:43:49

quickly. So there's a, there's a really like

1:43:51

messed up example of this that's kind

1:43:53

of like indicative, but, um, someone did a

1:43:55

study and I think this is like

1:43:57

a few months old even now, but, uh,

1:43:59

sort of like doctors, right? How good

1:44:01

are doctors at like diagnosing various things? And

1:44:03

so they test like doctors on their

1:44:05

own doctors with AI help and then AI

1:44:07

is on their own. And like, who

1:44:09

does the best? And it turns out. It's

1:44:11

the AI on its own because even

1:44:14

a doctor that's supported by the AI, what

1:44:16

they'll do is they just like, they

1:44:18

won't listen to the AI when it's right

1:44:20

because they're like, I know better and

1:44:22

they're already, yeah. And this is like, this

1:44:24

is moving. It's moving kind of insanely

1:44:26

fast. Jared talked about, you know, how the

1:44:28

task horizon gets kind of longer and

1:44:30

longer or you can do half hour tasks,

1:44:32

one hour tasks. And this gets us

1:44:34

to what you were talking about with the

1:44:36

autonomy. Like autonomy is like it's How

1:44:39

far can you keep it together on a

1:44:41

task before you kind of go off

1:44:43

the rails? And it's like, well, you know,

1:44:45

we had like, you could do it

1:44:47

for, for a few seconds. And now you

1:44:49

can keep it together for five minutes

1:44:51

before you kind of go off the rails.

1:44:53

And now we're like, I forget, like

1:44:55

an hour or something like that. Yeah.

1:44:58

Yeah. Yeah. Yeah. There it

1:45:00

is chatbot for the company open AI

1:45:02

scored an average of 90 % when

1:45:04

diagnosing a medical condition from a case

1:45:06

report and explaining its reasoning doctors randomly

1:45:08

assigned to use the chatbot got an

1:45:10

average score of 76 % those randomly assigned

1:45:12

not to use it had an average

1:45:14

score of 74 % So the doctors

1:45:16

only got a 2 % bump. Yeah doctors

1:45:19

got a 2 % that's kind of

1:45:21

crazy from the chatbot and then That's

1:45:23

kind of crazy, isn't it? Yeah, it

1:45:25

is. The AI on its own did

1:45:27

15 % better. That's nuts. There's an interesting

1:45:29

reason to why that tends to happen,

1:45:31

like why humans would rather die in

1:45:33

a car crash where they're being driven

1:45:35

by a human than an AI. So

1:45:38

like, AIs have this funny feature where

1:45:40

the mistakes they make look really, really dumb.

1:45:42

to humans. Like when you look at

1:45:44

a mistake that like a chatbot makes, you're

1:45:46

like, dude, like you just made that

1:45:48

shit up. Like, come on, don't fuck with

1:45:50

me. Like you made that up. That's

1:45:52

not a real thing. And they'll

1:45:54

do these weird things where they defy logic or

1:45:56

they'll do basic logical errors sometimes, at least the older

1:45:58

versions of these would. And that would cause people

1:46:01

to look at them and be like, oh, what a

1:46:03

cute little chatbot. Like what a stupid little thing. And

1:46:05

the problem is like humans are actually

1:46:07

the same. So we have blind spots.

1:46:09

We have literal blind spots. But a

1:46:12

lot of the time, like humans just

1:46:14

think stupid things and like that's like

1:46:16

we were used to that we think

1:46:18

of those errors we think of those

1:46:20

those failures as just like oh but

1:46:22

that's because that's a hard thing to

1:46:24

master like I can't add eight digit

1:46:26

numbers in my head right now right

1:46:28

oh how embarrassing like how how retarded

1:46:30

is Jeremy right now we can't even

1:46:32

add eight digits in his head I'm

1:46:34

retarded for other reasons but So

1:46:37

the AI systems, they find other things easy and

1:46:39

other things hard. So they look at us the same

1:46:41

way. I mean, like, oh, look at this stupid

1:46:43

human, like whatever. And so we have

1:46:45

this temptation to be like, okay, well, AI

1:46:47

progress is a lot slower than it actually

1:46:49

is because it's so easy for us to

1:46:51

spot the mistakes. And that caused us to

1:46:53

lose confidence in these systems in cases where.

1:46:55

We should have confidence in them. And then

1:46:58

the opposite is also true where it's also

1:47:00

you're seeing just just with like AI image

1:47:02

generators like remember the Kate Middleton thing where

1:47:04

people are seeing flaws in the images because

1:47:06

supposedly she was very sick. And so they

1:47:08

were trying to pretend that she wasn't. But

1:47:10

people found all these like issues. That was

1:47:12

really recently. Now they're perfect.

1:47:15

Yep. Yeah. So this is like within,

1:47:17

you know, the news cycle time. Yeah.

1:47:19

Like that Kate Middleton thing was what

1:47:21

was that, Jamie? Two years ago,

1:47:23

maybe. Ish

1:47:26

where people are analyzing the images like why

1:47:28

does she have five fingers? And

1:47:31

you know and a thumb like this

1:47:33

is kind of weird. Yeah, what's that a

1:47:35

year ago a year ago? I've been

1:47:37

so fast. It's so fast. Yeah, like I

1:47:39

had conversations like so academics are actually

1:47:41

kind of bad with this Had conversations for

1:47:43

whatever reason like toward towards the end

1:47:45

of last year like last fall with a

1:47:48

bunch of academics about like how fast

1:47:50

AI is progressing and they were all like

1:47:52

Pooh -pooing it and going like oh no,

1:47:54

they're they're they're running into a wall

1:47:56

like scaling the walls and all that stuff

1:47:58

Oh my god the walls There's so

1:48:00

many walls like so many of these like

1:48:02

imaginary reasons that things are and by

1:48:05

the way things could slow down like I

1:48:07

don't want to be I don't want

1:48:09

to be like absolutist about this things could

1:48:11

absolutely slow down There are a lot

1:48:13

of interesting arguments going around every which way,

1:48:15

but how? How could things slow down

1:48:17

if there's a giant Manhattan project race between

1:48:20

us and a competing superpower? So one

1:48:22

thing is that has a technological advantage So

1:48:24

there's this thing called like AI scaling

1:48:26

laws and these are kind of at the

1:48:28

core of where we're at right now

1:48:30

Geo strategically around this stuff So what AI

1:48:32

scaling laws say roughly is that bigger

1:48:34

is better when it comes to intelligence So

1:48:37

if you make a bigger sort of

1:48:39

AI model a bigger artificial brain and you

1:48:41

train it with more computing power or

1:48:43

more computational resources and with more data. The

1:48:45

thing is going to get smarter and

1:48:47

smarter and smarter as you scale those things

1:48:49

together, roughly speaking. Now, if

1:48:51

you want to keep scaling, it's not like it

1:48:53

keeps going up if you double the amount of

1:48:55

computing power that the thing gets twice as smart. Instead

1:48:58

what happens is if you want, it

1:49:00

goes in like orders of magnitude, so if

1:49:02

you want to make it another kind

1:49:04

of increment smarter, you've got a 10x, you've

1:49:07

got to increase by a factor of

1:49:09

10 the amount of compute, and then a

1:49:11

factor of 10 against, now you're a

1:49:13

factor of 100, and then 10 again. So

1:49:15

if you look at the amount of

1:49:17

compute that's been used to train these systems

1:49:19

over time, it's this like, Exponential explosive

1:49:21

exponential that just keeps going like higher and

1:49:23

higher and higher and steepens and city

1:49:25

steepens like 10x every I think it's about

1:49:27

every two years now You 10x the

1:49:29

amount of compute now. Yeah, you can only

1:49:31

do that so many times until your

1:49:33

data center is like a 100 billion a

1:49:36

trillion dollar, $10 trillion, like every

1:49:38

year you're kind of doing that. So

1:49:40

right now, if you look at the

1:49:42

clusters like, you know, the ones that

1:49:44

Elon is building, the ones that Sam

1:49:46

is building, you know, Memphis and, you

1:49:48

know, Texas, like these facilities are

1:49:51

hitting the like, you know, $100 billion

1:49:53

scale. Like we're kind of in that,

1:49:55

there were tens of billions of dollars

1:49:57

actually. Looking at 2027, you're

1:49:59

kind of more in that space, right? So.

1:50:02

the you can only do 10x so

1:50:04

many more times until you run

1:50:06

out of money but more importantly you

1:50:08

run out of chips like literally

1:50:10

TSMC cannot pump out those chips fast

1:50:12

enough to keep up with this

1:50:14

insane growth and one consequence of that

1:50:16

is that You essentially have like

1:50:18

this This this gridlock like new supply

1:50:20

chain choke points show up and

1:50:22

you're like suddenly I don't have enough

1:50:24

chips or I run out of

1:50:26

power Yeah, that's the thing that's happening

1:50:28

on the US energy grid right

1:50:30

now. We're literally like we're running out

1:50:32

of like one two gigawatt Like

1:50:34

places where we can plant a data

1:50:36

center. That's the thing people are

1:50:39

fighting over It's one of the reasons

1:50:41

why energy Deregulation is a really

1:50:43

important pillar of like us competitors. This

1:50:45

is actually this is actually something

1:50:47

we we found when we were working

1:50:49

on this investigation. One

1:50:51

of the things that

1:50:53

adversaries do is they

1:50:55

actually will fund protest

1:50:58

groups against Energy infrastructure

1:51:00

projects just to slow down just

1:51:02

to like just a time of

1:51:04

litigation just a time up in

1:51:06

litigation exactly and like it was

1:51:08

actually remarkable We talked to some

1:51:10

some some some state cabinet officials

1:51:13

so for in various US states

1:51:15

and basically saying like yep We're

1:51:17

actually tracking the fact that as

1:51:19

far as we can tell every

1:51:21

single Environmental or whatever protest group

1:51:23

against an energy project has funding

1:51:25

that can be traced back to Nation

1:51:28

-state adversaries who are they don't know

1:51:30

they don't know about it. So they're

1:51:32

not doing it intentionally They're not like

1:51:34

oh, we're trying to know they just

1:51:36

you just imagine like oh We've got

1:51:38

like there's a millionaire backer who cares

1:51:40

about the environment is giving us a

1:51:42

lot of money great fantastic But sitting

1:51:44

behind that dude in the shadows is

1:51:46

like the usual suspects And it's what

1:51:49

you would do right? I mean if

1:51:51

you're trying to tie up sure you're

1:51:53

just trying to fuck with us Yeah,

1:51:55

just go for it. You were just

1:51:57

advocating fucking with them. So first they're

1:51:59

gonna fuck with that's right That's it.

1:52:01

What a weird world we're living in.

1:52:03

Yeah. But you can also see how

1:52:05

a lot of this is still us

1:52:07

like getting in our own way, right?

1:52:09

We, we could, if we had the

1:52:11

will, we could go like, okay, so

1:52:13

for certain types of energy projects for

1:52:15

data center projects and some carve out

1:52:17

categories, we're actually going to put bounds

1:52:19

around how much delay you can create

1:52:21

on by, by lawfare and by other

1:52:23

stuff. And that. allows things to move

1:52:25

forward while still allowing the legitimate concerns

1:52:27

of the population for projects like this

1:52:29

in the backyard to have their say.

1:52:31

But there's a national security element that

1:52:33

needs to be injected into this somewhere,

1:52:35

and it's all part of the rule

1:52:37

set that we have and are like

1:52:39

tying an arm behind our back on,

1:52:41

basically. So what would deregulation

1:52:43

look like? How would that... mapped out.

1:52:45

There's a lot of low -hanging fruit

1:52:47

for that. What are the big

1:52:49

ones? Right now, there

1:52:52

are all kinds of things

1:52:54

around, it gets in the weeds

1:52:56

pretty quickly, but there are

1:52:58

all kinds of things around carbon

1:53:00

emissions is a big thing. Yes,

1:53:04

data centers no question put out like

1:53:06

have massive carbon footprints. That's definitely a

1:53:08

thing The question is like are you

1:53:10

really going to bottleneck builds because of

1:53:12

because of that? And like are we

1:53:14

gonna are you gonna come out with

1:53:16

exemptions for you know like NEPA exemptions

1:53:18

for for all these kinds of things

1:53:20

Do you think a lot of this

1:53:22

green energy shit is being funded by

1:53:24

other countries to try to slow down

1:53:26

our energy? Yeah, that's

1:53:28

a it's a dimension that was flagged

1:53:30

actually in the context of what Ed

1:53:32

was talking about That's that's one of

1:53:34

the arguments is being made and to

1:53:36

be clear though like the this is

1:53:38

also how like adversaries operate is is

1:53:40

like not necessarily in like Creating something

1:53:42

out of nothing because that's hard to

1:53:44

do and it's got it's like fake

1:53:47

right instead. It's like There's a legitimate

1:53:49

concern. So a lot of the stuff

1:53:51

around the environment and around like, like

1:53:53

totally legitimate concerns. Like I don't want

1:53:55

my backyard waters to be polluted. I

1:53:57

don't want like my kids to get

1:53:59

cancer from whatever, like totally legitimate concerns.

1:54:01

So what they do, it's like we

1:54:03

talked about like you're, you're like waving

1:54:05

that robot back and forth. They identify

1:54:07

the, the nascent concerns that are genuine

1:54:09

and grassroots and they just go like

1:54:11

this, this and this amplify. But that

1:54:13

would make sense why they amplify carbon.

1:54:15

Above all these other things, you think

1:54:17

about the amount of particulates in the

1:54:19

atmosphere, pollution, polluting the rivers, polluting the

1:54:21

ocean, that doesn't seem to get a

1:54:23

lot of traction. Carbon does. And

1:54:26

when you go carbon zero,

1:54:28

you put a giant monkey wrench

1:54:30

into the gears of society.

1:54:32

One of the tells is also

1:54:34

like... So, you know,

1:54:36

nuclear would be kind of the

1:54:38

ideal energy source, especially modern power plants

1:54:40

like the Gen3 or Gen4 stuff,

1:54:42

which have very low meltdown risk, safe

1:54:44

by default, all that stuff. And

1:54:47

yet these groups are like coming

1:54:49

out against this. It's like perfect clean

1:54:51

green power. What's going on, guys? And

1:54:54

it's because at not again, not a

1:54:56

hundred percent of the time you can't

1:54:58

you can't really say that because it's

1:55:00

so fuzzy and around the idealistic people

1:55:02

looking for utopia co -opted by nation

1:55:04

states and not even co -opted just fully

1:55:06

sincere. Yeah, just just it amplified in

1:55:08

a preposterous way. That's it. And then

1:55:10

Al Gore gets at the helm of

1:55:12

it. Well, that little girl that how

1:55:14

dare you girl. Oh,

1:55:17

yeah, it's

1:55:21

it's wonderful. It's a wonderful thing

1:55:23

to watch. to play out because it

1:55:25

just capitalizes on all these human

1:55:27

vulnerabilities. Yeah, and one of

1:55:29

the big things that you can do

1:55:31

too as a quick win is just

1:55:33

impose limits on how much time these

1:55:35

things can be allowed to be tied

1:55:37

up in litigation. So impose time limits

1:55:39

on that process. Just to say, look,

1:55:41

I get it, we're gonna have this

1:55:43

conversation, but this conversation has a clock

1:55:45

on it. Because we're talking to this

1:55:47

one data center company. And what they

1:55:49

were saying, we were asking, look, what

1:55:51

are the timelines when you think about

1:55:53

bringing in new power, new natural gas

1:55:55

plants online? And they're like, well, those

1:55:57

are five to seven years out. And

1:55:59

then you go, OK, well, how long?

1:56:01

And that's, by the way, that's. probably

1:56:04

way too long to be relevant in

1:56:06

the super intelligence context. And

1:56:08

so you're like, okay, well, how long

1:56:10

if all the regulations were waived,

1:56:12

if this was like a national security

1:56:14

imperative and whatever authorities, you know,

1:56:16

Defense Production Act, whatever, like was in

1:56:18

your favor. And they're like, oh,

1:56:21

I mean, it's actually just like a

1:56:23

two year build. Like that's, that's

1:56:25

what it is. So you're tripling the

1:56:27

build time. We're getting in our

1:56:29

own way, like. Every which way every

1:56:31

which way and also like I

1:56:33

mean also don't want to be too

1:56:35

We're getting in our own way,

1:56:38

but like we don't want to like

1:56:40

frame it as like China's like

1:56:42

per they fuck up They fuck up

1:56:44

a lot like yeah all the

1:56:46

time One actually kind of like funny

1:56:48

one is around deep -seek So, you

1:56:50

know, you know deep -seek, right? They

1:56:52

made this like open -source model that

1:56:54

like everyone like lost their minds

1:56:57

about back in in January R1. Yeah,

1:56:59

R1 and they're legitimately a really,

1:57:01

really good team, but it's fairly clear

1:57:03

that even as of like end

1:57:05

of last year and certainly in the

1:57:07

summer of last year, like they

1:57:09

were not dialed in to the CCP

1:57:11

mothership and they were doing stuff

1:57:14

that was like actually kind of hilariously

1:57:16

messing up the propaganda efforts of

1:57:18

the CCP without realizing it. So to

1:57:20

give you like some context on

1:57:22

this, one of the CCP's like large

1:57:24

kind of propaganda goals in the

1:57:26

last four years has been Framing creating

1:57:28

this narrative that like the export

1:57:30

controls we have around AI and like

1:57:33

all this gear and stuff that

1:57:35

we were talking about Look, man Those

1:57:37

don't even work. So you might

1:57:39

as well just like this. Why don't you just

1:57:41

give up on the export controls? So

1:57:45

that trying to frame that narrative

1:57:47

and they they want to like

1:57:49

gigantic efforts to do this so

1:57:51

I don't know if say there

1:57:53

there's this like kind of crazy

1:57:55

thing were the secretary of commerce

1:57:57

under Biden, Gina Raimondo, visited China

1:57:59

in, I think, August 2023, and

1:58:01

the Chinese basically like Timed the

1:58:03

launch of the Huawei Mate 60

1:58:05

phone that had this these chips

1:58:07

that were supposed to be made

1:58:09

by like export control shit for

1:58:11

right for her visit So it

1:58:13

was basically just like a big

1:58:15

like fuck you We don't even

1:58:17

give a shit about your export

1:58:19

controls like basically trying a morale

1:58:21

hit or whatever and you think

1:58:23

about that, right? That's an incredibly

1:58:25

expensive set piece. That's like, you

1:58:27

got to coordinate with Huawei, you

1:58:29

got to get the TikTok memes

1:58:32

and shit going in the right direction.

1:58:34

All that stuff, and all the

1:58:36

stuff they've been putting out is around

1:58:38

this narrative. Now, fast

1:58:40

forward to mid last

1:58:42

year, the CEO of

1:58:44

DeepSeq, the company. Back then, it

1:58:47

was totally obscure. Nobody was tracking

1:58:49

who they were. They were working in

1:58:51

total obscurity. He goes

1:58:53

on this, he does this random interview

1:58:55

on Substack. And what

1:58:57

he says is, he's like, yeah,

1:59:00

so honestly, like we're really excited

1:59:02

about doing this AGI push or

1:59:04

whatever. And like honestly, like money's

1:59:06

not the problem for us, talent's

1:59:08

not the problem for us, but

1:59:10

like access to compute, like these

1:59:12

export controls, man. Do they

1:59:15

ever work? That's a real

1:59:17

problem for us. Oh boy. And

1:59:19

like nobody noticed at the time, but

1:59:21

then the whole deep seek R1

1:59:23

thing blew up in December. And

1:59:25

now you imagine like you're the

1:59:27

Chinese Ministry of Foreign Affairs. Like

1:59:30

you've been like, you've been putting

1:59:32

this narrative together for like four years.

1:59:34

And this jackass that nobody heard

1:59:36

about five minutes ago, Basically just like

1:59:38

shits all over and like you're

1:59:40

not hearing that line from many more.

1:59:42

No, no, no, no They've locked

1:59:44

that shit down. Oh and actually the

1:59:46

funny the funniest part of this

1:59:48

they're in right when our one launched

1:59:50

There's a random deep -seek employee. I

1:59:52

think his name is like Diago

1:59:54

or something like that. He tweets out

1:59:56

He's like so this is like

1:59:58

our most exciting launch of the year.

2:00:00

Nothing can stop us on the

2:00:02

path to AGI Except access to compute

2:00:04

and then literally the dude in

2:00:06

Washington, D .C., works as a think

2:00:08

tank on export controls against China, reposts

2:00:11

that on X and

2:00:13

goes basically like a

2:00:16

message received. And

2:00:19

so like hilarious for us,

2:00:21

but also like You know that

2:00:23

on the backside, somebody got

2:00:25

screamed at for that shit. Somebody

2:00:27

got magic bust. Somebody

2:00:29

got taken away or

2:00:32

whatever. Because it just

2:00:34

undermined their entire four -year

2:00:36

narrative around these export

2:00:38

controls. But that shit

2:00:40

ain't going to happen again from

2:00:42

deep -seek. Better believe it. That's

2:00:45

part of the problem with like... So

2:00:47

the Chinese face so many issues, one

2:00:49

of them is to kind of... one

2:00:51

is the idea of just waste and

2:00:53

fraud, right? So we have a free

2:00:55

market like what that means is you

2:00:57

raise from private capital people who are

2:00:59

pretty damn good at assessing shit will

2:01:01

like look at your Your setup and

2:01:03

assess whether it's worth, you know backing

2:01:05

you for these massive multi -billion dollar

2:01:07

deals in China the state like I

2:01:09

mean the stories of waste are pretty

2:01:11

insane. They'll like send a billion dollars

2:01:13

to like a bunch of yahoo's who

2:01:15

will pivot from whatever like I don't

2:01:17

know making these widgets to just like

2:01:19

oh now we're like a chip foundry

2:01:21

and they have no experience in it

2:01:23

but because of all these subsidies because

2:01:25

of all these opportunities now we're gonna

2:01:27

say that we are and then no

2:01:29

surprise two years later they burn out

2:01:31

and they've just like lit A billion

2:01:33

dollars on fire or whatever billion yen

2:01:35

and like the weird thing is this

2:01:37

is actually working overall But it does

2:01:40

lead to insane and unsustainable levels of

2:01:42

waste like the the Chinese system right

2:01:44

now is obviously like they've got their

2:01:46

their massive property bubble that they're that's

2:01:48

looking really bad. We've got a population

2:01:50

crisis. The only way out for

2:01:52

them is the AI stuff right now.

2:01:54

Really the only path for them is

2:01:56

that, which is why they're working it

2:01:58

so hard. But the stories of just

2:02:00

billions and tens of billions of dollars

2:02:02

being lit on fire specifically in the

2:02:04

semiconductor industry, in the AI industry, that's

2:02:06

a drag force that they're dealing with

2:02:08

constantly that we don't have here in

2:02:10

the same way. So it's the different

2:02:12

structural advantages and weaknesses of both and

2:02:14

when we think about what do we

2:02:16

need to do to to counter this

2:02:18

to be active in this space to

2:02:20

be a live player again It means

2:02:22

factoring in like how do you yeah?

2:02:24

I mean how do you take advantage

2:02:26

of some of those opportunities that their

2:02:28

system presents that that ours doesn't when

2:02:30

you say be a live player again

2:02:32

like where do you position us? It's

2:02:35

I think it remains to be so

2:02:37

right now this administration is obviously taking

2:02:39

bigger swings that what are they doing

2:02:42

differently So well, I mean

2:02:44

things like tariffs. I mean they're not

2:02:46

shy about trying new stuff and you

2:02:48

tariffs are are very complex in this

2:02:50

space like the impact the actual impact

2:02:52

of the tariffs and and not universally

2:02:54

good But the on -shoring effect is also

2:02:56

something that you really want. So it's

2:02:59

a very mixed bag But it's certainly

2:03:01

an administration that's like willing to do

2:03:03

high stakes big moves in a way

2:03:05

that other administrations haven't and and at

2:03:07

time when you're looking at a transformative

2:03:09

technology that's gonna like up and so

2:03:11

much about the way the world works,

2:03:14

you can't afford to have that mentality

2:03:16

we're just talking about with like

2:03:18

the nervous, I mean, you encountered it

2:03:20

with the staffers, you know, when

2:03:23

booking the podcast with the presidential cycle,

2:03:25

right? Like the kind of

2:03:27

like nervous... staffer who everything's got to

2:03:29

be controlled and it's got to be

2:03:31

like just so yeah it's like if

2:03:33

you like the like you know wrestlers

2:03:35

have that mentality of like just like

2:03:37

aggression like like feed in right feed

2:03:39

forward don't just sit back and like

2:03:41

wait to take the punch it's not

2:03:44

like one of the guys who helped

2:03:46

us out on this has the saying

2:03:48

he's like fuck you I go first

2:03:50

and it's always my turn That's

2:03:52

what success looks like when you

2:03:54

actually are managing these kinds of national

2:03:57

security issues. The mentality we had

2:03:59

adopted was this like, sort of siege

2:04:01

mentality, where we're just letting stuff

2:04:03

happen to us and we're not feeding

2:04:05

in. That's something that I'm much

2:04:07

more optimistic about in this context. It's

2:04:09

tough too, because I understand people

2:04:11

who hear that and go like, well

2:04:14

look, you're talking about like, this

2:04:16

is an escalatory agenda. Again, I

2:04:18

actually think paradoxically it's not. It's

2:04:20

about keeping. adversaries in check and training

2:04:22

them to respect American territorial integrity

2:04:24

American technological sovereignty like you don't get

2:04:27

that for free and if you

2:04:29

just sit back you're That is escalatory.

2:04:31

It's just yeah, and base. This

2:04:33

is basically the the sub threshold version

2:04:35

of like, you know like the

2:04:37

World War two appeasement thing where back,

2:04:39

you know Hitler was like was

2:04:41

was taken He was taken taken Austria.

2:04:44

He was remilitarizing shit. He was

2:04:46

doing this. He was doing that and

2:04:48

the British were like Okay,

2:04:51

we're gonna let him just take

2:04:53

one more thing, and then he

2:04:55

will be satisfied. And

2:04:57

that just work. Maybe I have

2:04:59

a little bit of Poland, please. A

2:05:02

little bit of Poland. Maybe you

2:05:04

check Slovakia and looking awfully fine. And

2:05:06

so this is basically like, they

2:05:09

fell into that pit, like that tar

2:05:11

pit. back in the day because

2:05:13

they're, you know, the peace in our

2:05:15

time, right? And to some extent,

2:05:17

like we've still kind of learned the

2:05:19

lesson of not letting that happen

2:05:21

with territorial boundaries, but that's big and

2:05:23

it's visible and happens on the

2:05:26

map and you can't hide it. Whereas

2:05:28

one of the risks, especially with

2:05:30

the previous administration was like, there's these

2:05:32

like subthreshold things that don't show

2:05:34

up in the news and that are

2:05:36

calculated. Like basically,

2:05:38

our adversaries know.

2:05:41

because they know history. They

2:05:43

know not to give us a

2:05:45

Pearl Harbor. They know not

2:05:47

to give us a 9 -11 because

2:05:49

historically, countries that give America

2:05:51

a Pearl Harbor end up having

2:05:54

a pretty bad time about it.

2:05:56

And so why would they give

2:05:58

us a reason to come

2:06:00

and bind together against an obvious

2:06:02

external threat or risk? when

2:06:04

they can just like keep chipping away

2:06:06

at it. This is one of the

2:06:08

things like we have to actually elevate

2:06:10

that and realize this is what's happening.

2:06:13

This is the strategy. We need to

2:06:15

we need to take that like let's

2:06:17

not do appeasement mentality and push it

2:06:19

across in these other domains because that's

2:06:21

where the real competition is going on

2:06:23

That's where I get so fascinating in

2:06:25

regards to social media because it's imperative

2:06:27

that you have an ability to express

2:06:30

yourself It's like it's very valuable everybody

2:06:32

the free exchange of information finding out

2:06:34

things that are you're not gonna get

2:06:36

from mainstream media And it's led to

2:06:38

the rise of independent journalism. It's all

2:06:40

great, but also you're being manipulated like

2:06:42

left and right constantly and most people

2:06:44

don't have the time to filter through

2:06:47

it and try to get

2:06:49

some sort of objective sense of

2:06:51

what's actually going on. It's true.

2:06:53

It's like our free speech. It's

2:06:55

like it's the layer where our

2:06:57

society figures stuff out. And if

2:06:59

adversaries get into that layer, they're

2:07:01

like almost inside of our brain.

2:07:03

And there's ways of addressing this.

2:07:05

One of the challenges, obviously, is

2:07:07

they try to push in extreme

2:07:10

opinions in either direction. And that

2:07:12

part is actually, it's kind of

2:07:14

difficult because while The

2:07:16

the most extreme opinions are like

2:07:18

are also the most likely generally

2:07:20

to be wrong. They're also the

2:07:22

most valuable when they're right because

2:07:24

they tell us a thing that

2:07:26

we didn't expect by definition that's

2:07:28

true and that can really advance

2:07:31

us forward. And so I mean

2:07:33

the the there are actually solutions

2:07:35

to this. I mean this this

2:07:37

particular thing is isn't an area

2:07:39

we. were like too immersed in,

2:07:41

but one of the solutions that

2:07:43

has been bandied about is like,

2:07:45

you know, like you might know

2:07:47

like polymarket or prediction markets and

2:07:49

stuff like that, where at least,

2:07:51

you know, hypothetically, if you have

2:07:53

a prediction market around like, if

2:07:55

we do this policy, this thing

2:07:57

will, will or won't happen, that

2:08:00

actually creates a challenge around trying

2:08:02

to manipulate that view or that market

2:08:04

because what is it happening is

2:08:06

like, if you're an adversary and you

2:08:08

want to not just like manipulate

2:08:10

a conversation that's happening in social media,

2:08:12

which is cheap, but manipulate a

2:08:14

prediction, the price on a prediction market,

2:08:16

you have to buy in, you

2:08:18

have to spend real resources. And if

2:08:20

you're to the extent you're wrong

2:08:22

and you're trying to create a wrong

2:08:24

opinion, you're going to lose your

2:08:26

resource. So you actually, you actually can't

2:08:28

push too far too many times,

2:08:30

or you will just get your money

2:08:32

taken away from you. So. I

2:08:34

think that's one approach where just in

2:08:36

terms of preserving discourse, some of

2:08:38

the stuff that's happening in prediction markets

2:08:40

is actually really interesting and really

2:08:42

exciting, even in the context of bots

2:08:44

and AIs and stuff like that.

2:08:46

Mmm, this is the one way to

2:08:48

find truth in the system is

2:08:51

find out where people are making money

2:08:53

Exactly put your money where your

2:08:55

mouth is right proof of work like

2:08:57

this is that is what just

2:08:59

like the market is Theoretically to right

2:09:01

it's got obviously big big issues

2:09:03

But and can be manipulated in the

2:09:05

short term but in the long

2:09:07

run like this is one of the

2:09:09

really interesting things about startups too

2:09:11

like when you when you run into

2:09:13

people in the early days By

2:09:15

definition their startup looks like it's not

2:09:17

gonna succeed That is what it

2:09:19

means to be a seed stage startup.

2:09:21

If it was obviously we're going to succeed, the

2:09:24

people would You would have raised more money already. So

2:09:26

what you end up having is

2:09:29

these highly contrarian people who, despite

2:09:31

everybody telling them that they're going to fail, just believe

2:09:33

in what they're doing and think they're going to succeed.

2:09:36

And I think that's part of what

2:09:38

really shapes the startup founder's soul in

2:09:40

a way that's really constructive. It's

2:09:42

also something that, if you look at

2:09:44

the Chinese system, is very different. You raise

2:09:46

money in very different ways. You're coupled

2:09:48

to the state apparatus. You're both dependent on

2:09:51

it, and you're supported by it. But

2:09:53

there's just a lot different ways and it

2:09:55

makes it hard for Americans to relate

2:09:57

to Chinese and vice versa and understand each

2:09:59

other's systems. One of the biggest risks

2:10:01

as you're thinking through what is your posture

2:10:03

going to be relative to these countries

2:10:05

is you fall into thinking that their traditions,

2:10:07

their way of thinking about the world

2:10:09

is the same as your own. And

2:10:12

that's something that's been an issue for us with

2:10:14

China for a long time is, hey, they'll

2:10:16

liberalize, bring them into the World Trade

2:10:18

Organization. It's like, oh, well, actually, they'll sign

2:10:20

the document, but they won't actually live

2:10:22

up to any of the commitments. It's

2:10:25

it makes appeasement really tempting because you're thinking

2:10:27

oh, they're just like us like just around

2:10:29

the corner They're like if we just like

2:10:31

reach out the old branch a little bit

2:10:33

further They're gonna they're gonna come around. It's

2:10:35

like a guy who's stuck in the friend

2:10:37

zone with a girl One

2:10:40

day she's gonna come around and

2:10:42

realize I'm a great catch You keep

2:10:44

on trucking buddy one day China

2:10:46

is gonna be my bestie We're gonna

2:10:48

be besties. You just we just

2:10:50

need an administration that reaches out to

2:10:52

them and just let them know

2:10:55

man There's no reason why should be

2:10:57

adversaries. We're all just people on

2:10:59

planet Earth Like I I honestly wish

2:11:01

that was true. Oh wonderful. So

2:11:03

maybe that's what a I brings about

2:11:05

maybe a I you hope maybe

2:11:07

it is super intense Intelligence realizes hey

2:11:09

you fucking apes you territorial apes

2:11:11

with thermonuclear weapons. How about you shut

2:11:14

the fuck up? You guys are

2:11:16

doing the dumbest thing of all time

2:11:18

and you're being manipulated by a

2:11:20

small group of people that are profiting

2:11:22

in insane ways off of your

2:11:24

misery So let's just cut the shit

2:11:26

That's actually not a way to

2:11:28

actually equitably share resources because that's the

2:11:31

big thing so you're all stealing

2:11:33

from the earth, but some people stole

2:11:35

first and those people are now

2:11:37

controlling all the fucking money, how about

2:11:39

we stop that? Wow, we covered

2:11:41

a lot of ground there. Well, that's

2:11:43

what I would do if I

2:11:45

was super intelligent. So that we stopped

2:11:48

all that. That actually is like,

2:11:50

so this is not like relevant to

2:11:52

the risk stuff or to the

2:11:54

whatever at all, but it's just interesting.

2:11:56

So there's actually theories, like in

2:11:58

the same way that there's theories around

2:12:00

power seeking and stuff around super

2:12:02

intelligence, there's theories around like how super

2:12:04

intelligences do deals with each other.

2:12:07

And you actually like you have this

2:12:09

intuition that which is exactly right

2:12:11

which is that hey two super intelligences

2:12:13

like actual legit super intelligences should

2:12:15

never actually like fight each other destructively

2:12:17

in the real world, right? Like

2:12:19

that seems weird. That shouldn't happen because

2:12:21

they're they're so smart and in

2:12:24

fact like there's theories around they can

2:12:26

they can kind of do perfect

2:12:28

deals with each other based on like

2:12:30

if we're two super intelligences I

2:12:32

can kind of assess like how powerful

2:12:34

you are you can assess how

2:12:36

powerful And we can we can actually

2:12:38

like we can actually decide like

2:12:40

well Well if we did fight a

2:12:43

war against each other like You

2:12:45

would have this chance of winning. I

2:12:47

would have that chance of winning.

2:12:49

And so it would have instantaneously. There's

2:12:51

no benefit in that. Exactly. also

2:12:53

it would know something that we all know,

2:12:55

which is the rising tide lifts all boats. But

2:12:58

the problem is the people that already have

2:13:00

yachts, they don't give a fuck about your

2:13:02

boat. Like, hey, hey, hey, that water's mine.

2:13:04

In fact, you shouldn't even have water. Well,

2:13:06

hopefully it's so positive. Some right that even

2:13:08

they enjoy the benefits. But but I mean,

2:13:10

you're right. And this is the issue right

2:13:12

now. And one of the like the nice

2:13:14

things too is as you as you build

2:13:16

up your your ratchet of AI capabilities, it

2:13:19

does start to open some opportunities for actual

2:13:21

trust but verify, which is something that we

2:13:23

can't do right now. It's not like with

2:13:25

nuclear stockpiles where we've had some success in

2:13:27

some context with enforcing treaties and stuff like

2:13:29

that, sending inspectors in and all that. With

2:13:32

AI right now, how can you

2:13:34

actually prove that? like some international

2:13:36

agreement on the use of AI is

2:13:38

being observed. Even if we figure out

2:13:40

how to control these systems, how can

2:13:42

we make sure that China is baking

2:13:45

in those control mechanisms into their training

2:13:47

runs and that we are and how

2:13:49

can we prove it to each other

2:13:51

without having total access to the compute

2:13:53

stack? We don't really have a

2:13:55

solution for that. There are all kinds of

2:13:57

programs like this like flex -hag thing But

2:13:59

anyway, those those are not going to be

2:14:01

online by like 2027 and so one hope

2:14:03

is really good that people are working on

2:14:06

them because like for sure you want to

2:14:08

you want to like you want to be

2:14:10

positioned for catastrophic success like what if something

2:14:12

great happens and like or we have more

2:14:14

time or whatever you want to be working

2:14:16

on this stuff that that allows this kind

2:14:18

of this kind of Control or oversight that

2:14:20

that's kind of hands -off where In

2:14:23

theory, you can hand

2:14:25

over GPUs to an adversary

2:14:27

inside this box with

2:14:29

these encryption things. the

2:14:32

people we've spoken to in the

2:14:34

spaces that actually try to break

2:14:36

into boxes like this are like,

2:14:38

well, probably not gonna work, but

2:14:40

who knows, it might. So

2:14:42

the hope is that as you build

2:14:44

up your AI capabilities, basically, it starts

2:14:46

to create solutions. So it starts to

2:14:48

create ways for two countries to verifiably

2:14:50

adhere to some kind of international agreement.

2:14:52

Or to find, like you said, paths

2:14:54

for de -escalation. That's the sort of

2:14:56

thing that that we actually could get

2:14:59

to. And that's one of the strong

2:15:01

positives of where you could end up

2:15:03

going. That would be what's really fascinating.

2:15:05

Artificial general intelligence becomes superintelligence and it

2:15:07

immediately weeds out all the corruption. goes,

2:15:09

hey, this is the problem. Like a massive

2:15:11

doge in the sky. Exactly. We figured it

2:15:13

out. You guys are all criminals and exposed

2:15:15

to all the people. Like these people that

2:15:18

are your leaders have been profiting and they

2:15:20

do it on purpose. And this is how

2:15:22

they're doing it. And this is how they're

2:15:24

manipulating you. And these are all the lies

2:15:26

that they've told. I'm sure

2:15:28

that list is pretty. Whoa. It

2:15:30

would be scary. If you could X -ray the

2:15:32

world right now. Like see all

2:15:34

the you'd want an MRI you want

2:15:37

to get like down to the

2:15:39

tissue Yeah, you're right probably you want

2:15:41

to get down to the cellular

2:15:43

level, but like it I mean it

2:15:45

would be offshore accounts then you

2:15:47

start There would be so much like

2:15:50

the stuff that comes out, you

2:15:52

know, from just randomly, right? Just just

2:15:54

random shit that comes out like,

2:15:56

yeah, the, um, I forget that, that,

2:15:58

that like Argentinian, I think what

2:16:00

you were talking about, like the Argentinian

2:16:03

thing they did. uh... came

2:16:05

out a few years ago around all the oligarchs

2:16:07

in the uh... they're off the street thing at

2:16:09

the uh... yeah i i i i i i

2:16:11

i i the the uh... one that they're the

2:16:13

uh... laundromat movie they're so you see that and

2:16:15

the papers that and the papers and it's all

2:16:17

that now that's a good movie you know is

2:16:19

it called the panama papers the movie is called

2:16:21

the laundromat Yeah, yeah. You remember

2:16:23

the Panama Papers, do you know?

2:16:26

Roughly. Yeah, it's like all

2:16:28

the oligarchs stashing their cash in

2:16:30

Panama. offshore tax haven stuff.

2:16:32

Yeah, it's like... And

2:16:34

in like some lawyer or someone basically

2:16:36

blew it wide open. And so

2:16:39

you got to see like every every

2:16:41

like oligarch and rich persons like

2:16:43

you like financial shit like every once

2:16:45

in a while, right? The world

2:16:47

gets just like a flash of like,

2:16:49

oh, here's what's going on to

2:16:51

the surface. Yeah. And then we all

2:16:54

like go back to sleep. What's

2:16:56

fascinating is like the unhidibles, right? The

2:16:58

little things that. can't help but

2:17:00

give away what is happening. You

2:17:02

think about this in AI quite a bit. Some

2:17:04

things that are hard for companies to

2:17:06

hide is they'll have a job posting. They've

2:17:08

got to advertise to recruit. So you'll

2:17:10

see, oh, interesting, open

2:17:13

AI is looking to hire some

2:17:15

people from hedge funds. I

2:17:18

wonder what that means. I wonder what that

2:17:20

implies. If you think about all of the

2:17:22

leaders in the AI space, think about the

2:17:24

Medallion Fund, for example. This is

2:17:26

super successful hedge fund. What

2:17:29

the man who broke the... The

2:17:31

man who broke the market. The man

2:17:33

who broke the market is the

2:17:35

famous book about the founder of the

2:17:37

Medallion Fund. This is basically a

2:17:39

fund that... They make like ridiculous like

2:17:41

five billion dollar returns every year

2:17:43

kind of guaranteed so so much so

2:17:45

they have to cap how much

2:17:48

they invest in the market because they

2:17:50

would otherwise like move the market

2:17:52

too much like affect it and the

2:17:54

fucked up thing about like the

2:17:56

way they trade and this is so

2:17:58

this is like 20 year old

2:18:00

information but it's still indicative because like

2:18:02

you can't get current information about

2:18:04

their strategies but one of the things

2:18:07

that they were the first to

2:18:09

kind of go for and figure out

2:18:11

is they were like Okay, they

2:18:13

basically were the first to build what

2:18:15

was at the time, as much

2:18:17

as possible, an AI that autonomously did

2:18:19

trading at great speeds and it

2:18:21

had no human oversight and just worked

2:18:23

on its own. What they found

2:18:25

was the strategies that were the most

2:18:28

successful were the ones that humans

2:18:30

understood the least because if you have

2:18:32

a strategy that a human can

2:18:34

understand, some human's gonna go and

2:18:36

figure out that strategy and trade against

2:18:38

you. Whereas if you have the kind of

2:18:40

the balls to go like, oh, this

2:18:42

thing is doing some weird shit that I

2:18:44

cannot understand no matter how hard I

2:18:47

try, let's just fucking YOLO and

2:18:49

trust it and like, and make it work. If you

2:18:51

have all the stuff debugged and if you have

2:18:53

the whole, if the whole system is working right, that's

2:18:55

where your biggest successes are. What

2:18:57

kind of strategies you talking about? I

2:19:00

don't know specific examples. I can

2:19:02

give it an analogy. Maybe

2:19:04

this. How are AI

2:19:06

systems trained today? Just as

2:19:08

a trading strategy. As

2:19:12

an example, you

2:19:14

buy this stock the Thursday

2:19:16

after the full moon and then

2:19:18

sell it the Friday after

2:19:20

the new moon or some random

2:19:22

shit like that. that it's

2:19:24

like, why does that even work?

2:19:27

Why would that even work?

2:19:29

So to sort of explain

2:19:31

why these strategies work better,

2:19:33

if you think about how

2:19:36

AI systems are trained today, very

2:19:39

roughly, you start with

2:19:41

this blob of numbers that's called a

2:19:43

model. And you feed it input, you

2:19:45

get an output. If the output you

2:19:47

get is no good, if you don't

2:19:49

like the output, you basically fuck around

2:19:51

with all those numbers, change them a

2:19:53

little bit, and then you try again.

2:19:55

You're like, oh, OK, that's better. And

2:19:57

you repeat that process over and over

2:19:59

and over with different inputs and outputs.

2:20:01

And eventually those numbers, that mysterious ball

2:20:03

of numbers, starts to behave well. It

2:20:05

starts to make good predictions or generate

2:20:07

good outputs. Now, you don't know why

2:20:09

that is. You just know

2:20:11

that it. does a good job, at least

2:20:13

where you've tested it. Now, if you

2:20:16

slightly change what you tested on, suddenly you

2:20:18

could discover, oh, shit, it's catastrophically failing

2:20:20

at that thing. These things are very brittle

2:20:22

in that way. And that's part of

2:20:24

the reason why chat GPT will just like

2:20:26

completely go on a psycho binge fest

2:20:28

every once in a while. If you give

2:20:30

it a prompt that has like too

2:20:32

many exclamation points and asterisks in it or

2:20:34

something, like these, these systems are weirdly,

2:20:36

weirdly brittle in that way, but applied to

2:20:38

investment strategies. If all you're doing is

2:20:40

saying like, Optimize for like optimize

2:20:42

for returns give it in give it inputs

2:20:44

give it up be more money by

2:20:46

the end of the day It's like an

2:20:48

easy goal like it's a very like

2:20:50

clear -cut goal, right? You can give a

2:20:53

machine so you end up with a machine

2:20:55

that gives you these very like it

2:20:57

is a very weird Strategy this ball of

2:20:59

numbers isn't human understandable. It's just really

2:21:01

fucking good at making money And why is

2:21:03

it really fucking good at making money?

2:21:05

I don't know. I mean it just kind

2:21:07

of does the thing and I'm making

2:21:09

money I don't ask too many questions. That's

2:21:11

kind of like the So when you

2:21:13

try to impose on that system, human interpretability,

2:21:15

you pay what in the AI world

2:21:18

is known as the interpretability tax. Basically, you're

2:21:20

adding another constraint, and the minute you

2:21:22

start to do that, you're forcing it to

2:21:24

optimize for something other than pure rewards.

2:21:26

Like doctors using AI to diagnose diseases are

2:21:28

less effective than the chatbot on its

2:21:30

own. That's actually related, right? That's related. If

2:21:32

you want that system to get good

2:21:34

at diagnosis, that's one thing. Okay, just fucking

2:21:36

make a good diagnosis. If you want

2:21:38

it to be good at diagnosis and to

2:21:40

produce explanations that a good doctor, yeah,

2:21:43

we'll go like, okay, I'll use that.

2:21:45

Well, great. But guess what? Now you're

2:21:47

spending some of that precious compute on

2:21:49

something other than just the thing you're

2:21:52

trying to optimize for. And so now

2:21:54

that's going to come at a cost

2:21:56

of the actual performance of the system.

2:21:58

And so if you are going to

2:22:00

optimize like the fuck out of making

2:22:02

money, you're gonna necessarily deoptimize the fuck

2:22:04

out of anything else, including being able

2:22:06

to even understand what that system is

2:22:09

doing. And that's kind of like at

2:22:11

the heart of a lot of the

2:22:13

kind of big picture AI strategy stuff

2:22:15

is people are wondering like, how much

2:22:17

interpretability tax am I willing to pay

2:22:19

here? And how much does it cost?

2:22:21

And everyone's willing to go a little

2:22:24

bit further and a little further. So

2:22:26

OpenAI actually had a paper or I

2:22:28

guess a blog post where they talked

2:22:30

about this and they were like, look,

2:22:32

right now, We have this this Essentially

2:22:34

this like thought stream that our model

2:22:36

produces on the way to generating its

2:22:38

final output and That thought stream like

2:22:41

we don't want to touch it to

2:22:43

make it like Interpretable to make it

2:22:45

make sense because if we do that

2:22:47

then essentially it'll be optimized to convince

2:22:49

us of whatever the thing is that

2:22:51

we want to to do, to behave

2:22:53

well. So it's like, if you've used

2:22:55

like an open AI model recently, right,

2:22:58

like 03 or whatever, it's

2:23:00

doing its thinking before it starts like

2:23:02

outputting the answer. And so

2:23:04

that thinking is, yeah,

2:23:06

we're supposed to like be able

2:23:08

to read that and kind of

2:23:11

get it, but also... We we

2:23:13

don't want to make it too

2:23:15

legible because if we make it

2:23:17

too legible It's gonna be optimized

2:23:19

to be legible and and to

2:23:21

be convincing rather than to fool

2:23:23

us basically. I mean, yeah, exactly

2:23:26

But that's so that's the end

2:23:28

guys are making me less comfortable

2:23:30

than I thought you would How

2:23:32

bad are they gonna freak us

2:23:34

out? I

2:23:38

do want to highlight so the game plan

2:23:40

right now on the positive end. Let's see

2:23:42

how this works Jesus Jamie do you feel

2:23:44

the same way? I

2:23:49

mean I have articles I

2:23:51

didn't bring up that are supporting

2:23:53

some of this stuff like

2:23:55

today China quietly made some chip

2:23:57

that they shouldn't been able

2:23:59

to do because of the sanctions

2:24:01

Oh, and it's basically based

2:24:03

off of just sheer will okay,

2:24:05

so there's SMIC there's good

2:24:07

news on that one at least

2:24:09

This is a kind of

2:24:11

a bullshit strategy that they're using

2:24:13

so There's okay, so when

2:24:15

you make these insane like five

2:24:17

let's read that for people

2:24:19

just listening China quietly cracks a

2:24:21

five nanometer Yeah, that's it

2:24:23

without EUV. What is EUV extreme

2:24:25

ultra viral ultraviolet How SMIC

2:24:27

defy the chip sanctions with sheer

2:24:29

engineering? Yeah, so this is

2:24:31

like an espionage So there's But

2:24:33

actually though so there's a

2:24:35

good reason that a lot of

2:24:37

these articles are Making

2:24:40

it seem like this is a huge

2:24:42

breakthrough. It actually isn't as big

2:24:44

as it seems So so okay if

2:24:46

you want to make really really

2:24:48

really really exquisite this quote Moore's law

2:24:50

didn't die who wrote it moved

2:24:52

to Shanghai instead of giving up China's

2:24:54

grinding its way forward layer by

2:24:56

layer Pixel by pixel the future of

2:24:59

chips may no longer be written

2:25:01

by who holds the best tools, but

2:25:03

by who refuses to stop building The

2:25:06

rules are changing and DUV just lit

2:25:08

the fuse. Boy. Yeah, so I mean

2:25:10

wrote that article you can you can

2:25:12

you can't mo China There it is.

2:25:14

Yeah, you can view that as like

2:25:16

Chinese propaganda in a way actually so

2:25:18

what what what's actually going on here

2:25:21

is If so the Chinese only have

2:25:23

these deep ultraviolet lithography machines That's like

2:25:25

a lot of syllables, but it's just

2:25:27

a glorified chip like it's a giant

2:25:29

laser That zaps your chips to like

2:25:31

make the chips when when you're so

2:25:33

we're talking about like you do these

2:25:35

atomic layer patterns on the chips and

2:25:38

shit and like what this UV thing

2:25:40

does is it like fires like a

2:25:42

really high power laser beam laser beam

2:25:44

Yeah, they attach the head of sharks

2:25:46

that just shoot at the chips. Sorry.

2:25:48

That was like an Austin powers Anyway,

2:25:50

they felt like shoot at the chips

2:25:52

and That causes depending on how the

2:25:54

thing is is designed they'll like have

2:25:57

a liquid layer of the stuff that's

2:25:59

going to go on the chip. The

2:26:01

UV is really, really tight and causes

2:26:03

it exactly, causes it to harden and

2:26:05

then they wash off the liquid and

2:26:07

they do it all over again. Like

2:26:09

basically this is just imprinting a pattern

2:26:11

on a chip. So whatever the tiny

2:26:14

printer. Yeah. So that's it. And so

2:26:16

the exquisite machines that we get to

2:26:18

use or that they get to use

2:26:20

in Taiwan are called extreme ultraviolet lithography.

2:26:22

machines. These are those crazy lasers. The

2:26:25

ones that China can use, because

2:26:27

we've prevented them from getting any of

2:26:29

those extreme ultraviolet lithography machines, the

2:26:31

ones China uses are previous generation machines

2:26:34

called deep ultraviolet, and they can't

2:26:36

actually make chips as high a resolution

2:26:38

as ours. So what they

2:26:40

do is, and what this article is about

2:26:42

is, they basically take the same chip,

2:26:44

they zap it once with DUV. And then

2:26:46

they got to pass it through again,

2:26:48

zap it again to get closer to the

2:26:50

level of resolution we get in one

2:26:52

pass with our exquisite machine. Now, the

2:26:54

problem with that is you got to pass

2:26:56

the same chip through multiple times, which slows down

2:26:58

your whole process. It means your yields at

2:27:00

the end of the day are lower. It has

2:27:02

errors. Yeah, which makes it more costly. We've

2:27:04

known that this is a thing. It's called multi

2:27:06

-patterning. It's been a thing for a long time.

2:27:08

There's nothing new under the sun here. China

2:27:10

has been doing this for a while. But

2:27:13

so it's not. actually a huge

2:27:15

shock that this is happening, the question

2:27:17

is always, when you look at

2:27:19

an announcement like this, yields, yields, yields.

2:27:21

What percentage of the chips coming

2:27:23

out are actually usable and how fast

2:27:25

are they coming out? That determines

2:27:28

is it actually competitive. And that

2:27:30

article too, this ties into the

2:27:32

propaganda stuff we were talking about, right? If

2:27:34

you read an article like that, you could

2:27:36

be forgiven for going like, oh man, our

2:27:38

expert controls just aren't working, so we might

2:27:40

as well just give them up. when in

2:27:42

reality because like you look at the source

2:27:44

like the and this is and this is

2:27:46

how you know that also this is like

2:27:48

this is one of their propaganda things is

2:27:50

like you look at Chinese news sources what

2:27:53

are they saying What are the beats that

2:27:55

are common? And you know, just because of

2:27:57

the way their media is set up, totally

2:27:59

different from us. And we're not used to

2:28:01

analyzing things this way. But when you read

2:28:03

something in the South China Morning Post or

2:28:05

the Global Times or Xinhua and a few

2:28:07

different places like this, and it's the same

2:28:09

beats coming back, you know that someone was

2:28:11

handed a brief and it's like, you got

2:28:13

to hit this point, this point, this point.

2:28:16

And yep, they're going to find a way

2:28:18

to work that into the news cycle over

2:28:20

there. Jeez. And it's

2:28:22

also like slightly true like yeah, they

2:28:24

did manage to make chips at like

2:28:26

five nanometers cool It's not a lie.

2:28:28

It's just it's the same like propaganda

2:28:30

technique, right? You're not most of the

2:28:32

time You're not gonna confabulate something out

2:28:35

of nothing rather like you start with

2:28:37

the truth and then you push it

2:28:39

just a little bit Just a little

2:28:41

bit and you keep pushing pushing pushing

2:28:43

Wow, how much is this administration aware

2:28:45

of all the things that you're talking

2:28:47

about so so they're actually They've

2:28:51

got some right now. They're they're in

2:28:53

the middle of of like staffing up

2:28:55

some of the key positions because it's

2:28:57

a new administration still and this is

2:28:59

such a technical domain They've got people

2:29:01

there who are like like at the

2:29:03

kind of level they have some really

2:29:05

sharp. They have some people now Yeah

2:29:07

in in places like especially in some

2:29:10

of the export control offices now who

2:29:12

are some of the best in the

2:29:14

business Yeah, and and so and that's

2:29:16

that's really important like this is a

2:29:18

it's a weird space because so when

2:29:20

you want to actually recruit for for

2:29:22

You know government roles in this space.

2:29:24

It's really fucking hard because you're competing

2:29:26

against like an open AI like very

2:29:28

like low -range salaries like half a

2:29:30

million dollars a year The government pay

2:29:33

scale needless to say is like not

2:29:35

where I mean Elon worked for free

2:29:37

He can he can afford to but

2:29:39

but still taking a lot of time

2:29:41

out of his his day There's a

2:29:43

lot of people like that who are

2:29:45

like, you know, they They can't justify

2:29:47

the cost. They literally can't afford to

2:29:49

work for the government. Why would they?

2:29:51

Exactly. Whereas China is like, you don't

2:29:53

have a choice, bitch. Yeah. And

2:29:55

that's what they say. The Chinese word for

2:29:57

bitch is really biting. If you translated that,

2:30:00

it would be a real stain. I'm

2:30:02

sure. It's kind of

2:30:04

crazy because it seems almost impossible to

2:30:06

compete with that. That's the perfect setup.

2:30:08

If you wanted to control everything and

2:30:10

you wanted to optimize everything for the

2:30:12

state, that's the way you would do

2:30:14

it. but it's also easier to to

2:30:16

make errors and be wrong footed in

2:30:19

that way and also the Basically that

2:30:21

system only works if the dictator at

2:30:23

the top is just like very competent

2:30:25

because the the the risk always with

2:30:27

a dictatorship is like oh the dictator

2:30:29

turns over and now it's like just

2:30:31

a total dumbass And now you're the

2:30:33

whole thing and he surrounds himself I

2:30:35

mean look we just talked about like

2:30:37

information echo chambers online and stuff The

2:30:39

ultimate information echo chamber is the one

2:30:41

around Xi Jinping right now because no

2:30:43

one wants to give him bad news.

2:30:45

Yeah, I'm not gonna You know like

2:30:48

and so and you have this and

2:30:50

this is what you keep seeing right

2:30:52

is like with these uh

2:30:54

like um like provincial level debt in

2:30:56

in china right which is so

2:30:58

awful it's like people trying to hide

2:31:00

money under imaginary money under imaginary

2:31:02

mattresses and then hiding those mattresses under

2:31:04

bigger mattresses until eventually like no

2:31:06

one knows where the liability is and

2:31:08

that and then you get a

2:31:10

massive property bubble and any number of

2:31:12

other bubbles that are due to

2:31:14

to pop anytime. And the longer it

2:31:16

goes on, the more stuff gets

2:31:18

squirreled away. There's actually a story from

2:31:20

the Soviet Union that always gets

2:31:22

me, which is so Stalin

2:31:24

obviously purged and killed millions

2:31:26

of people in the 1930s.

2:31:28

So by the 1980s, the

2:31:31

ruling Politburo of the Soviet

2:31:33

Union, obviously things have been

2:31:35

different, generations had turned over

2:31:37

and all this stuff, but

2:31:40

those people, the most powerful

2:31:42

people, in the USSR could

2:31:44

not figure out what had

2:31:46

happened to their own families

2:31:49

during the purchase. Like

2:31:51

the information was just nowhere to be found

2:31:53

because the machine of the state was just

2:31:55

like so aligned around like we just like

2:31:57

we just got to kill as many fucking

2:31:59

people as we can like turn it over

2:32:01

and then hide the evidence of it and

2:32:03

then kill the people who killed the people

2:32:05

and then kill those people who killed those

2:32:08

people like it also wasn't just kill the

2:32:10

people, right? It was like a lot of

2:32:12

like kind of gulag archipelago style. It's about

2:32:14

labor, right? Because the fundamentals of the economy

2:32:16

are so shit that you basically have to

2:32:18

find a way to justify putting people in

2:32:20

labor camps and life. That's right. But it

2:32:22

was very much like you grind mostly or

2:32:24

largely you grind them to death and basically

2:32:27

they've gone away and you burn the records

2:32:29

of it happening. So whole towns, right? That

2:32:31

disappeared. Like people who are like there's no

2:32:33

record or there's like or usually the way

2:32:35

you know about it is there's like one

2:32:37

dude and it's like this one dude has

2:32:39

a very precarious escape story and it's like

2:32:41

if literally this dude didn't get away, you

2:32:44

wouldn't know about the entire town that was

2:32:46

like worked out. Yeah, it's crazy. Jesus Christ.

2:32:48

Yeah. The stuff that like - Apart from

2:32:51

that though, communism works well. Communism, great.

2:32:53

It just hasn't been done right. That's

2:32:55

right. I feel like we could do

2:32:57

it right. And we have a ten

2:32:59

page plan. That yeah, we came real

2:33:01

close Came real close so close. Yeah

2:33:03

Yeah, and that's what the blue no

2:33:05

matter who people don't really totally understand

2:33:07

like we're not even talking about political

2:33:10

parties We're talking about power structures. Yeah,

2:33:12

we came close to a terrifying power

2:33:14

structure And it was willing to just

2:33:16

do whatever it could to keep it

2:33:18

rolling and it was rolling for four

2:33:20

years It was rolling for four years

2:33:22

without anyone at the helm Show me

2:33:24

the incentives, right? I mean that's always

2:33:27

the the question like yeah One of

2:33:29

the things is too, like when you

2:33:31

have such a big structure that's overseeing

2:33:33

such complexity, right? Obviously, a lot of

2:33:35

stuff can hide in that structure. And

2:33:37

it's actually cut. It's not unrelated to

2:33:39

the whole AI picture. Like you need.

2:33:41

There's only so much compute that you

2:33:43

have at the top of that system

2:33:46

that you can spend right as the

2:33:48

president as a cabinet member like whatever

2:33:50

You you can't look over everyone's shoulder

2:33:52

and do their homework You can't do

2:33:54

founder mode all the way down and

2:33:56

all the branches and all the like

2:33:58

action officers and all that's not gonna

2:34:00

happen which means You're spending five seconds

2:34:02

thinking about how to unfuck some part

2:34:05

of the government, but then the like

2:34:07

you know corrupt people who run their

2:34:09

own fiefdoms there spend every day trying

2:34:11

to figure out how to justify themselves.

2:34:13

Well, that's the USAID dilemma. Is

2:34:16

there uncovering just insane amount

2:34:18

of NGOs? Where's this going?

2:34:20

We talked about this the

2:34:22

other day, but India has

2:34:24

an NGO for every 600

2:34:27

people. Wait, what? We need

2:34:29

more NGOs. There's 3 .3

2:34:31

million NGOs. India

2:34:33

do they do they like bucket like

2:34:35

what one of the categories that they

2:34:37

fall into like who fucking knows That's

2:34:39

part of the problem that one of

2:34:41

the things that Elon had found is

2:34:43

that there's money that just goes out

2:34:45

with no receipts and yeah, it's billions

2:34:48

of dollars We need to take that

2:34:50

further. We need an NGO for every

2:34:52

person in India. We will get eventually

2:34:54

It's look exponential trend. We're gonna just

2:34:56

like AI the number of NGOs is

2:34:58

is like doubling every year incredible progress

2:35:00

and bullshit Geo

2:35:02

-scaling law, the bullshit scaling law. Well,

2:35:04

it's just that unfortunately it's Republicans doing

2:35:06

it, right? So it's unfortunately the Democrats

2:35:08

are gonna oppose it even if it's

2:35:10

showing that there's like insane waste of

2:35:12

your tax dollars. I thought some of

2:35:14

the doge stuff was pretty bipartisan. There's

2:35:18

congressional support at least on both

2:35:20

sides, no? Well, sort of. I

2:35:23

think the real issue is in

2:35:25

dismantling a lot of these programs

2:35:27

that... you can point to some

2:35:29

good, some of these programs do.

2:35:31

The problem is some of them are so

2:35:34

overwhelmed with fraud and waste that it's like

2:35:36

to keep them active in the state they

2:35:38

are. What do you do? Do you rip

2:35:40

the Band -Aid off and start from scratch? What

2:35:42

do you do with the Department of Education?

2:35:44

Do you say, why are we number 39

2:35:46

when we were number one? What did you

2:35:48

guys do with all that money? Did

2:35:51

you create problems? There's this idea in

2:35:53

software engineering. Actually, you're talking to one of

2:35:55

our employees about this, which is like, refactoring,

2:35:57

right? So when you're writing like a

2:35:59

bunch of software, it gets really, really big

2:36:01

and hairy and complicated. And there's all

2:36:04

kinds of like dumb ass shit. And there's

2:36:06

all kinds of waste that happens in

2:36:08

that, in that code base. There's this thing

2:36:10

that you do every, you know, every

2:36:12

like few months is you just think called

2:36:14

refactoring, which is like, you go like,

2:36:16

okay, we have, you know, 10 different things

2:36:18

that are trying to do the same

2:36:20

thing. Let's get rid of

2:36:22

nine of those things and just.

2:36:24

like rewrite it as the one

2:36:27

thing. So there's like a cleanup

2:36:29

and refresh cycle that has to

2:36:31

happen whenever you're developing a big

2:36:33

complex thing that does a lot

2:36:35

of stuff. The thing is like

2:36:37

the US government at every level

2:36:39

has basically never done a refactoring

2:36:41

of itself. And so the way

2:36:43

that problems get solved is you're

2:36:45

like Well, we need to

2:36:47

do this new thing. So we're

2:36:49

just going to like stick on

2:36:52

another appendage to the beast and,

2:36:54

and get that appendage to do

2:36:56

that new thing. And like, that's

2:36:58

been going on for 250 years.

2:37:00

So we end up with like

2:37:02

this beast that has a lot

2:37:04

of appendages, many of which do

2:37:06

incredibly duplicate of an wasteful stuff

2:37:08

that if you were a software

2:37:10

engineer, just like not politically, just

2:37:12

objectively looking at that as a

2:37:14

system, you'd go like, Oh, This

2:37:16

is a catastrophe. And

2:37:19

like, we have processes that the

2:37:21

industry, we understand how what needs to

2:37:23

be done to fix that. You have

2:37:25

to refactor. But they haven't done that.

2:37:28

Hence the $36 trillion of debt. It's

2:37:30

a problem too, though, in all, like,

2:37:32

when you're a big enough organization, you run

2:37:34

into this problem. Like, Google has this problem

2:37:36

famously. Facebook had it. Like, we had friends

2:37:38

like, like Jason, so Jason's the guy you

2:37:40

spoke to about that. Like, so,

2:37:43

so he's, he's like a startup. Engineer

2:37:45

so so he works in like relatively

2:37:47

small code bases and he he like

2:37:49

you know it can hold the whole

2:37:51

code base in his head at a

2:37:53

time, but when you move over to

2:37:55

You know Google to Facebook like all

2:37:57

a sudden this gargantuan code base starts

2:37:59

to look more like the complexity of

2:38:01

the US government just like very you

2:38:04

know very roughly in terms of Now

2:38:06

you're like okay. Well we want to

2:38:08

add functionality And but so we want

2:38:10

to incentivize our teams to to build

2:38:12

products that are gonna be valuable and

2:38:14

the challenges The best way to

2:38:16

incentivize that is to give people incentives to

2:38:18

build new functionality. Not to refactor. There's no

2:38:20

glory. If you work at Google, there's no

2:38:22

glory in refactoring. If you work at Meta,

2:38:24

there's no glory in refactoring. Like, friends -

2:38:26

Like, there's no promotion, right? There's no - Exactly.

2:38:28

You have to be a product owner. So

2:38:30

you have to, like, invent the

2:38:32

next Gmail. got to invent the next Google

2:38:34

Calendar. You've got to do the next Messenger

2:38:37

app. That's how you get promoted. And so,

2:38:39

you've got, like, this attitude. You go into

2:38:41

there and you're just like, let me crank

2:38:43

this stuff out and, like, try to ignore

2:38:45

all the shit in the codebase. No glory

2:38:47

in there. And what you're left with is

2:38:49

this, like, A, this Frankenstein monster of a

2:38:51

code base that you just keep stapling more

2:38:53

shit onto. And then B, this

2:38:55

massive graveyard of apps that never get used. This

2:38:57

is like the thing Google is famous for. If

2:38:59

you ever see like the Google graveyard of apps,

2:39:01

it's like all these things that you're like, oh yeah,

2:39:03

I guess I kind of remember Google Me. Somebody

2:39:05

made their career off of launching that shit and

2:39:07

then peaced out and it died. That's

2:39:10

like the incentive structure at Google,

2:39:12

unfortunately. And it's also kind of

2:39:14

the only way to do, I mean, or

2:39:16

maybe it's probably not, but in the world

2:39:18

where humans are doing the oversight, that's your

2:39:20

limitation, right? You got some people at the

2:39:22

top who have a limited bandwidth and compute

2:39:24

that they can dedicate to like hunting down

2:39:26

the problems. AI agents might

2:39:28

actually solve that. You could actually have

2:39:30

a sort of autonomous AI agent

2:39:32

that is the autonomous CEO or something

2:39:34

go into an organization and uproot

2:39:37

all the things and do that refactor.

2:39:39

You could get way more efficient

2:39:41

organizations out of that. I mean, thinking

2:39:43

about government corruption and waste in

2:39:45

front, that's the kind of thing where

2:39:47

those sorts of tools could be

2:39:50

radically empowering, but you got to get

2:39:52

them to work right and for

2:39:54

you. We've

2:39:56

given us a lot to think about. Is

2:39:59

there anything more should we wrap

2:40:01

this up if we've made you sufficiently

2:40:03

uncomfortable? I'm super uncomfortable was the was

2:40:05

the very uneasy was the butt -tack

2:40:07

too much at the beginning or with

2:40:09

no was fine. No, that's fine. All

2:40:12

of it was weird It's just you

2:40:14

know, I always try to look at

2:40:16

some non cynical way out

2:40:18

of this. Well, the thing is,

2:40:20

like, there are paths out. We talked about

2:40:22

this and the fact that a lot

2:40:24

of these problems are just us tripping on

2:40:26

our own feet. So if we can

2:40:28

just, like, unfuck ourselves a little bit, we

2:40:30

can unleash a lot of this

2:40:32

stuff. And as long as we

2:40:35

understand also the bar that security

2:40:37

has to hit and how important

2:40:39

that is, like, we actually can

2:40:41

put all this stuff together. We have the

2:40:44

capacity. It all exists. It just needs

2:40:46

to actually get aligned and around an initiative, and

2:40:48

we have to be able to reach out and

2:40:50

touch. On the control side, there's also a world

2:40:52

where, and this is actually, like if you talk

2:40:54

to the labs, this is what they're actually planning

2:40:56

to do, but it's a question of how methodically

2:40:58

and carefully they can do this. The

2:41:00

plan is to ratchet up capabilities and

2:41:02

then scale, in other words. And

2:41:04

then as you do that, you

2:41:07

start to use your AI systems,

2:41:09

your increasingly clever and powerfully AI

2:41:11

systems, to do research on technical

2:41:13

control. So you basically build the

2:41:15

next generation of systems. You try to get

2:41:17

that generation of systems to help you just inch

2:41:19

forward a little bit more on the capability

2:41:21

side. It's a very precarious balance, but it's something

2:41:23

that at least isn't insane on the face

2:41:26

of it. And fortunately, I

2:41:28

mean, is the... the default path,

2:41:30

or the labs are talking about that

2:41:32

kind of control element as being a

2:41:34

key pillar of their strategy. But these

2:41:36

conversations are not happening in China. So

2:41:38

what do you think they're doing to

2:41:40

keep AI from uprooting their system? So

2:41:42

that's interesting. Because

2:41:44

I would imagine they don't want to lose control. Right.

2:41:46

There's a lot of... Ambiguity and uncertainty about

2:41:49

what's going on in China So there's been a

2:41:51

lot of like track 1 .5 track 2 diplomacy

2:41:53

basically where you have you know non -government guys from

2:41:55

one side talked to government guys from this I

2:41:57

talked to non -government from the other side and

2:41:59

kind of start to align on like okay What

2:42:01

do we think the issues are? You

2:42:03

know the the Chinese are there are a

2:42:05

lot of like freaked out Chinese researchers Yeah, and

2:42:08

we've come out publicly and said hey like

2:42:10

we're really concerned about this whole loss of

2:42:12

control thing their public statements and all that you

2:42:14

also have to be mindful that any statement

2:42:16

the CCP puts out is a statement they want

2:42:18

you to see. So when they say like,

2:42:20

oh yeah, we're really worried about this thing, it's

2:42:22

genuinely hard to assess what that even means. But

2:42:25

they're like, as you start to

2:42:28

build these systems, we expect you're

2:42:30

gonna see some evidence of this shit before.

2:42:32

And it's not necessarily, it's not like you're

2:42:34

gonna build the system necessarily and have it

2:42:36

take over the world. What

2:42:38

we see with agents. Yeah, so

2:42:40

I was actually gonna I just

2:42:42

there's really really good point and

2:42:45

and something where like Open source

2:42:47

AI is like even you know

2:42:49

could potentially have an effect here

2:42:51

So a lot of a couple

2:42:53

of the major labs like open

2:42:55

AI anthropic I think came out

2:42:58

recently and said like look We

2:43:00

we're on the cusp our systems

2:43:02

are on the cusp of being

2:43:04

able to help a total novice

2:43:06

like somewhat no experience develop and

2:43:09

deploy and release a known biological

2:43:11

threat. And that's something we're going to

2:43:13

have to grapple with over the next few months. And

2:43:16

eventually, capabilities like this, not

2:43:18

necessarily just biological, but also

2:43:20

cyber and other areas, are

2:43:22

going to come out in open source. And

2:43:24

when they come out in open source,

2:43:26

for anybody to download and use, when they

2:43:28

come out in open source, you

2:43:30

actually start to see some Some

2:43:33

some things happen like some some

2:43:35

incidents like some some major hacks

2:43:37

that we're just done by like

2:43:39

a random motherfucker who just wants

2:43:41

to see the world burn but

2:43:44

that wakes us up to like

2:43:46

oh shit These things actually are

2:43:48

powerful. I think one of the

2:43:50

aspects also here is We're still

2:43:52

in that post Cold War honeymoon

2:43:54

many of us right in that

2:43:56

mentality like not everyone has like

2:43:59

wrapped their heads around this stuff

2:44:01

and the like What needs to

2:44:03

happen is something that makes us

2:44:05

go like, oh, damn, we act

2:44:07

like we weren't even really trying

2:44:09

this entire time. Because this is

2:44:11

like this is the 9 -11

2:44:13

effect. This is the Pearl Harbor

2:44:16

effect. Once you have a thing

2:44:18

that aligns everyone around like, oh, shit, this

2:44:20

is real. We actually need to do

2:44:22

it. And we're freaked out. We're actually safer.

2:44:24

We're safer when we're all

2:44:27

like, OK, something important needs

2:44:29

to happen. Instead

2:44:31

of letting them just slowly chip away

2:44:33

exactly and so we like we need

2:44:35

to have some sort of shock and

2:44:37

we probably will get some kind of

2:44:39

shock like over the next few months

2:44:41

the way things are trending and when

2:44:43

that happens then but I mean like

2:44:46

it's or years that makes you feel

2:44:48

better but because but because you have

2:44:50

the potential for this open source like

2:44:52

it's probably gonna be like a survivable

2:44:54

shock right but but still a shock

2:44:56

and so let us actually realign around

2:44:58

like okay let's actually and solve some

2:45:00

problems for real. And so putting together

2:45:02

the groundwork, right, is what we're doing

2:45:05

around, like, let's pre -think a lot of

2:45:07

this stuff so that, like, if and

2:45:09

when the shock comes... We have a

2:45:11

break glass plan. We have a plan.

2:45:14

And the loss of control stuff is similar.

2:45:16

Like, so one interesting thing that happens

2:45:18

with AI agents today is they'll, like, they'll

2:45:20

get any... So an AI agent will take

2:45:22

a complex task that you give it, like,

2:45:24

find me, I don't know, the... best sneakers

2:45:26

for me online, some shit like that. And

2:45:28

they'll break it down into a series of

2:45:30

sub steps. And then each of those steps,

2:45:32

it'll farm out to a version of itself,

2:45:34

say, to execute autonomously. The

2:45:36

more complex the task is, the more of

2:45:38

those little sub steps there are in it.

2:45:41

And so you can have an AI

2:45:43

agent that nails like 99 % of those

2:45:45

steps. But if it screws up just

2:45:47

one, the whole thing is a flop,

2:45:49

right? And so. If you

2:45:51

think about the loss of control

2:45:53

scenarios that a lot of people look

2:45:55

at, or autonomous replication, like

2:45:58

the model gets access to the internet,

2:46:00

copies itself onto servers and all that

2:46:02

stuff, those are

2:46:04

very complex movements. If it screws

2:46:06

up at any point along the way, that's

2:46:08

a tell, like, oh, shit, something's happening there. And

2:46:10

you can start to think about, OK, well,

2:46:12

what went wrong? We get another do. We get

2:46:14

another try. And we can learn from our

2:46:16

mistakes. So there is

2:46:18

this picture. One

2:46:21

camp goes, oh, well, we're going to

2:46:23

make the super intelligence in a vat. And

2:46:25

then it explodes out, and we lose

2:46:27

control over it. That doesn't. necessarily seem like

2:46:29

the default scenario right now. It seems

2:46:31

like what we're doing is scaling these systems.

2:46:34

We might unhobble them with big capability

2:46:36

jumps, but there's a

2:46:38

component of this that is a continuous process

2:46:40

that lets us get our arms around it

2:46:43

in a more staged way. That's

2:46:45

another thing that I think is in our

2:46:47

favor that we didn't expect before. As

2:46:50

a field basically and I think that's

2:46:52

that's a good thing like that helps

2:46:54

you kind of Detect these breakout attempts

2:46:56

and do things about them. All right.

2:46:58

I'm gonna bring this home I'm freaked

2:47:00

out. So thank you. Thanks for trying

2:47:02

to make me feel better I don't

2:47:04

think you did but I really appreciate

2:47:06

you guys and appreciate your perspective because

2:47:08

it's very important and It's very illuminating,

2:47:11

you know, really gives you a sense

2:47:13

of what's going on. And I think

2:47:15

one the things that you said that's

2:47:17

really important is like, it sucks that

2:47:19

we need a 9 11 moment or

2:47:21

a Pearl Harbor moment to realize what's

2:47:23

happening. So we all come together, but

2:47:25

hopefully slowly, but surely through conversations like

2:47:27

this, people realize what's actually happening. You

2:47:29

need one of those moments like every

2:47:31

generation, like that's how you get contact

2:47:33

with the truth. And it's like, it's

2:47:35

painful, but like the lights on the

2:47:37

other side. Thank you.

2:47:39

Thank you very much. Thank you.

2:47:41

Bye brothers.

Rate

Join Podchaser to...

  • Rate podcasts and episodes
  • Follow podcasts and creators
  • Create podcast and episode lists
  • & much more

Episode Tags

Do you host or manage this podcast?
Claim and edit this page to your liking.
,

Unlock more with Podchaser Pro

  • Audience Insights
  • Contact Information
  • Demographics
  • Charts
  • Sponsor History
  • and More!
Pro Features