My Takeaways From AI 2027

Released Monday, 14th April 2025

Episode Transcript
0:01

Welcome to the Astral Codex Ten Podcast for the 8th of April, 2025. Title: My Takeaways From AI 2027.

0:05

Here's a list of things I updated on after working on the scenario. Some of these are discussed in more detail in the supplements, including the compute forecast, timelines forecast, takeoff forecast, AI goals forecast, and security forecast. I'm highlighting these because it seems like a lot of people missed their existence, and they're what transforms the scenario from cool story to research-backed debate contribution. These are my opinions only, and not necessarily endorsed by the rest of the team.

0:37

Cyberwarfare as one of the first geopolitically relevant AI skills. AI will scare people with hacking before it scares people with bioterrorism or whatever: partly because AI is already showing especially quick progress at coding, partly because it doesn't require lab supplies or bomb-making chemicals, and partly because there are more hackers than would-be terrorists.

1:01

If AI masters cyberwarfare, there will be intense pressure for the government to step in. That's bad for open source: it'll be restricted unless they find some way to guarantee the models can't be trained to hack. Bad for the people who want to pause AI: we can't let China's army of auto-hackers get ahead of ours. And ambiguous for the AI companies: we don't predict they'll get fully nationalized, but they'll end up in the same bucket as uranium miners, Middle Eastern fertilizer factories, etc.

1:29

But it's good for biosafety. Governments will have to confront tough security questions around AI when AIs first master hacking. By the time AIs master bioweapon production, some sort of regulatory framework may already be in place. The scenario is agnostic about whether some early bioterrorist could get lucky and get a small boost from a marginal model, but it doesn't expect them to have easy access to true superintelligence.

1:55

A period of potential geopolitical instability.

2:01

If America has nukes and is willing to use them, and Russia doesn't, then America automatically wins every conflict. So if you're Russia and you hear America will get nukes next year, what do you do? You either surrender or try some desperate gambit to destroy their nuclear program. Likewise, if you're America, you've got nukes, and you know Russia will get nukes next year, what do you do? You can either nuke them now and automatically win, or you give up your advantage and have the whole Cold War. von Neumann really wanted to nuke them in 1947 and win automatically. We didn't do that because we weren't psychos, but the logic is sound.

2:42

If true superintelligence is possible, then it's a decisive strategic advantage in the same sense as nukes. You don't even have to be a psycho; maybe you can use it to cause a bloodless regime change. So if you get it first, there's a strong incentive to use it right away. And if you're on track to get it second, there's a strong incentive to flip the game board so that doesn't happen. If everybody realizes this ahead of time, and America is on track to get superintelligence three months before China, then there may be a period where China considers whether to lie down and die versus do something dramatic. Kinetic strikes on US data centers? In a best-case scenario, this provides an opportunity for a deal, maybe enshrining a peaceful international AI effort. You can decide how likely you think that one is.

3:33

The software-only singularity. Skeptical futurists expect two types of bottlenecks to restrain the singularity. There are bottlenecks to AI progress, for example compute, that prevent you from rocketing to superintelligence too quickly. And there are bottlenecks to automation, for example factory build times and regulations, that prevent AIs from changing the economy too quickly. Take both bottlenecks seriously and you get a long feedback cycle where AIs get a little more intelligent, automate a little more of the economy, including chip factories, use that to get a little more intelligent still, and make a gradual takeoff over the course of decades.

4:10

AI 2027 objects to the first bottleneck. Smarter researchers can use compute more efficiently. In fact, we know this is happening: about half of all AI scaling since 2020 has been algorithmic progress, where we get better at using the compute we have. If we hold compute constant but get 10 times faster algorithmic progress because of the intelligence explosion, then we get 5 times faster overall AI improvement.
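A quick arithmetic sketch of that claim (my reading of the logic in the text, not code from the supplements): if compute and algorithms have each contributed about half of the overall rate of improvement, then freezing compute while multiplying algorithmic progress by 10 leaves an overall rate of 0.5 × 10 = 5 times the old one.

```python
# Toy decomposition of AI progress into compute and algorithmic components.
# Assumption (mine, for illustration): the two contributions are additive,
# so overall rate = compute contribution + algorithmic contribution.

compute_share = 0.5    # fraction of recent scaling from more compute
algo_share = 0.5       # fraction from algorithmic progress (~half since 2020)

baseline_rate = 1.0    # normalize today's overall improvement rate to 1x

# Intelligence-explosion scenario: compute frozen, algorithmic progress 10x.
compute_speedup = 0.0  # held constant, so it contributes no further growth
algo_speedup = 10.0

new_rate = compute_share * compute_speedup + algo_share * algo_speedup
print(new_rate / baseline_rate)  # -> 5.0, the "5 times overall improvement"
```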

4:36

The skeptics counter-object: the research needed to speed up algorithmic progress is itself bottlenecked by compute. Researchers need to do experiments to determine which new algorithms work and what parameters to give them. It might be that smarter researchers could figure out how to use this compute more efficiently, but then you don't get an intelligence explosion until your AIs are already smarter than human researchers, that is, when you're already past AGI.

5:00

AI 2027 disagrees. Although the counter-objection is directionally correct, there are little ways intelligence can boost speed even when compute is held constant. How do we know? Partly through armchair attempts to enumerate possibilities: for example, even if you can't speed up by adding more researchers, surely giving the same researchers higher serial speed has to count for something. And partly because we surveyed AI researchers and asked: if you had a bunch of AIs helping you, but only the same amount of compute, how much faster would your research go? And they mostly said: somewhat faster. All these little boosts will compound on themselves in typical intelligence-explosion fashion, and when you game it out, you get a one-year-or-so takeoff to superintelligence.
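To see how "somewhat faster" boosts can compound into a takeoff on the order of a year, here is a minimal toy feedback loop. All the numbers are made-up assumptions of mine for illustration, not the AI 2027 takeoff model:

```python
# Toy intelligence-explosion loop: research speed feeds back into itself.
# The 30%/month compounding factor and the "100 units to superintelligence"
# threshold are both illustrative assumptions, not forecast outputs.

progress = 0.0   # abstract research progress; 100 = superintelligence
speed = 1.0      # research speed relative to a human-only baseline
months = 0

while progress < 100 and months < 1000:
    progress += speed  # one month of work at the current speed
    # Each month of progress makes the AI assistants slightly more helpful,
    # compounding the speed multiplier.
    speed *= 1.3
    months += 1

print(months)  # -> 14: with these assumptions, a takeoff of roughly a year
```

The point of the sketch is only that modest per-step boosts, applied to their own production process, shorten the timeline dramatically once they start compounding.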

5:47

Here you've avoided bumping up against most of the real-world physical bottlenecks to automation: factory build times, regulations, etc. You have a data center full of superintelligences in a world which is otherwise unchanged. You might not even have very good consumer-facing AIs. We think that AI companies probably won't release many new models mid-intelligence-explosion; they'd rather spend those resources exploding faster. Later, when we do try to model automation speed, we're asking what happens when full superintelligences get unleashed on a normal human world, rather than what happens when 30% smarter AIs try to automate a world optimized by 25% smarter AIs.

6:24

The relevance or irrelevance of open-source AI. In the scenario, the leading company's AIs are a year or two ahead of the best open-source AIs. This isn't a bold prediction; it's true now. We only say the trend will not change. But in the scenario, the intelligence explosion only takes a year or two. So by the time the leading company's AIs pass the human level, the open-source AIs are only somewhat better than the best AIs today. That means they aren't an effective check on post-intelligence-explosion superintelligences. It might be even worse than that: once AI becomes good at cyberwarfare, there will be increased pressure on companies like Meta and DeepSeek to stop releases until they're sure they can't be jailbroken to hack people. If that's hard, it could slow open source even further.

7:16

AI communication as pivotal. In the misalignment branch, AIs stop using English chain of thought and think in "neuralese", a pre-symbolic language of neural weight activations. (Do humans do this? Is this the same as the Mentalese hypothesis?) They communicate by sending neuralese vectors to each other, sort of like humans gaining a form of telepathy that lets them send mental states through email. This is good for capabilities, since neuralese is faster and richer than English, but it dooms alignment: not only can researchers no longer read the chain of thought to see if the model is scheming, they can no longer even monitor inter-AI communication to check what the models are talking about (for example: "hey, should we kill all humans?").

7:59

In the humanity-survives branch, companies realize this is dangerous, take the capabilities hit, and stick with English. They monitor chain of thought and inter-AI communication, or, more realistically, have too-dumb-to-plot AIs like GPT-4 do this. These heavily monitored AIs are never able to coordinate a successful plot, and they invent good alignment techniques while still under human control. When real-world researchers debate whether or not to implement neuralese, we hope they think: hey, isn't this the decision that doomed humanity in that AI 2027 thing? Or, if we're lucky, the tech level it takes to implement neuralese will also provide us with too-dumb-to-plot GPT-4-style neuralese interpreters, in which case we could try monitoring again.
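The monitoring asymmetry is easy to picture in code. A minimal sketch, entirely mine and purely illustrative: when agents exchange English text, even a weak overseer can scan the channel, but when they exchange raw activation vectors there is nothing human-legible to scan.

```python
# Illustrative contrast between a monitorable English channel and an opaque
# neuralese channel. The message types and phrases are hypothetical.
from typing import Union
import numpy as np

TextMessage = str              # English chain of thought: monitorable
NeuraleseMessage = np.ndarray  # raw activation vector: opaque to humans

SUSPICIOUS_PHRASES = ["kill all humans", "hide this from the overseers"]

def monitor(message: Union[TextMessage, NeuraleseMessage]) -> str:
    """A weak overseer standing in for 'too dumb to plot' monitoring."""
    if isinstance(message, str):
        # English channel: even a crude keyword scan gets some traction,
        # and a weak-but-trusted model could do far better.
        if any(phrase in message.lower() for phrase in SUSPICIOUS_PHRASES):
            return "FLAGGED"
        return "ok"
    # Neuralese channel: a float vector with no dictionary. Without a
    # trusted neuralese interpreter, the overseer can only shrug.
    return "unmonitorable"

print(monitor("Subtask done. Next: kill all humans."))  # -> FLAGGED
print(monitor(np.random.randn(4096)))                   # -> unmonitorable
```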

8:43

Ten people on the inside. The title comes from this Less Wrong post (link in the original post), but it was the impression I got from AI 2027 too. If things go this fast, there won't be time for a grassroots-level campaign for safety, or even for safety-related legislation. Whether or not the AI is safe will depend on company insiders: first, the CEO or board or leadership, and how much they choose to prioritize safety; second, the alignment team, and how skilled they are; third, the rank-and-file employees, and how much they grumble or revolt if their company seems to be acting irresponsibly. I suppose the national security state would also have the opportunity to object, but it doesn't seem like the sort of thing they would do.

9:25

This is one reason I oppose the campaigns that have sprung up recently to get safety-conscious people to quit AI companies. I'm tempted to push the opposite: are we sure we shouldn't be pushing safety-conscious people to join AI companies as fast as possible? Maybe not if you're some genius whose presence would massively accelerate capabilities research, but if you're replacement-level or only slightly above? Sure. This claim has not been checked with smart people, and you should run it by experts who have thought about it more before acting on it. Still, I want to get it out there as something to think about before the "you should quit" campaigners fill up the space. But this also means big possible gains from getting anyone other than ten people on the inside involved. For example, if labs can commit to, or be forced into, publishing safety cases, that brings the number of eyeballs on their plans from tens to hundreds.

10:24

Potential for very fast automation. I have to admit I'm skeptical of this one, but Daniel and the other forecasters have done their homework, and I can only object based on vague heuristics. History provides examples of very fast industrial transitions. For example, during World War II, the US converted most civilian industry to a war footing within a few years. The most famous example is Willow Run, where the government asked Ford to build a bomber factory; three years after the original request, it was churning out a bomber per hour. How did Willow Run move so quickly? It had near-unlimited money, near-unlimited government support, talented people in charge, and the ability to piggyback off Ford's existing capacity to build and staff factories.

11:03

We imagine the first superintelligences in their data centers, chomping at the bit to transform the economy. Aligned superintelligences will want this: the faster they automate the economy, the faster they can cure cancer and produce limitless prosperity. So will unaligned superintelligences: the faster they automate the economy, the sooner they can build their own industrial base and kill all humans without the lights going out. So they plot a tech tree, probably starting with humanoid robot workers, automated bio labs, 3D printers, and other techs that speed up future automation. Then they ask for money, government support, and factories (talent, obviously, is no issue for them).

11:41

We predict they get the money: if you get an opportunity to invest in a superintelligence during the singularity, obviously you say yes. We predict they get the government support: if China is also approaching superintelligence, and the difference between full superintelligent automation and half-hearted superintelligent automation is a GDP growth rate of 25% versus 50% per year, then delaying more than a year or so is slow-motion national suicide.
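The "slow-motion national suicide" point is just compound growth. A quick illustration (the 25% and 50% rates are the text's figures; the year counts are mine): the gap between the two growth paths widens every year, so a late or half-hearted start is never recovered.

```python
# Compound-growth comparison behind the "national suicide" framing.
# Annual GDP multipliers use the 25% vs. 50% figures from the text.

half_hearted = 1.25  # GDP multiplier per year with half-hearted automation
full = 1.50          # GDP multiplier per year with full automation

for years in (1, 5, 10):
    ratio = (full / half_hearted) ** years
    print(f"after {years:2d} years, the full-automation economy is "
          f"{ratio:.1f}x the half-hearted one")
# after  1 years ... 1.2x
# after  5 years ... 2.5x
# after 10 years ... 6.2x
```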

12:08

But also, persuasion and politics are trainable skills; if superintelligences are better than humans at all trainable skills, we expect them to generally get what they want. And we predict they get the factories. This is maybe overdetermined. Did you know that right now, in 2025, OpenAI's market cap is higher than that of all non-Tesla US car companies combined? If they wanted to buy out Ford, they could do it tomorrow. So maybe the three-year pivot to a war footing is the right historical analogy here. Then AI 2027 goes further and says that if 1940s bureaucrats could do it in three years, then superintelligence can do it in one. Though, like I said, I have to admit I'm skeptical. Most of this, plus the final calculations about exactly how many robots this implies getting manufactured when, is well covered in Ben Todd's "How quickly could robots scale up?"

13:00

Special economic zones. In the context of the software-only singularity, where you start with some superintelligences on one side and the entire rest of the economy on the other, this looks like a natural solution: give them some land (doesn't matter if it's a random desert; they're AIs) and let them tile it with factories without worrying about the normal human regulations. You can't do everything in SEZs. At first, you might be limited to existing car factories, probably in Detroit or somewhere, staffed by human laborers in a normal city. But SEZs are a good next-stage solution, and you might be able to make them work for some of the first stage, for example through small SEZs covering a few blocks in Detroit.

13:45

Superpersuasion. We had some debates on whether to include this one. People get really worked up about it, and it doesn't dramatically affect things either way. But we ended up weakly predicting it's possible. Persuasion, or charisma, or whatever you want to call it, is a normal, non-magical human skill. Some people are better at it than others, probably because of some sort of superior data efficiency: they can learn good social skills faster, that is, through fewer social interactions, than others. A superintelligent AI could also do this. If you expect them to be inventing nanobots and starships yet unable to navigate social situations, you've watched too much 1960s sci-fi. Don't imagine them trying to do this with a clunky humanoid robot; imagine them doing it with a video-conferencing avatar of the most attractive person you've ever seen.

14:37

If persuasion only tops out at the level of top humans, this is still impressive. The top humans are very persuasive. They range from charismatic charmers like Bill Clinton, to strategic masterminds like Dominic Cummings, to Machiavellian statesmen like Otto von Bismarck, to inspirational yet culty gurus like Steve Jobs, to beloved celebrities like Taylor Swift. At the very least, a superintelligence can combine all of these skills.

15:00

But why should we expect persuasion to top out at the level of top humans? Most people aren't as charismatic as Bill Clinton; Bill is a freakish and singular talent at the far end of a bell curve. But the very shape of the bell curve suggests that its far end is determined by population size (for example, there are enough humans that we should expect one +6-standard-deviation runner, and that's Usain Bolt) rather than by natural laws of the universe. If the cosmic speed limit were 15 miles per hour, you would expect many athletic humans to be bunched up together at 15 miles per hour, with nobody standing out; for the far end of the bell curve to match the cosmic limit would be a crazy coincidence. And indeed, the cosmic speed limit is about 10 million times Usain Bolt's personal best. By the same argument, we shouldn't expect the cosmic charisma limit to sit right at the +6-standard-deviation level with Clinton.
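The population-size point can be made precise. A small sketch (mine; the +6 SD figure is the text's, the formula is the standard expected-extreme approximation): the largest of N draws from a normal distribution sits near the (1 − 1/N) quantile, so with roughly 8 billion humans the most extreme individual lands around +6 SD, wherever the physical limit happens to be.

```python
# Expected extreme of N draws from a normal distribution: roughly the
# (1 - 1/N) quantile. With N ~ 8 billion, that is about +6 standard
# deviations, matching the Usain Bolt example in the text.
from scipy.stats import norm

for n in (1e3, 1e6, 8e9):
    z = norm.ppf(1 - 1 / n)  # approximate max of n standard-normal draws
    print(f"N = {n:.0e}: most extreme individual ~ +{z:.1f} SD")
# N = 1e+03: ~ +3.1 SD
# N = 1e+06: ~ +4.8 SD
# N = 8e+09: ~ +6.3 SD
```

The observed human maximum tracks how many humans there are, not where physics caps the trait, which is the crux of the argument.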

16:03

We worry that people will round this off to something impossible, a godlike ability to hypnotize everyone into doing their will instantly, and then dismiss it. Whereas it might just be another step or two or three along the line from you, to the coolest kid in your high school friend group, to a really good salesman, to Steve Jobs. Or, if you wouldn't have fallen for Steve Jobs, someone you would have fallen for: your favorite influencer, your favorite writer. "Oh, but I only like my favorite writer because she's so smart and thinks so clearly." Don't worry: if you're not fooled by the slick-hair-and-white-teeth kind of charisma, there'll be something for you too. This skill speeds things up because AIs can use it even before automation, including to build support for their preferred automation plans. But the scenario is overdetermined enough that it doesn't change too much if you assume it's impossible.

16:57

Which are the key superintelligent technologies? If AIs invent lie detectors for humans, international negotiations get much more interesting. What would you be willing to agree to if you knew for sure that your rivals were telling the truth? Or are there ways to fool even the perfect lie detector? (The deep state lies to the president about the real plan, then sends the president to get tested. Solve for the equilibrium.) If AIs invent lie detectors for AIs, then alignment becomes much easier. But do you trust the AIs who invented and tested the lie detector when they tell you it works?

17:32

If AIs can forecast with superhuman precision (don't think God, think moderately beyond the best existing superforecasters), maybe we can more confidently navigate difficult decisions. We can ask them questions like "does this arms race end anywhere good?" or "what happens if we strike a bargain with China using those lie detectors?", and they can give good advice. Maybe, if ordinary people have these superduper forecasters, and the forecasters all predict impending technofeudalism and all agree on which strategies best prevent it, then civil society can do better than the usual scattered, ineffectual protests. Maybe we ask the AIs how to create meaning in a world where work has become unnecessary and human artistic effort irrelevant. Hopefully it doesn't answer "lol, you can't."

18:19

If AIs are superpersuasive, as above, then whoever controls the AI has unprecedented political power. If technofeudalists or autocrats control it, guess we all love Big Brother now? If nobody controls it (maybe somehow the AI is still open source), then we get... what? Something like the current internet on steroids, where sinister influences build cabals of people brainwashed to their own point of view?

18:45

What about AI negotiation? Might AIs be smart enough to take all positive-sum trades with each other? Might they benefit from new enforcement mechanisms, like agreements to mutually edit their weights to want to comply with the treaty? Could you use this to end war? Could you accidentally overdo it and end up locked in some regime you didn't intend?

19:06

What about human intelligence enhancement? We may never be as smart as the AIs, but a world of IQ 300 humans advised by superintelligences might look different from one of IQ 100 humans advised by superintelligences. Would we be better able to determine what questions to ask them? Would society be more equal, because cognitive inequality is eliminated, or less equal, because only the rich enhance themselves? What about conscientiousness enhancement, agency enhancement, etc.?

19:36

AI 2027 is pretty vague on social changes after the singularity, partly because it depends a lot on which combination of these technologies you get and when you get them.

19:47

Audio note: as mentioned in the previous post, I have recorded a full audio version of AI 2027, which is about four hours long; that's the previous post in the podcast. There's also a two-hour condensed version, which is available on the website that Scott links to for AI 2027.

20:05

This is an audio version of Astral Codex Ten, Scott Alexander's Substack. If you like it, you can subscribe at astralcodexten.substack.com. In addition, if you like my work creating this audio version, you can support it at patreon.com/sscpodcast. To reference this, please link to the original. To contact me, please use astralcodexpodcast at protonmail.com. Thank you for listening, and I'll speak to you next time.
