How To Build The Future: Sam Altman

Released Monday, 18th November 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:00

We said from the very beginning we were

0:02

going to go after AGI at a time

0:04

when in the field you weren't allowed to

0:06

say that because that just seemed impossibly crazy.

0:09

I remember a rash of criticism

0:11

for you guys at that moment. We really wanted

0:13

to push on that and

0:16

we were far less resourced than DeepMind and others

0:18

and so we said okay they're going to try

0:20

a lot of things and we've just got to

0:22

pick one and really concentrate and that's how we

0:24

can win here. Most of the world

0:26

still does not understand the value of like a

0:28

fairly extreme level of conviction on one bet. That's

0:31

why I'm so excited for startups right now is

0:33

because the world is still sleeping on all of this

0:35

to such an astonishing degree. We

0:43

have a real treat for you today. Sam

0:46

Altman, thanks for joining us. Thanks Gary.

0:48

This is actually a reboot of your series, How

0:50

to Build the Future and so welcome back to

0:53

the series that you started. That was like eight

0:55

years ago I was trying to think about that,

0:57

something like that. That's wild. Very good. That's

0:59

right. Let's talk about your newest

1:02

essay on the age of intelligence. Is

1:05

this the best time ever to be

1:07

starting a technology company? Let's

1:10

at least say it's the best time yet. Hopefully there

1:12

will be even better times in the future. I sort

1:14

of think with each successive major technological

1:16

revolution you've been able to do more than

1:18

you could before and

1:20

I would expect the companies to be

1:22

more amazing and impactful and everything else.

1:25

So yeah I think it's the best

1:27

time yet. Big companies have the edge

1:29

when things are moving slowly and

1:31

not that dynamic and then

1:33

when something like this or mobile or

1:35

the internet or semiconductor revolution happens or

1:37

probably like back in the days of

1:40

the industrial revolution that was when

1:42

upstarts have their edge. So

1:44

yeah this is like and it's been a while

1:46

since we've had one of these so this is like

1:48

pretty exciting. In the essay you actually say a

1:50

really big thing which is ASI,

1:52

super intelligence, is

1:55

actually thousands of days

1:57

away. Maybe. I mean that's our

1:59

hope I guess. I

8:00

always felt a little bit weird about that. And

8:02

then I remember one of the things I thought was so

8:04

great about YC, and still something I care so much

8:06

about YC for, is it

8:08

was like a collection of the weird people who were just like,

8:10

I'm just going to do my thing. The

8:13

part of this that does resonate as an accurate

8:16

self-identity thing is I do think you

8:18

can just do stuff or try

8:20

stuff a surprising amount of the time. And

8:24

I think more of that is a good thing. And

8:26

then I think one of the things that both

8:28

of us found at YC was a bunch of

8:30

people who all believed that you could just do

8:33

stuff. For a long time when I

8:35

was trying to figure out what made YC so special,

8:37

I thought that it was like, okay,

8:39

you have this very amazing

8:43

person telling you, you

8:46

can do stuff, I believe in you. And

8:48

as a young founder, that felt so special

8:51

and inspiring, and of course it is. But

8:53

the thing that I didn't understand until much later was

8:55

it was the peer group of other people doing that.

8:59

And one of the biggest

9:01

pieces of advice I would give to young

9:03

people now is finding that peer group as

9:05

early as you can was so important to

9:07

me. And

9:10

I didn't realize it was something that mattered. I kind of

9:12

thought, ah, I'll

9:14

figure it out on my own. But man,

9:17

being around like inspiring peers was

9:20

so, so valuable. What's funny is both of us

9:22

did spend time at Stanford. I actually did graduate,

9:24

which is, I probably shouldn't have

9:26

done that, but I did. You

9:29

pursued the path of far greater

9:32

return by dropping out. But

9:34

that was a community that purportedly

9:37

had a lot of these characteristics,

9:39

but I was still beyond surprised

9:41

at how much more potent it

9:43

was with a room full of founders. I

9:45

was just going to say the same thing. I liked Stanford

9:47

a lot, but

9:51

I did not feel surrounded by people

9:54

that made me want to be better

9:56

and more ambitious and whatever else.

9:58

And to the degree I did, the thing you

10:00

were competing with your peers on was like, who

10:02

was going to get the internship at which investment

10:04

bank? Which I'm embarrassed to say,

10:06

I fell into that trap. This is like how powerful peer

10:09

groups are. It's

10:11

a very easy decision to not go

10:13

back to school after seeing what the

10:15

YC '05 batch was like. There's

10:17

a powerful quote by Carl Jung that

10:19

I really love. The

10:21

world will come and ask you

10:24

who you are and if you don't

10:26

know, it will tell you. It sounds

10:28

like being very intentional about who you want

10:30

to be and who you want to be

10:32

around as early as possible is very important.

10:35

Yeah, this was definitely one of my

10:37

takeaways, at least for myself, is no

10:40

one is immune to peer pressure and so all you

10:42

can do is pick good peers. Yeah. Obviously,

10:44

you went on to create

10:46

Loopt, sell that, go to

10:48

Green Dot and then we ended up getting to

10:50

work together at YC. Talk to me about the

10:53

early days of YC Research. One of the really

10:55

cool things that you brought

10:57

to YC was this

10:59

experimentation. I remember

11:01

you coming back to partner rooms and talking about

11:04

some of the rooms that you were getting to

11:06

sit in with the Larrys and Sergeys of the world

11:08

and that AI was at

11:10

the tip of

11:12

everyone's tongue because it felt so

11:14

close and yet that was 10 years ago. The thing I always thought

11:23

would be the coolest retirement job was to get to

11:25

run a research lab. It

11:28

was not specific to AI at

11:30

that time. When we started talking about YC

11:33

Research, well, not only was it going to,

11:35

it did end up funding a bunch of different

11:37

efforts. And I wish I

11:39

could tell the story that it was obvious AI

11:41

was going to work and be the thing, but we

11:43

tried a lot of bad things too around

11:45

that time. I read

11:48

a few books on

11:50

the history of Xerox PARC and

11:52

Bell Labs and stuff and I think there were a lot of people, it

11:54

was in the air of Silicon Valley at the time, that

11:56

we need to have good research labs again. And I

11:59

just... I thought it would be so cool to do.

12:01

And it was sort of similar to what YC

12:04

does in that you're gonna allocate capital to smart

12:06

people and sometimes it's gonna work and sometimes

12:08

it's not going to. And

12:10

I just

12:12

wanted to try it. AI for sure

12:14

was having a mini moment. This

12:17

was like kind of late 2014, 2015, early 2016, was

12:21

like the super intelligence

12:23

discussion, like the book Superintelligence

12:25

was happening. Yeah,

12:27

DeepMind had a few

12:29

like impressive results but a little bit

12:31

of a different direction. You know, I

12:33

had been an AI nerd forever. So I was like, oh,

12:36

it'd be so cool to try to do something but it

12:38

was very hard to say what to do. Was ImageNet out

12:40

yet? ImageNet was out. Yeah. Yeah. For

12:42

a while at that point. So you could tell if it was

12:44

a hot dog or not. You could sometimes.

12:46

Yeah, that was getting there, yeah. You

12:49

know, how did you identify the initial people

12:51

you wanted involved in, you know, YC

12:54

Research and OpenAI? Greg

12:57

Brockman was early. In retrospect, it feels like this

12:59

movie montage and there were like all of these,

13:01

like, you know, at the beginning of like the

13:03

bank heist movie when you're like driving around to find

13:05

the people and whatever. And

13:07

they're like, you son of a bitch, I'm in. Right,

13:10

like Ilya, I like heard

13:13

he was really smart. And then I watched

13:15

some video of his and he's also, now

13:17

he's extremely smart, like true, true, genuine, genius

13:19

and visionary but also he has this incredible

13:22

presence. And so I watched this video of

13:24

his on YouTube or something. I was like,

13:26

I gotta meet that guy. And I emailed him, he didn't respond. So

13:28

I just like went to some conference he was

13:30

speaking at and we met up and then after that we

13:32

started talking a bunch. And

13:34

then like Greg, I had known a little

13:36

bit from the early Stripe days. What was

13:38

that conversation like though? It's like, I really

13:40

like your ideas about AI and

13:42

I wanna start a lab.

13:44

Yes, and one of the things that

13:47

worked really well in retrospect was

13:50

we said from the very beginning we were gonna go

13:52

after AGI at a time when

13:54

in the field you weren't allowed to say that

13:57

because that just seemed impossibly

14:00

crazy and borderline

14:02

irresponsible to talk about. So that got

14:04

his attention immediately. It got all of

14:06

the good young people's attention and the

14:09

derision, whatever that word is, of the mediocre old people.

14:12

And I felt like somehow that was a really good

14:14

sign and really powerful. And we were like this

14:17

ragtag group of people. I

14:19

mean, I was the oldest by a decent amount. I was like, I

14:21

guess I was 30 then. And

14:24

so you had these people who were like,

14:26

those are these irresponsible young kids who don't

14:28

know anything about anything. And they're like saying

14:30

these ridiculous things. And

14:33

the people who that was really appealing to, I guess,

14:35

are the same kind of people who would have said,

14:37

you know, "I'm a sophomore and I'm coming," or whatever. And

14:39

they were like, let's just do this thing. Let's take a run

14:41

at it. And

14:44

so we kind of went around and met people one

14:46

by one and then in different configurations of groups. And

14:49

it kind of came together over the course of,

14:52

in fits and starts, but over the course of like nine

14:54

months. And then it started

14:57

happening. And then it started happening. And

14:59

one of my favorite like memories

15:01

of all of OpenAI was Ilya

15:05

had some reason, Google or something,

15:07

that we couldn't start right away. We announced in December of

15:10

2015, but we couldn't start until January of 2016. So

15:13

like January 3rd, something like that, of 2016,

15:16

very early in the month, people come back from

15:18

the holidays and we go to Greg's

15:20

apartment. Maybe there's 10

15:22

of us, something like that. And

15:25

we sit around and it felt like

15:27

we had done this monumental thing to get it started. And

15:30

everyone's like, so what do we do now? What

15:34

a great moment. It reminded me of when

15:36

startup founders work really hard to like raise

15:38

a round and they think like, oh, I

15:40

accomplished this. We did it. We did

15:42

it. And then you sit down and say

15:44

like, fuck, we gotta like figure out what we're going to do. It's

15:47

not time for popping champagne. That was actually the starting

15:49

gun. And now we got to run. And

15:52

you have no idea how hard the race is going to be. It

15:54

took us a long time to figure out what we're going to do.

15:58

But one of the things that I'm... really

16:00

amazingly impressed by, Ilya in

16:02

particular but really all of the early people, is

16:05

although it took a lot of twists and turns

16:07

to get here, the

16:10

big picture of the original ideas was

16:13

just so incredibly right. And

16:15

so they were like up on like one of

16:17

those flip charts or whiteboards, I don't remember

16:19

which, in Greg's apartment.

16:22

then we went off and, you know,

16:24

did some other things that worked or didn't work or

16:26

whatever. Some of them did and eventually now we have

16:28

this like system. And

16:32

it feels very crazy

16:34

and very improbable looking backwards, that

16:36

we went from there to here with so

16:38

many detours on the way, but got where

16:40

we were pointing. Was deep learning even on

16:43

that flip chart initially? Yeah, I

16:45

mean more specifically than that, like do

16:47

a big unsupervised model and then solve RL was on

16:49

that flip chart. One of the flip charts

16:52

from a very, this is before Greg's apartment, but from

16:54

a very early off-site. I think

16:56

this is right. I believe there were three goals

16:58

for the effort at the time. It

17:01

was like, figure out how to do unsupervised learning,

17:03

solve RL and never get more than 120 people.

17:07

Missed on the third one. That's right. The, like,

17:10

predictive direction of the first two is pretty good.

17:13

So deep learning, then

17:16

the second big one sounded like

17:18

scaling, like the idea that you

17:21

could scale. That was another heretical

17:23

idea that people actually found even

17:25

offensive. I remember a

17:28

rash of criticism for you guys at that

17:30

moment. When we

17:32

started, yeah, the core beliefs were deep

17:34

learning works and it gets better with scale.

17:38

And I think those were both somewhat

17:40

heretical beliefs. At the time, we didn't know how

17:42

predictably better it got with scale. That didn't come

17:44

until a few years later. It was a hunch

17:46

first and then you got the data to show

17:49

how predictable it was. But people already knew that

17:51

if you made these neural networks bigger, they got

17:53

better. We were sure of that before

17:57

we started. And...
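
The predictability Sam refers to became known as neural scaling laws: loss falls roughly as a power law in training compute, so fits from small runs extrapolate to much larger ones. A minimal sketch of that kind of fit, with invented numbers purely for illustration:

```python
import numpy as np

# Hypothetical measurements from small training runs (all numbers invented).
compute = np.array([1e17, 1e18, 1e19, 1e20])   # training FLOPs
loss = np.array([3.9, 3.3, 2.8, 2.4])          # final evaluation loss

# A pure power law L = a * C^(-b) is a straight line in log-log space,
# so an ordinary least-squares fit recovers the exponent.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

# Extrapolate two orders of magnitude past the largest measured run.
predicted = a * (1e22) ** (-b)
print(f"fitted exponent b = {b:.3f}; predicted loss at 1e22 FLOPs: {predicted:.2f}")
```

That straight-line-in-log-space behavior is what turned "bigger is better" from a hunch into something you could plan around.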

42:00

You mentioned earlier that thing about discovering all

42:02

of physics. I was wanting to

42:04

be a physicist, wasn't smart enough to be a good

42:07

one, had to contribute in this other way. But the

42:09

fact that somebody else, I really believe is now going

42:11

to go solve all the physics with this stuff. I'm

42:14

so excited to be alive for that. Let's

42:16

get to level four. I'm so happy for whoever that person

42:19

is. Yeah. Do you want

42:21

to talk about level three, four, and five briefly? Yeah.

42:24

So we realized that AGI had become

42:26

this badly overloaded word and people

42:29

meant all kinds of different things. We tried to

42:31

just say, okay, here's our best guess roughly of

42:33

the order of things. You have these level one

42:35

systems which are these chatbots. There'd

42:37

be level two that would come which would

42:39

be these reasoners. We think we got there

42:41

earlier this year with

42:43

the o1 release. Three

42:46

is agents, the ability to

42:48

go off and do these longer-term tasks. Maybe

42:51

like multiple interactions with an

42:53

environment, asking people for help when they

42:55

need it, working together, all of that.

42:59

I think we're going to get there faster than

43:01

people expect. Four is

43:03

innovators, that's like a scientist and

43:05

that's the ability to go explore, like,

43:07

a not

43:09

well-understood phenomenon over

43:13

a long period of time and understand it,

43:15

just kind of go figure it out. Then

43:19

level five, this is the

43:21

slightly amorphous like, do

43:23

that but at the scale of the whole company or a whole

43:26

organization or whatever. That's

43:29

going to be a pretty powerful thing. Yeah.
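
For reference, here are the five levels as laid out in this exchange, sketched as a Python enum; the names are paraphrased from the conversation, not an official API:

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """OpenAI's five-level framing, as paraphrased in this conversation."""
    CHATBOT = 1       # conversational systems
    REASONER = 2      # reasoning systems; reached with the o1 release
    AGENT = 3         # longer-term tasks, repeated interactions with an environment
    INNOVATOR = 4     # explores poorly understood phenomena and figures them out
    ORGANIZATION = 5  # innovator-level work at the scale of a whole company

# Example: where the conversation places the frontier at recording time.
print(AGILevel.REASONER.name, AGILevel.REASONER.value)  # REASONER 2
```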

43:31

It feels kind of fractal, right? Like even the

43:33

things you had to do to get to

43:35

two, sort of rhyme with level five in

43:38

that you have multiple agents that then self-correct, that

43:40

work together. I mean, that kind of sounds like

43:42

an organization to me just at like a very

43:44

micro level. Do you think that we'll have, I

43:46

mean, you famously talked about it. I think Jake

43:49

talks about it. It's like you

43:51

will have companies that make billions of

43:53

dollars per year and

43:55

have like less than a hundred

43:57

employees, maybe 50, maybe 20 employees.
