Reid riffs on Tobi’s memo, AI and play, and the tweet that cost trillions

Released Wednesday, 16th April 2025

Episode Transcript

0:02
Reid Hoffman: I'm Reid Hoffman.

0:04
Aria Finger: And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. With support from Stripe, we typically ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.
0:29
Aria Finger: All right, Reid, so lovely to be here with you. So recently, Tobi Lütke, the CEO of Shopify, put out a memo that was covered by everyone, because it had to do with internal employees at Shopify and what they were expected to do with AI. It told them that if you're going to request more resources for your team, you'd better check whether AI could do the job better or faster, in which case you actually don't need that additional headcount. It also said, like, everyone here should be expected to be using AI every day. And he said, I'm the CEO, I'm no different. We want to grow by 20, 30, 40% a year; every employee needs to grow as well. I think some people were shocked by this memo. Other people found it reasonable. What did you think about the contents of the memo, and also about Tobi sort of putting out this bold statement for the industry?
1:22
Reid Hoffman: You know, I also found Tobi's memo to be exactly right. What Tobi does, thinking about it as the classic technologist, because he's obviously an engineer, is ask how we use tools, you know, AI as amplification intelligence, how it is that we get superagency through doing this. And his memo, I think, is exactly the kind of thing that everybody, not just technology companies, should be doing. Every single CEO of anything from a, you know, five- to seven-person company to a tens-of-thousands-of-people company should look at that and say, what's my version of how I should do that and how I should integrate it? And Tobi obviously has given it enough thought to kind of say, look, here are some, you know, key checkpoints that work within companies, which is: if you're going to ask for more resources, make sure you're asking for them in the context of, here is how I'm already using AI, and here are the reasons I need more resources given how I'm using AI, whether that's where AI lacks or the AI opportunities from doing it. So one of the things that I've been telling, you know, kind of my portfolio companies is to actually have kind of weekly or monthly check-ins where everyone has to bring a "here is the new thing I've learned about how to use AI to help me do my job, to help us perform better in our mission as a company." Because the answer is, if you actually haven't found something that was useful to you, useful to your group, useful to your company, you haven't tried hard enough. I think Tobi's memo is the kind of thing that, in fact, you know, CEOs and all group leaders should be looking at and saying, great, how do I build on that? Thank you for open-sourcing the, you know, kind of management technique. What are the things that I should do specifically for our group, for our company, for our mission, for our culture? What is our version of that? And then start iterating in the same way.
3:38
Aria Finger: I mean, I have to admit, last week on LinkedIn I saw a marketing agency, and they posted, you know, we promise our clients that you will never get an image that was started in an image generator using AI; we promise you you're never going to get a tagline from us where we used ChatGPT to create it. And I literally had to look at the posting date because I thought it was an April Fools' joke. And it wasn't. And like, I get the nervousness and being scared about your job and about the future, but I just sort of couldn't imagine that this marketing agency was essentially doing the exact opposite of Shopify and sort of banning AI in their workplace. I'm sure it befuddles you as much as it befuddles me.
4:20
Reid Hoffman: Well, I mean, I think generally speaking, that's similar to the idiocy in the education space of saying our students shouldn't use ChatGPT. Because the whole answer is, you're preparing them for the future, you're preparing them for being citizens, for being workers, for being, you know, people who are navigating life. And here is this fundamental tool. It's kind of like saying, hey, none of our people can use anything that uses electricity, and that's how they learn. They have to use pencils and paper and no electricity whatsoever in anything. You're like, well, that would be idiotic, and it's closely similar with ChatGPT. And so for that marketing agency, the question is really, when is it going to have to shift? Or it's probably going to die or become a very esoteric boutique.
5:07
Aria Finger: Right. Absolutely. If you want to be the most boutique agency, perhaps that's the way to go. So another concern people have with AI, though, is misinformation, disinformation, all of this synthetic media being created. And actually last week, and this wasn't created by AI, there was a tweet that moved the market, that caused $8 trillion worth of market volatility, because someone tweeted that the tariffs were off when they in fact were not. And so, if a single tweet can move the market by $8 trillion, like, what does this mean for the future, when disinformation and misinformation are increasing, and perhaps with algorithmic trading and AI able to do this in sort of greater quantities and at greater speeds? Like, how do we protect against that for the future?
5:51
Reid Hoffman: There's a combination of a free market response, which I think is partially correct, and a societal response, which is also partially correct. And so that's the balance that makes this challenging. The free market response simply says, well, if people who are doing trading are going to be idiots and not track false posts and so forth, they're going to lose money and eventually they will be disempowered. And so what you principally need to do is just make sure that there are validated sources of information, you know, that are kind of the anchors, and then to increase that validation, you know, accuracy, availability, and then allow the market to sort it out. And that's a partial answer. And my principal, you know, thought there is that we should not be trying to restrict technology as much as we should be trying to shape technology. Because the question isn't, let's not have algorithmic trading; like, okay, that's kind of foolish. It's, let's have algorithmic trading work in the following way, generating the following rewards, making sure it's involving the following kinds of data, and only deployable by entities that have a method by which they participate in the market in a way that is healthy, that doesn't create, you know, crazy volatility swings that damage society. It's a little bit similar to saying, hey, you know, car manufacturers don't want to manufacture seat belts, drivers don't want to wear seat belts, but actually, in fact, because the cost to society and the health care system and everything else is so high, it's not enough to say, hey, it's a free market, they're going to take their risk and they're going to die. It's like, no, no, no, actually, in fact, there are so many injuries and so many costs here, and the cost of enforcing you to wear a seat belt is very low. That kind of thing, for making the overall system work, is I think an ongoing and kind of thoughtful thing that banks and regulators and intellectuals and economists should think about: what are those minimal kinds of rules, or also ways of shaping technology or technology additions, that keep the costs down, avoid the costs of an overly centralized system, and keep the benefits of all the free market and broad network, you know, working, while navigating the fact that we kind of live in a more volatile space now.
8:26
On this podcast, we like to focus on what's possible with AI, because we know it's the key to the next era of growth. A truth well understood by Stripe, makers of Stripe Billing, the go-to monetization solution for AI companies. Stripe knows that when launching a new product, your revenue model can be just as important as the product itself. In fact, every single one of the Forbes Top 50 AI companies that has a product on the market today uses Stripe to monetize it. See what Stripe can do for your business at stripe.com.
9:06
Aria Finger: On a lighter note, if there are any parents out there who are navigating this, I just read the book Mr. Lemoncello's Library with my nine- and seven-year-olds, and a main plot point is a fake Wikipedia post that leads to ruining someone's reputation, and the kids who, like, don't believe it. So anyway, try that out if you're looking to teach your kids about misinformation on the internet. But actually, moving on to another thing that people think of as childlike, play: one of the fun things about our conversation with Demis Hassabis last week was that we talked about games. And it was so clear that Demis grew up playing chess, and games were so important to him, both in terms of his scientific research but also in the progression of AI, whether it was AlphaGo or the famous IBM Watson chess competition. And so when you think about the future, as AI is more enmeshed in our daily lives, like, will that give humans the opportunity to play more? Are we going to be playing with AI? Are we going to be interacting with it solo, with teams, as a game? Like, how do you see that connection between games and sort of our AI future?
10:08
Reid Hoffman: There's a fun book, which, you know, Demis also knows, Homo Ludens, which is like, we're not just sapiens, we're game players. Obviously, you know, I have this version of Homo techne, because I think part of games is technologies, and the technologies that enable different kinds of gameplay are part of it, but games are a way we think. And as you know, I tend to approach, like, most of my strategic thinking through the lens of games. So it's like, with a startup, what's your theory of the game? With creating a book, Superagency, what's your theory of the game? Because game playing brings tactics and strategies and transformation, like large language transformers, together, and also has a notion of increasing learning and competence: how well are you playing the game, what are the conceptual tools you're bringing to it, et cetera. So games are a way that we operate across, you know, kind of, call it intelligent experience. It's almost like, is species X intelligent? How do they play games is actually, in fact, you know, kind of directly correlated to that. It's one of the reasons why we know that other kinds of mammals and other things have intelligence: because we see dolphins playing games, we see chimps playing games, we play games with our dogs and we play games with our cats, and kind of that initiating of gameplay and everything else is part of how that tends to operate. We don't just play games solo. We don't just play solitaire. We don't just play games one-on-one. We play games as teams, you know, sports games and all of this. And that's part of how you model what companies do. And when it gets to this kind of superagency future of saying, well, how is it that we're deploying, it's like, with agents: I should deploy with these tools. And by the way, we as teams should deploy with these tools. We as companies should deploy with these tools. We as individual scientists and as groups of scientists should deploy with these tools. And that's kind of the pattern that we're on, and the model of games is a good way for us to think about it. But it's also a good way for thinking about, like, how do we construct these devices and also how do we interact with them? Like, the very first genius moment that, you know, Demis and Shane and Mustafa brought to scalable AI was realizing, here is a way you can apply scalable computer learning systems to creating amazing cognitive capabilities, as opposed to, like, we program the AI: the AI learns, and it learns at scale, because you can use self-play as a way of doing it. And seeing that genius moment by them was actually part of what got me back into AI from my, you know, kind of undergraduate days, where I had concluded that the mindsets around programming AI would actually not work. You know, I hadn't gotten to, well, what are the scalable compute learnable systems? Because back then, by the way, like, a single computer was super expensive, let alone, you know, how do you create a server farm of a hundred thousand kind of working in concert and all the rest. And by the way, the computers back then were less powerful than the smartphone that's in your pocket.
13:54
Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.
